Trusting Digital Content in the Age of AI: How Might We Design Modern Information Ecosystems for Authenticity? | Stanford HAI
Workshop


Status
Past
Date
Tuesday, October 22, 2024, 9:00 AM - 5:30 PM PDT
Location
Cecil H. Green Library
Topics
Privacy, Safety, Security

Event Contact
HAI Events Team
stanford-hai@stanford.edu

Related Events

Eyck Freymann | AI and Strategic Stability: A Framework for U.S.–China Technology Competition
Seminar | May 27, 2026, 12:00 PM - 1:15 PM

Strategic stability exists when neither side thinks it can improve its strategic outcome by striking first.

Joel Becker | Reconciling Impressive AI Benchmark Performance with Limited Developer Productivity Impacts
Event | Mar 16, 2026, 12:00 PM - 1:00 PM

AI coding agents now complete multi-hour coding benchmarks with roughly 50% reliability, yet a randomized trial found experienced open-source developers took about 19% longer when allowed frontier AI tools than when tools were disallowed...

Dan Iancu & Antonio Skillicorn | Interpretable Machine Learning and Mixed Datasets for Predicting Child Labor in Ghana’s Cocoa Sector
Seminar | Mar 18, 2026, 12:00 PM - 1:15 PM

Child labor remains prevalent in Ghana’s cocoa sector and is associated with adverse educational and health outcomes for children.

The internet is at an inflection point. With the growth of mis/disinformation, artificial intelligence and synthetic media, trust in information faces unprecedented threats. At the same time, new technologies – referred to as “Web 3” – present opportunities to protect the integrity of data. Rapid advances in cryptography hold the promise of allowing users to establish the provenance and veracity of information and restore trust in digital content. Can these solutions be applied to investigative journalism, historical archiving, or the admissibility of legal evidence? 

In this workshop we will ask: How might we design information systems for authenticity? We will bring together technologists, journalists, legal experts, and archivists for an interdisciplinary conversation about declining trust in digital content and how we might bolster trust in our information ecosystems. What comes next for technologists and practitioners in journalism, law, and archiving?

Participants will have an opportunity not only to address discipline-specific issues but also to identify crossover opportunities related to AI’s growing societal impact, technical advancements, public perceptions, and geopolitical dynamics. You will come away with a clear understanding of what’s at stake and how each discipline might design for authenticity – separately or collaboratively.

This is an invitation-only event with limited seating. If you are a Stanford affiliate and are interested in attending, please reach out to stanford-hai@stanford.edu.

View Agenda at starlinglab.org

Event Cohosts

This event is co-hosted by the Starling Lab for Data Integrity and the Stanford Institute for Human-Centered AI (HAI).

Event Organizers
Ann Grimes
Social Science Research Scholar; Director / Journalism, Starling Lab for Data Integrity
Patrick Hynes
Senior Manager of Research Communities
Vanessa Parli
Managing Director of Programs and External Engagement
Adam Rose
Chief Operating Officer, Starling Lab for Data Integrity