Workshop

Trusting Digital Content in the Age of AI: How Might We Design Modern Information Ecosystems for Authenticity?

Status: Past
Date: Tuesday, October 22, 2024, 9:00 AM - 5:30 PM PDT
Location: Cecil H. Green Library
Topics: Privacy, Safety, Security

Event Contact
HAI Events Team
stanford-hai@stanford.edu

Related Events

Seminar
Eyck Freymann | AI and Strategic Stability: A Framework for U.S.–China Technology Competition
May 27, 2026, 12:00 PM - 1:15 PM

Strategic stability exists when neither side thinks it can improve its strategic outcome by striking first.

Conference
AI+Science: Accelerating Discovery
May 05, 2026, 8:30 AM - 6:45 PM

AI+Science: Accelerating Discovery is an interdisciplinary conference bringing together researchers across physics, mathematics, chemistry, biology, neuroscience, and more to examine how AI is reshaping scientific discovery.

Seminar
Wolfgang Lehrach | Code World Models for General Game Playing
May 13, 2026, 12:00 PM - 1:15 PM

While Large Language Models (LLMs) show promise in many domains, relying on them for direct policy generation in games often results in illegal moves and poor strategic play.

The internet is at an inflection point. With the growth of mis- and disinformation, artificial intelligence, and synthetic media, trust in information faces unprecedented threats. At the same time, new technologies, often referred to as "Web 3," present opportunities to protect the integrity of data. Rapid advances in cryptography hold the promise of allowing users to establish the provenance and veracity of information and to restore trust in digital content. Can these solutions be applied to investigative journalism, historical archiving, or the admissibility of legal evidence?

In this workshop we will ask: How might we design information systems for authenticity? We will bring together technologists, journalists, legal experts, and archivists for an interdisciplinary conversation about declining trust in digital content and how we might bolster trust in our information ecosystems. What comes next for technologists and practitioners in journalism, law, and archiving?

Participants will have an opportunity not only to address discipline-specific issues but also to identify crossover opportunities that address AI's growing societal impact, technical advances, public perceptions, and geopolitical dynamics. You will come away with a clear understanding of what's at stake and how each discipline might design for authenticity, separately or collaboratively.

This is an invitation-only event with limited seating. If you are a Stanford affiliate interested in attending, please reach out to stanford-hai@stanford.edu.

View Agenda at starlinglab.org

Event Cohosts

This event is co-hosted by the Starling Lab for Data Integrity and the Stanford Institute for Human-Centered AI (HAI).

Event Organizers
Ann Grimes
Social Science Research Scholar; Director / Journalism, Starling Lab for Data Integrity
Patrick Hynes
Senior Manager of Research Communities
Vanessa Parli
Managing Director of Programs and External Engagement
Adam Rose
Chief Operating Officer, Starling Lab for Data Integrity