Workshop

Trusting Digital Content in the Age of AI: How Might We Design Modern Information Ecosystems for Authenticity?

Status: Past
Date: Tuesday, October 22, 2024, 9:00 AM - 5:30 PM PDT
Location: Cecil H. Green Library
Topics: Privacy, Safety, Security


Event Contact
HAI Events Team
stanford-hai@stanford.edu

Related Events

Eyck Freymann | AI and Strategic Stability: A Framework for U.S.–China Technology Competition
Seminar | May 27, 2026, 12:00 PM - 1:15 PM

Strategic stability exists when neither side thinks it can improve its strategic outcome by striking first.

Suproteem Sarkar | AI Agents and Higher-Order Work
Event | Apr 06, 2026, 12:00 PM - 1:00 PM

How do AI agents influence knowledge work? This paper finds that agents shift worker effort from implementation to supervision, which especially benefits verifiable work and expert workers. I use data from the coding platform Cursor to study agents in software production.

Caroline Meinhardt, Thomas Mullaney, Juan N. Pava, and Diyi Yang | How Can AI Support Language Digitization and Digital Inclusion?
Seminar | Apr 15, 2026, 12:00 PM - 1:15 PM

What does digital inclusion look like in the age of AI? Over 6,000 of the world’s 7,000-plus living languages remain digitally disadvantaged.

The internet is at an inflection point. With the growth of mis/disinformation, artificial intelligence and synthetic media, trust in information faces unprecedented threats. At the same time, new technologies – referred to as “Web 3” – present opportunities to protect the integrity of data. Rapid advances in cryptography hold the promise of allowing users to establish the provenance and veracity of information and restore trust in digital content. Can these solutions be applied to investigative journalism, historical archiving, or the admissibility of legal evidence? 

In this workshop we will ask: How might we design information systems for authenticity? We will bring together technologists, journalists, legal experts, and archivists for an interdisciplinary conversation about declining trust in digital content and how we might bolster trust in our information ecosystems. What comes next for technologists and practitioners in journalism, law, and archiving?

Participants will have an opportunity not only to address discipline-specific issues but also to identify crossover opportunities that address AI’s growing societal impact, technical advances, public perceptions, and geopolitical dynamics. They will come away with a clear understanding of what’s at stake and how each discipline might design for authenticity, separately or collaboratively.

This is an invitation-only event with limited seating. If you are a Stanford affiliate and interested in attending, please reach out to stanford-hai@stanford.edu. 

View Agenda at starlinglab.org

Event Cohosts

This event is co-hosted by the Starling Lab for Data Integrity and the Stanford Institute for Human-Centered AI (HAI).

Event Organizers
Ann Grimes
Social Science Research Scholar; Director / Journalism, Starling Lab for Data Integrity
Patrick Hynes
Senior Manager of Research Communities
Vanessa Parli
Managing Director of Programs and External Engagement
Adam Rose
Chief Operating Officer, Starling Lab for Data Integrity