Trusting Digital Content in the Age of AI: How Might We Design Modern Information Ecosystems for Authenticity? | Stanford HAI
Workshop

Status
Past
Date
Tuesday, October 22, 2024, 9:00 AM - 5:30 PM PDT
Location
Cecil H. Green Library
Topics
Privacy, Safety, Security

In this workshop we will ask: How might we design information systems for authenticity? We will bring together technologists, journalists, legal experts and archivists, for an interdisciplinary conversation about declining trust in digital content and how we might bolster trust in our information ecosystems.

Event Contact
HAI Events Team
stanford-hai@stanford.edu

Related Events

Eyck Freymann | AI and Strategic Stability: A Framework for U.S.–China Technology Competition
Seminar | May 27, 2026, 12:00 PM - 1:15 PM

Strategic stability exists when neither side thinks it can improve its strategic outcome by striking first.

Arvind Narayanan | Adapting to the Transformation of Knowledge Work
Event | May 18, 2026, 12:00 PM - 1:00 PM

The possibility that AI will automate most cognitive labor is worth taking seriously. How should we adapt to this transformation? I start from the perspective, articulated in the essay “AI as normal technology”, that the true bottlenecks lie downstream of capabilities and that AI’s impacts will unfold gradually over decades. If this is true, there are major gaps in our current evidence infrastructure, because it over-emphasizes the capability layer.

Inside the 2026 AI Index Report
Seminar | May 20, 2026, 12:00 PM - 1:15 PM

The AI Index, currently in its ninth year, tracks, collates, distills, and visualizes data relating to artificial intelligence.

The internet is at an inflection point. With the growth of mis- and disinformation, artificial intelligence, and synthetic media, trust in information faces unprecedented threats. At the same time, new technologies – often referred to as “Web3” – present opportunities to protect the integrity of data. Rapid advances in cryptography hold the promise of allowing users to establish the provenance and veracity of information and to restore trust in digital content. Can these solutions be applied to investigative journalism, historical archiving, or the admissibility of legal evidence?
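The cryptographic provenance idea above can be illustrated with a content fingerprint. The following is a minimal sketch, assuming a SHA-256 digest registered at the moment of capture; the `fingerprint` helper and the registry step are illustrative, not the API of any specific system. Real provenance systems layer digital signatures, capture metadata, and tamper-evident logs on top of this basic primitive.

```python
# Minimal sketch of hash-based content integrity (illustrative, not a real API).
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest that uniquely identifies this exact content."""
    return hashlib.sha256(data).hexdigest()

# At capture time, the digest is registered (e.g., in an archive or ledger).
original = b"Photo bytes captured at the scene"
registered = fingerprint(original)

# Later, anyone holding the registered digest can check the content they
# received is byte-for-byte identical to what was captured:
received = b"Photo bytes captured at the scene"
assert fingerprint(received) == registered

# Any alteration, however small, produces a different digest:
tampered = b"Photo bytes captured elsewhere"
assert fingerprint(tampered) != registered
```

The digest alone proves integrity, not origin; establishing who captured the content and when requires signing the digest with a key tied to the capture device or author, which is where the cryptographic advances discussed at the workshop come in.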

In this workshop we will ask: How might we design information systems for authenticity? We will bring together technologists, journalists, legal experts, and archivists for an interdisciplinary conversation about declining trust in digital content and how we might bolster trust in our information ecosystems. What comes next for technologists and practitioners in journalism, law, and archiving?

Participants will have an opportunity not only to address discipline-specific issues but also to identify crossover opportunities that address AI’s growing societal impact, technical advances, public perceptions, and geopolitical dynamics. You will come away with a clear understanding of what’s at stake and how each discipline might design for authenticity – separately or collaboratively.

This is an invitation-only event with limited seating. If you are a Stanford affiliate and are interested in attending, please reach out to stanford-hai@stanford.edu.

View Agenda at starlinglab.org

Event Cohosts

This event is being co-hosted by the Starling Lab for Data Integrity and the Stanford Institute for Human-Centered AI (HAI).

Event Organizers
Ann Grimes
Social Science Research Scholar; Director / Journalism, Starling Lab for Data Integrity
Patrick Hynes
Senior Manager of Research Communities
Vanessa Parli
Managing Director of Programs and External Engagement
Adam Rose
Chief Operating Officer, Starling Lab for Data Integrity