Sarah E. Kreps | The Promise and Perils of AI-Mediated Political Communication | Stanford HAI

Seminar

Status: Past
Date: Wednesday, May 17, 2023, 10:00 AM – 11:00 AM PDT
Location: Hybrid

Smart replies, writing enhancements, and virtual assistants powered by artificial intelligence language technologies are increasingly being integrated into consumer products and everyday experiences.

Event Contact
Madeleine Wright
mwright7@stanford.edu

Related Events

Ashesh Rambachan | From Next-Token Prediction to Automatic Induction of Automata
Event | Apr 13, 2026, 12:00 PM – 1:00 PM

Sequence data is ubiquitous in economics — job histories in labor economics, diagnosis and treatment sequences in health economics, strategic interactions in game theory. Generative sequence models can learn to predict these sequences well, but their complexity makes it hard to extract interpretable economic insights from their predictions.

Caroline Meinhardt, Thomas Mullaney, Juan N. Pava, and Diyi Yang | How Can AI Support Language Digitization and Digital Inclusion?
Seminar | Apr 15, 2026, 12:00 PM – 1:15 PM

What does digital inclusion look like in the age of AI? Over 6,000 of the world’s 7,000-plus living languages remain digitally disadvantaged.

Matt Beane | Precision Proactivity: Measuring Cognitive Load in Real-World AI-Assisted Work
Event | Apr 20, 2026, 12:00 PM – 1:00 PM

Systems like ChatGPT and Claude assist billions through proactive dialogue, offering unsolicited, task-relevant information. Drawing on Cognitive Load Theory, we study how cognitive load shapes performance in AI-assisted knowledge work.

This research explores both the potential and the risks of AI-mediated communication (AI-MC) technologies such as GPT-4 in the political sphere, through a series of experiments designed to assess their possible uses and misuses. The first part of the research evaluates whether human-AI collaboration can increase legislator responsiveness, studying citizen responses to AI-generated tweets and email correspondence. The findings point to the importance of disclosure, transparency, and human-in-the-loop accountability in AI-mediated political communication. The research then turns to the plausibility of misuse by actors seeking to influence the democratic process: it shares results from a field experiment on legislators, highlights the challenge these technologies pose to democratic representation, and suggests techniques elected officials might employ to guard against AI-sourced astroturfing.

Speaker
Sarah E. Kreps
John L. Wetherill Professor in the Department of Government, Adjunct Professor of Law; Director of the Cornell Tech Policy Institute, Cornell University

Watch Event Recording