Democracy | Stanford HAI

Democracy

How responsible use and strong regulation of AI can help us strengthen rather than undermine democracy.

Sandy Pentland: AI Should Nurture Communities
Dylan Walsh
Nov 17, 2025
News

In his new book, Shared Wisdom, the scholar outlines the limits of today’s political and social structures, which he sees as caught in historical ruts, and discusses how AI might help rebuild flourishing communities.


The Global AI Vibrancy Tool 2025
Loredana Fattorini, Nestor Maslej, Ray Perrault, Vanessa Parli, John Etchemendy, Yoav Shoham, Katrina Ligett
Deep DiveNov 24, 2025
Research

This methodological paper presents the Global AI Vibrancy Tool, an interactive suite of visualizations designed to facilitate cross-country comparisons of AI vibrancy, using indicators organized into pillars. The tool offers customizable features that enable users to conduct in-depth country-level comparisons and longitudinal analyses of AI-related metrics.


Toward Political Neutrality in AI
Jillian Fisher, Ruth E. Appel, Yulia Tsvetkov, Margaret E. Roberts, Jennifer Pan, Dawn Song, Yejin Choi
Quick ReadSep 10, 2025
Policy Brief

This brief introduces a framework of eight techniques for approximating political neutrality in AI models.


Daniel E. Ho
Person
AI Action Summit in Paris Highlights A Shifting Policy Landscape
Shana Lynch
Feb 27, 2025
News

Stanford HAI joined global leaders to discuss the balance between AI innovation and safety and explore future policy paths.


Embedding Democratic Values into Social Media AIs via Societal Objective Functions
Chenyan Jia, Michelle Lam, Michael S. Bernstein, Minh Chau Mai
Apr 26, 2024
Research

Mounting evidence indicates that the artificial intelligence (AI) systems that rank our social media feeds bear nontrivial responsibility for amplifying partisan animosity: negative thoughts, feelings, and behaviors toward political out-groups. Can we design these AIs to consider democratic values such as mitigating partisan animosity as part of their objective functions? We introduce a method for translating established, vetted social scientific constructs into AI objective functions, which we term societal objective functions, and demonstrate the method with application to the political science construct of anti-democratic attitudes. Traditionally, we have lacked observable outcomes to use to train such models; however, the social sciences have developed survey instruments and qualitative codebooks for these constructs, and their precision facilitates translation into detailed prompts for large language models. We apply this method to create a democratic attitude model that estimates the extent to which a social media post promotes anti-democratic attitudes, and test this democratic attitude model across three studies. In Study 1, we first test the attitudinal and behavioral effectiveness of the intervention among US partisans (N=1,380) by manually annotating (alpha=.895) social media posts with anti-democratic attitude scores and testing several feed ranking conditions based on these scores. Removal (d=.20) and downranking feeds (d=.25) reduced participants' partisan animosity without compromising their experience and engagement. In Study 2, we scale up the manual labels by creating the democratic attitude model, finding strong agreement with manual labels (rho=.75). Finally, in Study 3, we replicate Study 1 using the democratic attitude model instead of manual labels to test its attitudinal and behavioral impact (N=558), and again find that the feed downranking using the societal objective function reduced partisan animosity (d=.25). This method presents a novel strategy to draw on social science theory and methods to mitigate societal harms in social media AIs.


All Work Published on Democracy

Empowering Policymakers: Stanford HAI Trains Public Sector at Every Level
Nikki Goth Itoi
Jan 16, 2025
News

Stanford HAI has built a major portfolio of education opportunities for state, federal, and international policy leaders to strengthen AI governance.

How Persuasive is AI-Generated Propaganda?
Josh A. Goldstein, Jason Chao, Shelby Grossman, Alex Stamos, Michael Tomz
Quick ReadSep 03, 2024
Policy Brief

This brief presents the findings of an experiment that measures how persuasive AI-generated propaganda is compared to foreign propaganda articles written by humans.

Erik Brynjolfsson
Jerry Yang and Akiko Yamazaki Professor | Senior Fellow, Stanford HAI | Senior Fellow, SIEPR | Professor, by courtesy, of Economics; of Operations, Information & Technology; and of Economics at the Stanford Graduate School of Business
Person

Tech Ethics & Policy: Stanford HAI’s AI Fellowship Program Connects Students with Roles in Public Service
Beth Jensen
Oct 30, 2024
News

Three students share their experiences working at the forefront of technology regulation and policy in Washington, D.C.

Technology and Election 2020 Issue Brief Series
Rob Reich, Marietje Schaake
Quick ReadNov 01, 2020
Issue Brief

This issue brief series examines how technology will impact public debate, affect the electoral process, and may even determine the election outcome.

The Tech Coup: A New Book Shows How the Unchecked Power of Companies Is Destabilizing Governance
Katharine Miller
Oct 07, 2024
News

In The Tech Coup: How to Save Democracy from Silicon Valley, Marietje Schaake, a Stanford HAI Policy Fellow, reveals how tech companies are encroaching on governmental roles, posing a threat to the democratic rule of law.
