Stanford University
© Stanford University.  Stanford, California 94305.

All Work Published on Sciences (Social, Health, Biological, Physical)

Joshua Salomon
Professor of Health Policy in the Department of Health Policy at Stanford School of Medicine, Senior Fellow in the Freeman Spogli Institute for International Studies, and founding Director of the Prevention Policy Modeling Lab
Person
Machine Learning
Sciences (Social, Health, Biological, Physical)
From Privacy to ‘Glass Box’ AI, Stanford Students Are Targeting Real-World Problems
Nikki Goth Itoi
Feb 27, 2026
News

An Amazon-backed fellowship will support 10 Stanford PhD students whose work explores everything from how we communicate to understanding disease and protecting our data.

Generative AI
Healthcare
Privacy, Safety, Security
Computer Vision
Sciences (Social, Health, Biological, Physical)
Measuring receptivity to misinformation at scale on a social media platform
Christopher K Tokita, Kevin Aslett, William P Godel, Zeve Sanderson, Joshua A Tucker, Jonathan Nagler, Nathaniel Persily, Richard Bonneau
Sep 10, 2024
Research

Measuring the impact of online misinformation is challenging. Traditional measures, such as user views or shares on social media, are incomplete because not everyone who is exposed to misinformation is equally likely to believe it. To address this issue, we developed a method that combines survey data with observational Twitter data to probabilistically estimate the number of users both exposed to and likely to believe a specific news story. As a proof of concept, we applied this method to 139 viral news articles and find that although false news reaches an audience with diverse political views, users who are both exposed and receptive to believing false news tend to have more extreme ideologies. These receptive users are also more likely to encounter misinformation earlier than those who are unlikely to believe it. This mismatch between overall user exposure and receptive user exposure underscores the limitation of relying solely on exposure or interaction data to measure the impact of misinformation, as well as the challenge of implementing effective interventions. To demonstrate how our approach can address this challenge, we then conducted data-driven simulations of common interventions used by social media platforms. We find that these interventions are only modestly effective at reducing exposure among users likely to believe misinformation, and their effectiveness quickly diminishes unless implemented soon after misinformation’s initial spread. Our paper provides a more precise estimate of misinformation’s impact by focusing on the exposure of users likely to believe it, offering insights for effective mitigation strategies on social media.
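The estimation idea in the abstract, combining observed exposure with survey-based belief probabilities, can be sketched as a toy calculation. The ideology bins, probabilities, and user records below are invented for illustration and are not the paper's data, model, or code:

```python
# Toy sketch of the abstract's core idea (not the authors' actual pipeline):
# weight each exposed user by an estimated probability of believing the
# story, so "receptive exposure" counts only users likely to believe it.

# Hypothetical survey-derived P(believe this false story | ideology bin)
belief_prob = {"far_left": 0.30, "center": 0.10, "far_right": 0.45}

# Hypothetical users observed being exposed to the story on the platform
exposed_users = [
    {"id": "u1", "ideology": "far_left"},
    {"id": "u2", "ideology": "center"},
    {"id": "u3", "ideology": "center"},
    {"id": "u4", "ideology": "far_right"},
]

# Raw exposure treats every view the same; receptive exposure discounts
# users who are unlikely to believe the story.
raw_exposure = len(exposed_users)
receptive_exposure = sum(belief_prob[u["ideology"]] for u in exposed_users)
```

With these invented numbers, raw exposure is 4 users, while receptive exposure is only 0.95 expected believers, illustrating the gap between overall exposure and receptive exposure that the paper's interventions analysis turns on.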

Communications, Media
Sciences (Social, Health, Biological, Physical)
Closed
HAI and Wu Tsai Neuro Partnership Grant

Stanford HAI and the Wu Tsai Neurosciences Institute jointly seek proposals that transform our understanding of the human brain using AI and advance the development of intelligent technology.

Response to NSF’s Request for Information on Research Ethics
Quinn Waeiss, Raio Huang, Betsy Arlene Rajala, Michael S. Bernstein, Margaret Levi, David Magnus, Debra Satz
Nov 22, 2024
Response to Request

Stanford scholars respond to a federal RFI related to research ethics, sharing lessons from their experience operating an ethical reflection process for research grants.

Ethics, Equity, Inclusion
Sciences (Social, Health, Biological, Physical)
Justin Sonnenburg
Alex and Susie Algard Endowed Professor
Person
Sciences (Social, Health, Biological, Physical)
Machine Learning