Sciences (Social, Health, Biological, Physical) | Stanford HAI

All Work Published on Sciences (Social, Health, Biological, Physical)

Stanford Research Teams Receive New Hoffman-Yee Grant Funding for 2025
Nikki Goth Itoi
Dec 09, 2025
News

Five teams will use the funding to advance their work in biology, generative AI and creativity, policing, and more.

Arts, Humanities
Ethics, Equity, Inclusion
Foundation Models
Generative AI
Healthcare
Sciences (Social, Health, Biological, Physical)
Measuring receptivity to misinformation at scale on a social media platform
Christopher K Tokita, Kevin Aslett, William P Godel, Zeve Sanderson, Joshua A Tucker, Jonathan Nagler, Nathaniel Persily, Richard Bonneau
Sep 10, 2024
Research

Measuring the impact of online misinformation is challenging. Traditional measures, such as user views or shares on social media, are incomplete because not everyone who is exposed to misinformation is equally likely to believe it. To address this issue, we developed a method that combines survey data with observational Twitter data to probabilistically estimate the number of users both exposed to and likely to believe a specific news story. As a proof of concept, we applied this method to 139 viral news articles and found that although false news reaches an audience with diverse political views, users who are both exposed and receptive to believing false news tend to have more extreme ideologies. These receptive users are also more likely to encounter misinformation earlier than those who are unlikely to believe it. This mismatch between overall user exposure and receptive user exposure underscores the limitation of relying solely on exposure or interaction data to measure the impact of misinformation, as well as the challenge of implementing effective interventions. To demonstrate how our approach can address this challenge, we then conducted data-driven simulations of common interventions used by social media platforms. We found that these interventions are only modestly effective at reducing exposure among users likely to believe misinformation, and that their effectiveness quickly diminishes unless implemented soon after misinformation’s initial spread. Our paper provides a more precise estimate of misinformation’s impact by focusing on the exposure of users likely to believe it, offering insights for effective mitigation strategies on social media.

Communications, Media
Sciences (Social, Health, Biological, Physical)
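
The estimation approach described in the abstract can be sketched in a few lines. This is a minimal illustration only, assuming a logistic belief model fit on survey respondents' ideology scores; the model choice, the single ideology feature, and every variable name below are assumptions introduced here, not the authors' actual pipeline.

    # Minimal sketch (not the authors' code): fit a belief model on survey
    # data, then score observed exposed users to estimate how many were
    # both exposed to and likely to believe a story.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Assumed survey data: respondents' ideology scores and whether each
    # reported believing the story when shown it.
    survey_ideology = np.array([[-2.0], [-1.1], [-0.3], [0.4], [1.2], [2.3]])
    survey_believed = np.array([0, 0, 0, 1, 1, 1])
    belief_model = LogisticRegression().fit(survey_ideology, survey_believed)

    # Assumed observational data: ideology scores of users exposed to the
    # story on the platform, ordered by time of first exposure.
    exposed_ideology = np.array([[2.1], [1.5], [0.2], [-0.8], [-1.9]])
    p_believe = belief_model.predict_proba(exposed_ideology)[:, 1]

    # Expected number of users both exposed to AND likely to believe it.
    receptive_exposure = p_believe.sum()
    print(f"raw exposure: {len(exposed_ideology)} users")
    print(f"estimated receptive exposure: {receptive_exposure:.2f} users")

    # Intervention timing, in the same spirit as the paper's simulations:
    # taking the story down after the k-th exposure only avoids whatever
    # receptive exposure would have come afterward.
    for k in (1, 3):
        print(f"takedown after exposure {k}: avoids "
              f"~{p_believe[k:].sum():.2f} receptive exposures")

Because the most receptive users appear early in this exposure sequence, the late takedown avoids far less receptive exposure than the early one, which is the timing effect the abstract reports.
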
Response to NSF’s Request for Information on Research Ethics
Quinn Waeiss, Raio Huang, Betsy Arlene Rajala, Michael S. Bernstein, Margaret Levi, David Magnus, Debra Satz
Nov 22, 2024
Response to Request

Stanford scholars respond to a federal RFI related to research ethics, sharing lessons from their experience operating an ethical reflection process for research grants.

Ethics, Equity, Inclusion
Sciences (Social, Health, Biological, Physical)
Meg Cychosz
Assistant Professor of Linguistics
Person

Ethics, Equity, Inclusion
Communications, Media
Human Reasoning
Machine Learning
Sciences (Social, Health, Biological, Physical)
"Steampunk" Self-Learning Mechanical Circuits That Adapt to Their Environments
Andrew Myers
Nov 24, 2025
News

Researchers at Stanford have invented a new type of self-powered mechanical circuit that learns. It could lead to purely mechanical machines that understand and adapt to the changing world around them.

"Steampunk" Self-Learning Mechanical Circuits That Adapt to Their Environments

Andrew Myers
Nov 24, 2025

Researchers at Stanford have invented a new type of self-powered mechanical circuits that learn. It could lead to new purely mechanical machines that understand and adapt to the changing world around them.

Automation
Industry, Innovation
Sciences (Social, Health, Biological, Physical)
News
Internal Fractures: The Competing Logics of Social Media Platforms
Angèle Christin, Michael S. Bernstein, Jeffrey Hancock, Chenyan Jia, Jeanne Tsai, Chunchen Xu
Aug 21, 2024
Research

Social media platforms are too often understood as monoliths with clear priorities. Instead, we analyze them as complex organizations torn between starkly different justifications of their missions. Focusing on the case of Meta, we inductively analyze the company’s public materials and identify three evaluative logics that shape the platform’s decisions: an engagement logic, a public debate logic, and a wellbeing logic. There are clear trade-offs between these logics, which often result in internal conflicts between teams and departments in charge of these different priorities. We examine recent examples showing how Meta rotates between logics in its decision-making, though the goal of engagement dominates in internal negotiations. We outline how this framework can be applied to other social media platforms such as TikTok, Reddit, and X. We discuss the ramifications of our findings for the study of online harms, exclusion, and extraction.

Sciences (Social, Health, Biological, Physical)
Communications, Media