Michael S. Bernstein | Stanford HAI

Faculty, Senior Fellow

Michael S. Bernstein

Associate Professor of Computer Science | Senior Fellow, HAI | STMicroelectronics Faculty Scholar, Stanford University

Michael Bernstein is an Associate Professor of Computer Science and STMicroelectronics Faculty Scholar at Stanford University, where he is a member of the Human-Computer Interaction Group. His research focuses on the design of social computing systems. This research has won best paper awards at top conferences in human-computer interaction, including CHI, CSCW, ICWSM, and UIST, and has been reported in venues such as The New York Times, New Scientist, Wired, and The Guardian. Michael has been recognized with an Alfred P. Sloan Fellowship, UIST Lasting Impact Award, and the Patrick J. McGovern Tech for Humanity Prize. He holds a bachelor's degree in Symbolic Systems from Stanford University, as well as a master's degree and a Ph.D. in Computer Science from MIT.

Latest Related to Michael S. Bernstein

news

Flash Teams: The Future of Agile Collaboration

Andrew Myers
Workforce, Labor | Industry, Innovation | Oct 01

Stanford professors Melissa Valentine and Michael Bernstein unveil a new model for work organizations, highlighting a dynamic approach to assembling global teams of experts for on-demand projects.

seminar

Melissa Valentine and Michael Bernstein | Flash Teams: Leading the Future of AI-Enhanced, On-Demand Work

Oct 08, 2025 | 3:00 PM - 4:15 PM

In Flash Teams, award-winning management scholar Melissa Valentine and computer scientist Michael Bernstein chart the opportunities of flash teams and navigate the challenges that teams and managers will face.

policy brief

Simulating Human Behavior with AI Agents

Joon Sung Park, Carolyn Q. Zou, Aaron Shaw, Benjamin Mako Hill, Carrie J. Cai, Meredith Ringel Morris, Robb Willer, Percy Liang, Michael S. Bernstein
Generative AI | Quick Read | May 20

This brief introduces a generative AI agent architecture that can simulate the attitudes of more than 1,000 real people in response to major social science survey questions.
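
The brief describes the architecture only at a high level. As a rough sketch of the general idea, and not the authors' implementation, an agent of this kind can be framed as a language model conditioned on material about one real person and then asked to answer survey items in that person's voice. The persona format, prompt wording, and `call_llm` helper below are all hypothetical placeholders.

```python
from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    """Stand-in for any LLM client; returns a canned reply so the
    sketch runs without network access."""
    return "Somewhat agree"

@dataclass
class SimulatedPerson:
    # In the paper's setting this would be grounded in data about a real
    # participant; here it is just a free-text persona description.
    persona: str

    def answer_survey_item(self, question: str, scale: str) -> str:
        # Condition the model on the persona, then elicit a survey response.
        prompt = (
            "Answer as the person described below, staying consistent "
            "with their views.\n\n"
            f"Person: {self.persona}\n\n"
            f"Survey question: {question}\n"
            f"Respond using this scale: {scale}"
        )
        return call_llm(prompt)

agent = SimulatedPerson(persona="62-year-old teacher, politically moderate")
print(agent.answer_survey_item(
    "The government should do more to reduce income inequality.",
    "Strongly disagree / Somewhat disagree / Somewhat agree / Strongly agree",
))
```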

All Related

Response to NSF’s Request for Information on Research Ethics
Quinn Waeiss, Raio Huang, Betsy Arlene Rajala, Michael S. Bernstein, Margaret Levi, David Magnus, Debra Satz
response to request | Ethics, Equity, Inclusion | Sciences (Social, Health, Biological, Physical) | Nov 22, 2024

Stanford scholars respond to a federal RFI related to research ethics, sharing lessons from their experience operating an ethical reflection process for research grants.

Internal Fractures: The Competing Logics of Social Media Platforms
Angèle Christin, Michael S. Bernstein, Jeffrey Hancock, Chenyan Jia, Jeanne Tsai, Chunchen Xu
Research | Sciences (Social, Health, Biological, Physical) | Communications, Media | Aug 21, 2024

Social media platforms are too often understood as monoliths with clear priorities. Instead, we analyze them as complex organizations torn between starkly different justifications of their missions. Focusing on the case of Meta, we inductively analyze the company’s public materials and identify three evaluative logics that shape the platform’s decisions: an engagement logic, a public debate logic, and a wellbeing logic. There are clear trade-offs between these logics, which often result in internal conflicts between teams and departments in charge of these different priorities. We examine recent examples showing how Meta rotates between logics in its decision-making, though the goal of engagement dominates in internal negotiations. We outline how this framework can be applied to other social media platforms such as TikTok, Reddit, and X. We discuss the ramifications of our findings for the study of online harms, exclusion, and extraction.

Embedding Democratic Values into Social Media AIs via Societal Objective Functions
Chenyan Jia, Michelle Lam, Michael S. Bernstein, Minh Chau Mai
Research | Democracy | Apr 26, 2024

Mounting evidence indicates that the artificial intelligence (AI) systems that rank our social media feeds bear nontrivial responsibility for amplifying partisan animosity: negative thoughts, feelings, and behaviors toward political out-groups. Can we design these AIs to consider democratic values such as mitigating partisan animosity as part of their objective functions? We introduce a method for translating established, vetted social scientific constructs into AI objective functions, which we term societal objective functions, and demonstrate the method with application to the political science construct of anti-democratic attitudes. Traditionally, we have lacked observable outcomes to use to train such models; however, the social sciences have developed survey instruments and qualitative codebooks for these constructs, and their precision facilitates translation into detailed prompts for large language models. We apply this method to create a democratic attitude model that estimates the extent to which a social media post promotes anti-democratic attitudes, and test this democratic attitude model across three studies. In Study 1, we first test the attitudinal and behavioral effectiveness of the intervention among US partisans (N=1,380) by manually annotating (alpha=.895) social media posts with anti-democratic attitude scores and testing several feed ranking conditions based on these scores. Removal (d=.20) and downranking feeds (d=.25) reduced participants' partisan animosity without compromising their experience and engagement. In Study 2, we scale up the manual labels by creating the democratic attitude model, finding strong agreement with manual labels (rho=.75). Finally, in Study 3, we replicate Study 1 using the democratic attitude model instead of manual labels to test its attitudinal and behavioral impact (N=558), and again find that the feed downranking using the societal objective function reduced partisan animosity (d=.25). This method presents a novel strategy to draw on social science theory and methods to mitigate societal harms in social media AIs.

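To make the ranking step concrete, here is a minimal illustrative sketch, in the spirit of the abstract above but not the authors' code, of how a societal objective function could be folded into feed ranking: a scoring function estimates each post's anti-democratic attitude score, and the feed objective subtracts a weighted penalty from the engagement signal. The keyword stub, field names, and weight are hypothetical; in the paper the score comes from a prompted large language model.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement_score: float  # the platform's existing ranking signal

def democratic_attitude_score(post: Post) -> float:
    """Estimate how strongly a post promotes anti-democratic attitudes,
    on a 0-1 scale. Keyword stub for illustration; the paper instead
    prompts an LLM with a construct definition distilled from survey
    instruments and qualitative codebooks."""
    markers = ("rigged", "enemies of the people", "by any means")
    hits = sum(m in post.text.lower() for m in markers)
    return min(1.0, hits / len(markers))

def rank_feed(posts: list[Post], weight: float = 1.0) -> list[Post]:
    """Societal objective function: engagement minus a weighted penalty
    for anti-democratic content, so flagged posts are downranked."""
    def objective(p: Post) -> float:
        return p.engagement_score - weight * democratic_attitude_score(p)
    return sorted(posts, key=objective, reverse=True)

feed = [
    Post("Cute dog pics from the park", engagement_score=0.9),
    Post("The election was rigged; fight by any means!", engagement_score=0.95),
]
for p in rank_feed(feed):
    print(round(p.engagement_score, 2), p.text[:40])
```
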
Algorithms and the Perceived Legitimacy of Content Moderation
Christina A. Pan, Sahil Yakhmi, Tara Iyer, Evan Strasnick, Amy X. Zhang, Michael S. Bernstein
policy brief | Quick Read | Privacy, Safety, Security | Dec 15, 2022

This brief explores people’s views of Facebook’s content moderation processes, offering a pathway toward better online speech platforms and improved moderation.
