Policy Brief

Algorithms and the Perceived Legitimacy of Content Moderation

Date: December 15, 2022
Topics: Privacy, Safety, Security

Abstract

This brief explores people’s views of Facebook’s content moderation processes, offering a pathway toward better online speech platforms and improved content moderation.

Key Takeaways

  • Policymakers play an important role in shaping the future of online speech and content moderation, but so does the public. Understanding people’s perceptions of content moderation legitimacy—such as concerns about algorithmic fairness and individual moderators’ political and personal biases—is essential to designing better online platforms and improving online content moderation.

  • We conducted a survey on people’s views of Facebook’s content moderation processes and found that participants perceive expert panels as a more legitimate content moderation process than paid contractors, algorithmic decision-making, or digital juries.

  • Responses from participants also showed a clear distinction between impartiality and perceived legitimacy of moderation processes. Although participants considered algorithms the most impartial process, algorithms had lower perceived legitimacy than expert panels.

Executive Summary

Social media platforms are no strangers to criticism, especially with respect to their content moderation policies. More speech is taking place online, and online and offline speech are becoming increasingly entangled. As these trends continue, social media platforms' content moderation policies become ever more important. How companies design their algorithms and decide what speech should and should not be removed are just some of the choices that affect users, the platform, and how policymakers and the public perceive it.

The public perception of legitimacy is important. Research in other fields underscores that institutions often depend, at least in part, on people accepting their authority. For courts, whether citizens accept a ruling affects how widely people respect the law, adhere to it, and trust the court system to operate effectively. The same idea of legitimacy applies to social media companies. If their content moderation processes—from human review to algorithmic flagging—are not perceived as legitimate, that perception will shape how users and policymakers view and engage with the platform. It can also shape whether users believe they must follow platform rules.

In our paper, "Comparing the Perceived Legitimacy of Content Moderation Processes," we dive into this problem by surveying people's views of Facebook's content moderation processes. We presented U.S. Facebook users with content moderation decisions and randomized whether each decision was described as having been made by paid contractors, algorithms, expert panels, or juries of users. Their responses to these descriptions provide a window into how individuals perceive the legitimacy of moderation decisions.
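
The design is a between-subjects experiment: the moderation decision is held constant while the described decision-maker varies across respondents. Below is a minimal Python sketch of that randomization logic; the names (PROCESSES, build_vignette) and the vignette wording are hypothetical, and the sketch illustrates the setup rather than reproducing the authors' survey instrument.

    import random
    from collections import Counter

    # Hypothetical illustration of the between-subjects design described above:
    # every respondent rates the same moderation decision, but the process said
    # to have made that decision is randomly assigned to one of four conditions.
    PROCESSES = ["paid contractors", "algorithms", "expert panels", "juries of users"]
    rng = random.Random(42)  # fixed seed so the illustration is reproducible

    def build_vignette(decision_text: str) -> tuple[str, str]:
        """Assign a condition and build the description shown to one respondent."""
        process = rng.choice(PROCESSES)
        vignette = (f"Facebook removed the following post: {decision_text!r}. "
                    f"The removal decision was made by {process}.")
        return process, vignette

    # Simulate 1,000 respondents to check that conditions come out roughly balanced.
    counts = Counter(build_vignette("example post")[0] for _ in range(1000))
    print(counts)  # expect roughly 250 respondents per condition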

We also studied whether the decision itself—and respondents’ agreement with it—shaped their answers. The more social media companies’ content moderation policies shape popular discourse, and the more algorithms play a role in that moderation, the more essential it is to understand how to make those content moderation processes as legitimate as possible.

Authors
  • Christina A. Pan
  • Sahil Yakhmi
  • Tara Iyer
  • Evan Strasnick
  • Amy X. Zhang
  • Michael S. Bernstein

Related Publications

Jen King's Testimony Before the U.S. House Committee on Energy and Commerce Oversight and Investigations Subcommittee
Jennifer King
Testimony | Privacy, Safety, Security | Nov 18, 2025

In this testimony presented to the U.S. House Committee on Energy and Commerce's Subcommittee on Oversight and Investigations hearing titled "Innovation with Integrity: Examining the Risks and Benefits of AI Chatbots," Jen King shares insights on data privacy concerns connected with the use of chatbots. She highlights opportunities for congressional action to protect chatbot users from related harms.

Validating Claims About AI: A Policymaker's Guide
Olawale Salaudeen, Anka Reuel, Angelina Wang, Sanmi Koyejo
Policy Brief | Foundation Models; Privacy, Safety, Security | Sep 24, 2025

This brief proposes a practical validation framework to help policymakers separate legitimate claims about AI systems from unsupported claims.

Addressing AI-Generated Child Sexual Abuse Material: Opportunities for Educational Policy
Riana Pfefferkorn
Policy Brief | Privacy, Safety, Security; Education, Skills | Jul 21, 2025

This brief explores student misuse of AI-powered "nudify" apps to create child sexual abuse material and highlights gaps in school response and policy.

Adverse Event Reporting for AI: Developing the Information Infrastructure Government Needs to Learn and Act
Lindsey A. Gailmard, Drew Spence, Daniel E. Ho
Issue Brief | Regulation, Policy, Governance; Privacy, Safety, Security | Jun 30, 2025

This brief assesses the benefits of and provides policy recommendations for adverse event reporting systems for AI that report failures and harms post deployment.