
HAI Policy Briefs

December 2022

Algorithms and the Perceived Legitimacy of Content Moderation

The perceived legitimacy of content moderation processes is an important question for policymakers, and it shapes how policymakers themselves think about social media. If a platform's content moderation processes, from human review to algorithmic flagging, are not perceived as legitimate, users' views of and engagement with the platform will suffer, as will their sense that they are obligated to follow platform rules. In this brief, scholars examine this problem by surveying people's views of Facebook's content moderation processes, offering a pathway toward better online speech platforms and improved content moderation.

Key Takeaways


➜ Policymakers play an important role in shaping the future of online speech and content moderation, but so does the public. Understanding people's perceptions of content moderation legitimacy—such as concerns about algorithmic fairness and individual moderators' political and personal biases—is essential to designing better online platforms and improving online content moderation.

➜ We conducted a survey on people’s views of Facebook’s content moderation processes and found that participants perceive expert panels as a more legitimate content moderation process than paid contractors, algorithmic decision-making, or digital juries.

➜ Participants' responses also revealed a clear distinction between impartiality and perceived legitimacy: although they considered algorithms the most impartial moderation process, algorithms had lower perceived legitimacy than expert panels.


Authors