Algorithms and the Perceived Legitimacy of Content Moderation

This brief examines people’s views of Facebook’s content moderation processes, offering a pathway toward better online speech platforms and improved moderation practices.
Key Takeaways
Policymakers play an important role in shaping the future of online speech and content moderation, but so does the public. Understanding people’s perceptions of content moderation legitimacy—such as concerns about algorithmic fairness and individual moderators’ political and personal biases—is essential to designing better online platforms and improving online content moderation.
We conducted a survey on people’s views of Facebook’s content moderation processes and found that participants perceive expert panels as a more legitimate content moderation process than paid contractors, algorithmic decision-making, or digital juries.
Responses from participants also showed a clear distinction between impartiality and perceived legitimacy of moderation processes. Although participants considered algorithms the most impartial process, algorithms had lower perceived legitimacy than expert panels.
Executive Summary
Social media platforms are no strangers to criticism, especially with respect to their content moderation policies. More speech is taking place online, and online and offline speech are becoming increasingly entangled. As these trends continue, social media platforms' content moderation policies become ever more important. How companies design their algorithms and how they determine what speech should and should not be removed are just some of the decisions that affect users, the platform, and how policymakers and the public perceive the platform.
The public perception of legitimacy matters. Research in other fields underscores that institutions often depend, at least in part, on people accepting their authority. For courts, whether citizens accept a ruling affects whether people respect the law, adhere to it, and trust the court system to operate effectively. The same idea of legitimacy applies to social media companies. If their content moderation processes—from human review to algorithmic flagging—are not perceived as legitimate, it will affect how users and policymakers view and engage with the platform. It can also shape whether users believe they must follow platform rules.
In our paper, "Comparing the Perceived Legitimacy of Content Moderation Processes," we dive into this problem by surveying people's views of Facebook's content moderation processes. We presented U.S. Facebook users with content moderation decisions and randomized the description of whether paid contractors, algorithms, expert panels, or juries of users made those decisions. Their responses, given this information, provide a window into how individuals perceive the legitimacy of moderation decisions.
We also studied whether the decision itself—and respondents’ agreement with it—shaped their answers. The more social media companies’ content moderation policies shape popular discourse, and the more algorithms figure in that moderation, the more essential it is to understand how to make those processes as legitimate as possible.