

Social media platforms are too often understood as monoliths with clear priorities. Instead, we analyze them as complex organizations torn between starkly different justifications of their missions. Focusing on the case of Meta, we inductively analyze the company’s public materials and identify three evaluative logics that shape the platform’s decisions: an engagement logic, a public debate logic, and a wellbeing logic. These logics entail clear trade-offs, which often produce internal conflicts between the teams and departments in charge of the different priorities. We examine recent examples showing how Meta shifts among these logics in its decision-making, though the goal of engagement dominates in internal negotiations. We outline how this framework can be applied to other social media platforms such as TikTok, Reddit, and X. We discuss the ramifications of our findings for the study of online harms, exclusion, and extraction.
Artificial intelligence applications are frequently deployed without any mechanism for external testing or evaluation. At the same time, many AI systems present black-box decision-making challenges: modern machine learning systems are opaque to outside stakeholders, including researchers, who can probe them only by providing inputs and measuring outputs. Researchers, users, and regulators alike are thus forced to grapple with using, being impacted by, or regulating algorithms they cannot fully observe. This brief reviews the history of algorithm auditing, describes its current state, and offers best practices for conducting algorithm audits today.
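The input-output probing described here lends itself to a brief worked illustration. The sketch below shows one common audit design, a paired-query (matched-pair) audit, in which the auditor submits inputs that differ only in a single attribute and compares the outputs. Everything in it is hypothetical: `black_box_score` is a stand-in for whatever opaque API or interface is actually under audit, and the field names and the disparity the stand-in encodes are invented purely so the example runs end to end.

```python
import random
import statistics

def black_box_score(applicant: dict) -> float:
    """Hypothetical stand-in for the opaque system under audit.

    In a real audit this would be an API call or a scripted interaction
    with a user interface; the auditor sees only inputs and outputs.
    The internal rule below (including a hidden penalty for group "B")
    exists only so this sketch is runnable and has something to detect.
    """
    base = 0.5 + 0.05 * (applicant["years_experience"] - 5)
    if applicant["group"] == "B":  # hidden disparity the audit should surface
        base -= 0.08
    noisy = base + random.gauss(0, 0.02)  # measurement noise
    return min(max(noisy, 0.0), 1.0)

def paired_audit(n_pairs: int = 500, seed: int = 0) -> float:
    """Query the black box with matched input pairs that differ only in
    the audited attribute, and return the mean output gap."""
    random.seed(seed)
    gaps = []
    for _ in range(n_pairs):
        # Shared profile: every field except the audited attribute is fixed.
        profile = {"years_experience": random.randint(0, 15)}
        score_a = black_box_score({**profile, "group": "A"})
        score_b = black_box_score({**profile, "group": "B"})
        gaps.append(score_a - score_b)
    return statistics.mean(gaps)

if __name__ == "__main__":
    gap = paired_audit()
    print(f"Mean score gap (group A - group B) over matched pairs: {gap:.3f}")
```

Holding every other field fixed within a pair means any systematic output gap can be attributed to the audited attribute, which is the core inferential move of input-output audits when a model's internals cannot be observed.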