Generative AI is reshaping communications and media and challenging public trust.
Measuring the impact of online misinformation is challenging. Traditional measures, such as user views or shares on social media, are incomplete because not everyone who is exposed to misinformation is equally likely to believe it. To address this issue, we developed a method that combines survey data with observational Twitter data to probabilistically estimate the number of users both exposed to and likely to believe a specific news story. As a proof of concept, we applied this method to 139 viral news articles and find that although false news reaches an audience with diverse political views, users who are both exposed and receptive to believing false news tend to have more extreme ideologies. These receptive users are also more likely to encounter misinformation earlier than those who are unlikely to believe it. This mismatch between overall user exposure and receptive user exposure underscores the limitation of relying solely on exposure or interaction data to measure the impact of misinformation, as well as the challenge of implementing effective interventions. To demonstrate how our approach can address this challenge, we then conducted data-driven simulations of common interventions used by social media platforms. We find that these interventions are only modestly effective at reducing exposure among users likely to believe misinformation, and their effectiveness quickly diminishes unless implemented soon after misinformation’s initial spread. Our paper provides a more precise estimate of misinformation’s impact by focusing on the exposure of users likely to believe it, offering insights for effective mitigation strategies on social media.
Popular culture has envisioned societies of intelligent machines for generations, with Alan Turing notably foreseeing the need for a test to distinguish machines from humans in 1950. Now, advances in artificial intelligence promise to make creating convincing fake multimedia content, such as video, images, or audio, relatively easy for many. Unfortunately, this will include sophisticated bots with supercharged self-improvement abilities that are capable of generating more dynamic fakes than anything seen before.
A new study by researchers from Stanford, Carnegie Mellon, and Meta finds that the presence of ads on Facebook doesn’t significantly affect how users value the platform.
Social media platforms are too often understood as monoliths with clear priorities. Instead, we analyze them as complex organizations torn between starkly different justifications of their missions. Focusing on the case of Meta, we inductively analyze the company’s public materials and identify three evaluative logics that shape the platform’s decisions: an engagement logic, a public debate logic, and a wellbeing logic. There are clear trade-offs between these logics, which often result in internal conflicts between teams and departments in charge of these different priorities. We examine recent examples showing how Meta rotates between logics in its decision-making, though the goal of engagement dominates in internal negotiations. We outline how this framework can be applied to other social media platforms such as TikTok, Reddit, and X. We discuss the ramifications of our findings for the study of online harms, exclusion, and extraction.
Social media platforms break traditional barriers of distance and time between people and present unique challenges in calculating the precise value of the transactions and interactions they enable.
A Stanford research team shows that building democratic values into a feed-ranking algorithm reduces partisan animosity.
The values built into social media algorithms are highly individualized. Could we reshape our feeds to benefit society?
Stanford researchers discover a rich new data source in the anonymized pathology images and online comments of thousands of pathologists.
New research shows we can only accurately identify AI writers about 50% of the time. Scholars explain why (and suggest solutions).
An analysis of all Facebook posts from U.S. law enforcement agencies revealed widespread overreporting of Black suspects.
The image-generating model has some impressive capabilities that parallel the brain, but is it really creative?