The Data Behind Your Doom Scroll: How Negative News Takes Over Your Feed
Mired in a pandemic lockdown and left doom-scrolling through social media, one neuroscientist at Stanford University looked at his Twitter feed and wondered what makes a story go viral. Intrigued, he did what scientists do: he got together with his colleagues and designed a study. Gathering nearly 30 million posts shared by more than 180 news organizations across the political spectrum on X (formerly Twitter) between 2011 and 2020, the team coded the messages with a computational technique called “sentiment analysis.”
“Sentiment analysis uses simple algorithms and coded dictionaries to evaluate the emotional tone of a post. Is the message positive in nature or negative? Is it dispassionate or intended to stoke emotions like anger, fear, anxiety?” said lead author Brian Knutson, a professor of psychology and neuroscience in Stanford’s School of Humanities and Sciences and an expert in the psychology of decision-making. “We then mapped sentiment against virality to learn what was driving America’s social media tendencies.”
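The article itself contains no code, but the dictionary-based approach Knutson describes is easy to sketch. Below is a minimal, hypothetical illustration in Python; the word lists and the score_post helper are invented for this example, and real analyses rely on validated lexicons such as VADER or LIWC rather than toy dictionaries.

```python
# Minimal sketch of dictionary-based sentiment analysis.
# The tiny word lists below are illustrative stand-ins for
# validated lexicons such as VADER or LIWC.

POSITIVE = {"win", "hope", "breakthrough", "recovery", "progress"}
NEGATIVE = {"crisis", "outrage", "fear", "collapse", "scandal"}
HIGH_AROUSAL = {"outrage", "fear", "fury", "panic", "shocking"}

def score_post(text: str) -> dict:
    """Return crude valence and arousal scores for one post."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    arousal = sum(w in HIGH_AROUSAL for w in words)
    total = max(len(words), 1)
    return {
        "valence": (pos - neg) / total,  # > 0 positive, < 0 negative
        "arousal": arousal / total,      # share of high-arousal words
    }

print(score_post("Outrage and fear as crisis deepens"))
# -> valence -0.5, arousal ≈ 0.33
```

Mapping scores like these against each post’s share counts is, in spirit, how sentiment can be compared with virality across millions of messages.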
The study, published in the journal PLoS ONE and supported by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), offers a stark picture of which posts spread most widely and rapidly, and why. “Typically, it’s the negative, highly arousing stories that get the most traffic, and these stories tend to come from the most biased sources, on both the left and the right,” Knutson said.
Sentiment Matters
The analysis revealed that news sources posted almost twice as much negative content as positive content overall, a pattern that contrasts with individual users, who tend to post more positive than negative content. The team also found that the most biased news sources, left or right, produced roughly 12% more high-arousal negative content than balanced news sources, and that these highly arousing negative posts were the most likely to go viral. Most troubling of all, the share of high-arousal negative content also grew among the balanced news sources over time, a trend Knutson attributes to their following biased sources in the chase for engagement metrics.
Read the full study: “News Source Bias and Sentiment on Social Media”
While the study establishes a valuable baseline, Knutson’s eventual goal is to explore ways to reverse the trend or lessen the damaging effects of negative content and misinformation in modern American life. Future studies will examine tools and techniques, both computational and policy-based, that could change the dynamic.
The stakes are high. Over half of all U.S. adults consume news online, the study points out, and most of it is shared via social media platforms like X, TikTok, and Facebook. Social media also provide near-instantaneous access and dissemination, with few if any checks on false or misleading information.
Complicating matters, in a world where news is paid for by advertising and advertising is driven by engagement metrics (hits, likes, time on page, and reposts), even balanced news sources may be incentivized to amplify negative emotional content to chase eyeballs, Knutson said.
“All news sources want their content to go viral, but biased news sources seem more willing to engage users with emotionally charged content, especially as political polarization increases,” he said.
This dynamic can impair users’ ability to make well-informed decisions, and it may also erode their well-being while deepening political division. “It’s a sort of ‘affective pollution’ that exacerbates social strife,” Knutson added.
Possible Interventions
The research highlights the need for interventions to limit the spread of harmful emotional content and suggests that social media algorithms could be redesigned to reduce the amplification of emotionally charged and potentially biased news.
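The study stops at the suggestion, but a redesign of this sort can be pictured concretely. Here is a minimal, hypothetical sketch that reuses the score_post helper from the earlier example; the penalty factor is invented for illustration, not anything proposed in the paper.

```python
# Hypothetical re-ranking sketch: instead of ranking purely by predicted
# engagement, damp the amplification of negative high-arousal posts.
# Reuses score_post from the earlier sketch; the penalty is an
# invented parameter, not something proposed in the study.

def rank_feed(posts_with_engagement, penalty=0.5):
    """Sort posts by engagement, penalizing negative high-arousal content."""
    def adjusted(item):
        text, engagement = item
        s = score_post(text)
        if s["valence"] < 0 and s["arousal"] > 0:
            return engagement * penalty  # down-weight rather than remove
        return engagement
    return sorted(posts_with_engagement, key=adjusted, reverse=True)

feed = [
    ("Outrage and fear as crisis deepens", 1000),
    ("Breakthrough brings hope for recovery", 700),
]
print(rank_feed(feed))  # the hopeful post now ranks first
```

Down-weighting rather than deleting keeps the content available while blunting the algorithmic boost that the study found such posts enjoy.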
Knutson said users could apply the same sort of sentiment analysis tools to filter strongly negative and misleading content out of their own feeds, but that approach puts the onus on the user and cannot prevent the willful consumption of bad information. Policy and regulatory approaches are also options, but many platforms seem unwilling or unable to moderate their content.
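What such user-side filtering might look like, as a rough sketch building on the same hypothetical score_post helper; the thresholds are arbitrary and would need tuning in any real tool:

```python
# Hypothetical user-side filter built on the score_post sketch above.
# Thresholds are arbitrary; a real tool would tune them per user.

def filter_feed(posts, valence_floor=-0.2, arousal_ceiling=0.25):
    """Drop posts that are both strongly negative and highly arousing."""
    kept = []
    for post in posts:
        s = score_post(post)
        if s["valence"] < valence_floor and s["arousal"] > arousal_ceiling:
            continue  # suppress high-arousal negative content
        kept.append(post)
    return kept

feed = [
    "Breakthrough brings hope for recovery",
    "Outrage and fear as crisis deepens",
]
print(filter_feed(feed))  # keeps only the hopeful post
```

As the article notes, a filter like this only helps users who choose to run it; it cannot stop the willful consumption of bad information.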
“Our study shows that this negative sentiment dynamic is real and can be harmful, but exactly what to do about it remains an open challenge,” Knutson concluded. “Perhaps by filtering content sentiment, along with the semantics and source, we can provide users with a new and useful set of tools.”