The Data Behind Your Doom Scroll: How Negative News Takes Over Your Feed

Date: October 25, 2024
Topics: Natural Language Processing, Machine Learning

Analyzing nearly 30 million posts, Stanford scholars reveal how emotional, negative content fuels the viral spread of news on social media. Now, what to do about it?

Mired in a pandemic lockdown and left doom-scrolling through social media, one neuroscientist at Stanford University looked at his Twitter feed and wondered what makes a story go viral. Intrigued, he did what scientists do: he got together with his colleagues and designed a study. Gathering nearly 30 million posts tweeted by more than 180 news organizations across the political spectrum on X (formerly Twitter) between 2011 and 2020, the team coded the messages with a computational tool called “sentiment analysis.”

“Sentiment analysis uses simple algorithms and coded dictionaries to evaluate the emotional tone of a post. Is the message positive in nature or negative? Is it dispassionate or intended to stoke emotions like anger, fear, anxiety?” said lead author Brian Knutson, a professor of psychology and neuroscience in Stanford’s School of Humanities and Sciences and an expert in the psychology of decision-making. “We then mapped sentiment against virality to learn what was driving America’s social media tendencies.”
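To make the approach concrete, here is a minimal sketch of dictionary-based sentiment scoring in Python. The lexicon and weights below are illustrative placeholders, not the dictionaries the study used; real analyses rely on validated valence and arousal norms and far more careful text processing.

```python
# Minimal sketch of lexicon-based sentiment analysis.
# The lexicon below is a toy placeholder, NOT the dictionary used in the study;
# published work typically uses validated norms for valence and arousal.

import re

# Each word maps to (valence, arousal): valence in [-1, 1], arousal in [0, 1].
LEXICON = {
    "great":    ( 0.8, 0.4),
    "win":      ( 0.6, 0.5),
    "calm":     ( 0.4, 0.1),
    "crisis":   (-0.7, 0.8),
    "outrage":  (-0.8, 0.9),
    "fear":     (-0.6, 0.8),
    "disaster": (-0.9, 0.9),
}

def score_post(text: str) -> dict:
    """Average the valence and arousal of lexicon words found in the post."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = [LEXICON[w] for w in words if w in LEXICON]
    if not hits:
        return {"valence": 0.0, "arousal": 0.0, "n_hits": 0}
    valence = sum(v for v, _ in hits) / len(hits)
    arousal = sum(a for _, a in hits) / len(hits)
    return {"valence": valence, "arousal": arousal, "n_hits": len(hits)}

if __name__ == "__main__":
    post = "Outrage grows as the crisis deepens into disaster"
    print(score_post(post))
    # -> strongly negative valence, high arousal: the profile the study
    #    links to virality
```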

The study, published in the journal PLoS ONE and supported by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), provides a stark picture of which posts spread most widely and rapidly, and why. “Typically, it’s the negative, highly arousing stories that get the most traffic, and these stories tend to come from the most biased sources, on both the left and the right,” Knutson said.

Sentiment Matters

His analysis revealed that news sources posted almost twice as much negative content as positive content overall. This pattern contrasts with individual users, who tend to post more positive than negative content. The team also found that the most biased news sources, left or right, posted roughly 12% more high-arousal negative content than balanced news sources, and that these highly arousing negative posts were the most likely to go viral. Most troubling of all, the spread of high-arousal negative content also grew among balanced news sources over time, a trend Knutson attributes to balanced outlets following biased ones in the chase for engagement metrics.

Read the full study: “News Source Bias and Sentiment on Social Media”

While the study establishes a valuable baseline understanding, Knutson’s eventual goal is to explore ways to reverse the trend or lessen the damaging effects of negative content and misinformation in modern American life. Future studies will examine tools and techniques, both computational and on the policy front, that could change the dynamic.

The stakes are high. Over half of all U.S. adults consume news online, the study points out, and most of it is shared via social media platforms like X, TikTok, and Facebook. Social media also provides near-instantaneous access and dissemination, with few to no checks on false or misleading information.

Complicating matters is the fact that in a world where news is paid for by advertising and advertising is driven by engagement metrics (like hits, likes, time on page, and reposts), even balanced news sources might be incentivized to amplify negative emotional content in order to chase eyeballs, Knutson said. 

“All news sources want their content to go viral, but biased news sources seem more willing to engage users with emotionally charged content, especially as political polarization increases,” he said.

This can harm users’ ability to make well-informed decisions, and it may also decrease their well-being while deepening political division. “It’s a sort of ‘affective pollution’ that exacerbates social strife,” Knutson added.

Possible Interventions

The research highlights the need for interventions to limit the spread of harmful emotional content and suggests that social media algorithms could be redesigned to reduce the amplification of emotionally charged and potentially biased news. 

Knutson said the same sort of sentiment analysis tools could be implemented by users to filter out strongly negative and misleading content from their own feeds, but that puts the onus on the user and cannot prevent willful consumption of bad information. Policy approaches and regulation are also possible options, but many platforms seem unwilling or unable to moderate their content.
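As a rough sketch of what such a user-side filter might look like, the snippet below builds on the toy scorer above; the thresholds are arbitrary assumptions, not values from the study.

```python
# Sketch of a user-side feed filter built on sentiment scores.
# Assumes a scorer like score_post from the earlier sketch is in scope;
# thresholds are illustrative assumptions, not values from the study.

from typing import Callable, Iterable

def filter_feed(
    posts: Iterable[str],
    score: Callable[[str], dict],
    min_valence: float = -0.5,   # drop below this valence...
    max_arousal: float = 0.7,    # ...only when arousal also exceeds this
) -> list[str]:
    """Keep a post unless it is both strongly negative and highly arousing."""
    return [
        p for p in posts
        if not (score(p)["valence"] < min_valence
                and score(p)["arousal"] > max_arousal)
    ]

# Example (with score_post from the sketch above):
#   feed = ["Outrage grows as the crisis deepens into disaster",
#           "A calm, great win for the local team"]
#   filter_feed(feed, score_post)  # keeps only the second post
```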

“Our study shows that this negative sentiment dynamic is real and can be harmful, but exactly what to do about it remains an open challenge,” Knutson concluded. “Perhaps by filtering content sentiment, along with the semantics and source, we can provide users with a new and useful set of tools.”

Contributor: Andrew Myers

