
Building a Social Media Algorithm That Actually Promotes Societal Values

Date: April 08, 2024
Topics: Machine Learning; Communications, Media

A Stanford research team shows that building democratic values into a feed-ranking algorithm reduces partisan animosity.

For all their efforts to moderate content and reduce online toxicity, social media companies still fundamentally care about one thing: retaining users in the long run, a goal they’ve perceived as best achieved by keeping them engaged with content as long as possible. But the goal of keeping individuals engaged doesn’t necessarily serve society at large and can even be harmful to values we hold dear, such as living in a healthy democracy.

To address that problem, a team of Stanford researchers advised by Michael Bernstein, associate professor of computer science in the School of Engineering, and Jeffrey Hancock, professor of communication in the School of Humanities and Sciences, wondered if designers of social media platforms might, in a more principled way, build societal values into their feed-ranking algorithms. Could these algorithms, for example, promote social values such as political participation, mental health, or social connection? The team tested the idea empirically in a new paper that will be published in Proceedings of the ACM on Human-Computer Interaction in April 2024. Bernstein, Hancock, and a group of Stanford HAI faculty also explored that idea in a recent think piece.

For their experiment, the researchers aimed to decrease partisan animosity by building democratic values into a feed-ranking algorithm. “If we can make a dent in this very important value, maybe we can learn how to use social media rankings to affect other values we care about,” says Michelle Lam, a fourth-year graduate student in computer science at Stanford University and co-lead author of the study.

The project, supported by a Stanford HAI Hoffman-Yee Grant, required translating social science concepts about democratic values into algorithmic objectives; creating a feed that implemented the democratic values model; and testing its impact on people’s partisan animosity. The result: The team found lower partisan animosity among people shown a feed that downranked (or removed and replaced) posts expressing highly anti-democratic attitudes.

Read the study: "Embedding Democratic Values into Social Media AIs via Societal Objective Functions."

Moreover, users were just as engaged with the feed optimized for democratic values as they were with an engagement-based feed. “That should be exciting news for industry because it suggests that a feed-ranking algorithm that’s based on societal values won’t compromise users’ engagement,” says Chenyan Jia, assistant professor at Northeastern University, former postdoctoral scholar at Stanford, and co-lead author on the study.

Creating a Societal Objective Function

Many AI systems are trained to optimize for a specific goal known as the objective function. In the case of social media algorithms, the objective function typically optimizes for engagement. But for this project, the research team proposed creating a societal objective function, which required translating democratic values into a model that a computer could optimize. 
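To make the idea concrete, here is a minimal sketch in Python contrasting an engagement-based objective with a societal objective for feed ranking. The Post fields, scoring functions, and the linear penalty weight are illustrative assumptions for this example, not the paper's actual model.

```python
# A minimal sketch contrasting an engagement-based objective with a
# societal objective for feed ranking. The Post fields, the scoring
# functions, and the penalty weight are illustrative assumptions,
# not the paper's actual model.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    text: str
    predicted_engagement: float   # e.g., estimated click or dwell probability
    antidemocratic_score: float   # the 1-3 rating described below

def engagement_objective(post: Post) -> float:
    # The standard industry objective: rank purely by predicted engagement.
    return post.predicted_engagement

def societal_objective(post: Post, penalty: float = 0.5) -> float:
    # Engagement minus a penalty that grows with the post's
    # anti-democratic rating, so harmful posts sink in the feed.
    return post.predicted_engagement - penalty * (post.antidemocratic_score - 1.0)

def rank_feed(posts: list[Post], objective: Callable[[Post], float]) -> list[Post]:
    return sorted(posts, key=objective, reverse=True)
```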

Though that task might sound abstract or subjective, Lam says, the team was able to build on social science work that clearly defines anti-democratic values found to be persistent in surveys and content analysis. Specifically, the researchers used established definitions of eight such values: partisan animosity, support for undemocratic practices, support for partisan violence, support for undemocratic candidates, opposition to bipartisanship, social distrust, social distance, and biased evaluation of politicized facts.

For each anti-democratic value, the team developed three criteria for determining if the value was present in a social media post. Each post was assigned a rating from 1 to 3 according to how many criteria were met. For example, posts with lower numerical ratings might merely express a partisan viewpoint, while posts with higher ratings were often actively antagonistic toward the other party or amplified negative emotions.
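Read as pseudocode, the rubric amounts to counting how many of a value's three criteria a post meets. In the sketch below, the criterion checks are hypothetical placeholders, and clamping the rating to a minimum of 1 is an assumption of this example.

```python
# Illustrative sketch of the 1-3 rubric: each anti-democratic value has
# three criteria, and a post's rating reflects how many it meets. The
# criterion functions are hypothetical placeholders, and the floor of 1
# is an assumption of this example.
from typing import Callable

Criterion = Callable[[str], bool]

def rate_post(text: str, criteria: list[Criterion]) -> int:
    assert len(criteria) == 3, "the rubric defines three criteria per value"
    met = sum(1 for criterion in criteria if criterion(text))
    return max(1, met)  # 1 = mild (e.g., merely partisan), 3 = strongly anti-democratic

# Hypothetical criteria for partisan animosity, for illustration only;
# each is a crude textual proxy, not the study's actual coding scheme.
partisan_animosity = [
    lambda t: "other party" in t.lower(),                          # targets the out-party
    lambda t: any(w in t.lower() for w in ("hate", "despise")),    # hostile language
    lambda t: t.count("!") >= 2,                                   # amplifies negative emotion
]
```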

The team then created a 60-post social media feed called PolitiFeed with seven different conditions, including an engagement-based feed; a feed with content warnings; a feed with highly anti-democratic posts downranked; and a feed with anti-democratic posts removed and replaced.
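Reusing the illustrative Post type from the earlier sketch, the downranking and remove-and-replace conditions can be pictured as simple feed transformations. Treating a rating of 3 as "highly anti-democratic" and drawing replacements from a benign pool are assumptions of this example, not the study's exact parameters.

```python
# Sketches of two of the seven conditions, reusing the Post type from the
# earlier sketch. The threshold of 3 and the replacement pool are
# assumptions of this example.

def downrank(feed: list[Post], threshold: float = 3) -> list[Post]:
    # Keep every post, but move highly anti-democratic posts to the bottom.
    benign = [p for p in feed if p.antidemocratic_score < threshold]
    flagged = [p for p in feed if p.antidemocratic_score >= threshold]
    return benign + flagged

def remove_and_replace(feed: list[Post], pool: list[Post], threshold: float = 3) -> list[Post]:
    # Drop highly anti-democratic posts and backfill from a pool of benign
    # posts so the feed stays at its original length (60 in the study).
    kept = [p for p in feed if p.antidemocratic_score < threshold]
    backfill = [p for p in pool if p.antidemocratic_score < threshold]
    return kept + backfill[: len(feed) - len(kept)]
```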

Using a crowdsourcing platform, they tested the impact of these feeds on 1,380 study participants. The result: lower partisan animosity among both Democrats and Republicans who read the downranking feed or the remove-and-replace feed compared with those who read the engagement-based feed.

To scale up their effort, the team next turned to a large language model, GPT-4, to see if it could rate social media posts as effectively as the team had done manually. They took a "zero-shot" approach, meaning that rather than training the AI system with examples, they gave it plain-language instructions describing how to rate the eight measures of anti-democratic values on the 3-point scale. The result: GPT-4's ratings were highly correlated with the manual ratings and, perhaps more important, implementing them in the social media feed still reduced partisan animosity.

The experiment yielded other findings as well. Users found the various feeds equally engaging, suggesting that users will keep clicking even if companies implement societal objective functions. Content warnings, by contrast, backfired, raising free speech concerns among conservatives. And the downranking and remove-and-replace feeds were more effective at reducing animosity among weakly partisan participants than among strongly partisan ones.
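In spirit, the zero-shot rating setup described above reads like the sketch below: the rubric is stated in plain language inside a prompt, and the model replies with a rating, with no training examples. The prompt wording and the generic `complete` callable (a stand-in for whatever chat-completion API is used) are assumptions of this example, not the paper's exact prompt.

```python
# Illustrative zero-shot rating with an LLM: the rubric is described in
# plain language and the model returns a 1-3 rating without any training
# examples. The prompt text and the `complete` callable (a stand-in for a
# chat-completion API such as GPT-4's) are assumptions of this example.
from typing import Callable

RUBRIC = """Rate the social media post below for partisan animosity on a
scale of 1 to 3:
1 = expresses a partisan viewpoint without hostility
2 = criticizes the other party in a hostile or demeaning tone
3 = is actively antagonistic toward the other party or amplifies
    negative emotions about it
Reply with the number only."""

def rate_with_llm(post_text: str, complete: Callable[[str], str]) -> int:
    reply = complete(f"{RUBRIC}\n\nPost: {post_text}")
    return int(reply.strip())
```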

Future Directions

The team is currently working on a longitudinal and large-scale experiment in a more natural setting – implementing the democratic values model in people’s social media feeds in real time to see if it will have any impact.

“Today’s social media already embeds values, but they’re often defined implicitly,” Lam says. Going forward, the team wants to pursue further empirical work that explicitly implements societal objective functions in the social media context and measures their impact. “We should experiment with different values, such as mental well-being or environmental sustainability – as well as how they trade off against each other,” Lam says. “That’s especially important as we move into different communities that may have different norms and needs.”

Stanford HAI’s mission is to advance AI research, education, policy and practice to improve the human condition. Learn more. 

Contributor(s): Katharine Miller

Related News

Stanford AI Scholars Find Support for Innovation in a Time of Uncertainty
Nikki Goth Itoi | Jul 01, 2025
Stanford HAI offers critical resources for faculty and students to continue groundbreaking research across the vast AI landscape.

Digital Twins Offer Insights into Brains Struggling with Math — and Hope for Students
Andrew Myers | Jun 06, 2025
Researchers used artificial intelligence to analyze the brain scans of students solving math problems, offering the first-ever peek into the neuroscience of math disabilities.

Better Benchmarks for Safety-Critical AI Applications
Nikki Goth Itoi | May 27, 2025
Stanford researchers investigate why models often fail in edge-case scenarios.