
Building a Social Media Algorithm That Actually Promotes Societal Values

A Stanford research team shows that building democratic values into a feed-ranking algorithm reduces partisan animosity.


For all their efforts to moderate content and reduce online toxicity, social media companies still fundamentally care about one thing: retaining users over the long run, a goal they have treated as best achieved by keeping people engaged with content for as long as possible. But keeping individuals engaged doesn’t necessarily serve society at large and can even harm values we hold dear, such as living in a healthy democracy.

To address that problem, a team of Stanford researchers advised by Michael Bernstein, associate professor of computer science in the School of Engineering, and Jeffrey Hancock, professor of communication in the School of Humanities and Sciences, wondered if designers of social media platforms might, in a more principled way, build societal values into their feed-ranking algorithms. Could these algorithms, for example, promote social values such as political participation, mental health, or social connection? The team tested the idea empirically in a new paper that will be published in Proceedings of the ACM on Human-Computer Interaction in April 2024. Bernstein, Hancock, and a group of Stanford HAI faculty also explored that idea in a recent think piece.

For their experiment, the researchers aimed to decrease partisan animosity by building democratic values into a feed-ranking algorithm. “If we can make a dent in this very important value, maybe we can learn how to use social media rankings to affect other values we care about,” says Michelle Lam, a fourth-year graduate student in computer science at Stanford University and co-lead author of the study.

The project, supported by a Stanford HAI Hoffman-Yee Grant, required translating social science concepts about democratic values into algorithmic objectives; creating a feed that implemented the democratic values model; and testing its impact on people’s partisan animosity. The result: The team found lower partisan animosity among people shown a feed that downranked (or removed and replaced) posts expressing highly anti-democratic attitudes.

Read the study: “Embedding Democratic Values into Social Media AIs via Societal Objective Functions”


Moreover, users were just as engaged with the feed optimized for democratic values as they were with an engagement-based feed. “That should be exciting news for industry because it suggests that a feed-ranking algorithm that’s based on societal values won’t compromise users’ engagement,” says Chenyan Jia, assistant professor at Northeastern University, former postdoctoral scholar at Stanford, and co-lead author on the study.

Creating a Societal Objective Function

Many AI systems are trained to optimize for a specific goal known as the objective function. In the case of social media algorithms, the objective function typically optimizes for engagement. But for this project, the research team proposed creating a societal objective function, which required translating democratic values into a model that a computer could optimize. 
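
As a rough illustration of the concept – not the paper’s actual implementation – a societal objective function might blend a post’s predicted engagement with a penalty for anti-democratic content. In the minimal Python sketch below, the linear form, the default weight, and the field names are all assumptions for illustration:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    text: str
    engagement_score: float       # predicted engagement, e.g., from a click/dwell model
    anti_democratic_score: float  # societal-value signal, normalized to [0, 1]

def societal_objective(post: Post, weight: float = 0.5) -> float:
    """Reward predicted engagement, penalize anti-democratic content.

    The linear combination and the default weight are illustrative assumptions,
    not the study's formula.
    """
    return post.engagement_score - weight * post.anti_democratic_score

def rank_feed(posts: List[Post], weight: float = 0.5) -> List[Post]:
    """Order the feed by the combined objective, highest score first."""
    return sorted(posts, key=lambda p: societal_objective(p, weight), reverse=True)
```

Setting the weight to zero recovers a purely engagement-based ranking – the baseline condition in the study.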

Though that task might sound abstract or subjective, Lam says, the team was able to build on social science work that clearly defines anti-democratic values that have proved persistent in surveys and content analyses. Specifically, the researchers used established definitions of eight such values: partisan animosity, support for undemocratic practices, support for partisan violence, support for undemocratic candidates, opposition to bipartisanship, social distrust, social distance, and biased evaluation of politicized facts.

For each anti-democratic value, the team developed three criteria for determining if the value was present in a social media post. Each post was assigned a rating from 1 to 3 according to how many criteria were met. For example, posts with lower numerical ratings might merely express a partisan viewpoint, while posts with higher ratings were often actively antagonistic toward the other party or amplified negative emotions.
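
As a toy illustration of that rubric structure, each value can be encoded as a checklist of criteria, with a post’s rating equal to the number of criteria it satisfies. The keyword checks below are hypothetical stand-ins – the study relied on human judgment (and later GPT-4), not string matching:

```python
from typing import Callable, List

# Hypothetical checklist for one value (partisan animosity). These keyword
# predicates are illustrative stand-ins for the study's human-judged criteria.
PARTISAN_ANIMOSITY_CRITERIA: List[Callable[[str], bool]] = [
    lambda text: "democrats" in text.lower() or "republicans" in text.lower(),  # targets a party
    lambda text: any(w in text.lower() for w in ("corrupt", "evil", "hate")),   # hostile framing
    lambda text: text.isupper() or text.count("!") >= 2,                        # amplified emotion
]

def rubric_rating(text: str, criteria: List[Callable[[str], bool]]) -> int:
    """Count the criteria a post meets: 1-3 on the study's scale (0 = value absent)."""
    return sum(1 for criterion in criteria if criterion(text))
```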

The team then created a 60-post social media feed called PolitiFeed with seven conditions, including an engagement-based feed; a feed with content warnings; a feed with highly anti-democratic posts downranked; and a feed with anti-democratic posts removed and replaced.
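
In ranking terms, the two strongest of those interventions are straightforward to sketch. A minimal version, assuming each post carries a rubric rating like the one above (the cutoff value is an assumption, not the paper’s):

```python
from typing import List, NamedTuple

class RatedPost(NamedTuple):
    text: str
    rating: int  # anti-democratic rubric rating (0 = value absent, 3 = all criteria met)

HIGHLY_ANTI_DEMOCRATIC = 3  # illustrative cutoff for "highly anti-democratic"

def downrank(feed: List[RatedPost]) -> List[RatedPost]:
    """Keep every post but push highly anti-democratic ones to the bottom."""
    kept = [p for p in feed if p.rating < HIGHLY_ANTI_DEMOCRATIC]
    demoted = [p for p in feed if p.rating >= HIGHLY_ANTI_DEMOCRATIC]
    return kept + demoted

def remove_and_replace(feed: List[RatedPost], pool: List[RatedPost]) -> List[RatedPost]:
    """Drop highly anti-democratic posts and backfill from a pool of benign ones."""
    kept = [p for p in feed if p.rating < HIGHLY_ANTI_DEMOCRATIC]
    return kept + pool[: len(feed) - len(kept)]
```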

Using a crowdsourcing platform, they tested the impact of these feeds on 1,380 study participants. The result: lower partisan animosity among both Democrats and Republicans who read the downranking feed or the remove-and-replace feed compared with those who read the engagement-based feed.

To scale up their effort, the team next turned to a large language model, GPT-4, to see if it could rate social media posts as effectively as the team had done manually. They took a “zero-shot” approach, meaning that rather than train the AI system with examples, they gave it plain-language instructions describing how to rate the eight measures of anti-democratic values on the 3-point scale. The result: GPT-4’s ratings were highly correlated with the manual ratings and, perhaps more important, implementing them in the social media feed still reduced partisan animosity.

The experiment surfaced other findings as well. Users found the various feeds equally engaging – suggesting that users will keep clicking even if companies implement societal objective functions. Content warnings, by contrast, backfired: they raised free speech concerns among conservatives. And the downranking and remove-and-replace feeds were more effective at reducing animosity among weakly partisan participants than among strongly partisan ones.
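
A zero-shot rating call like the one described above might look roughly like the sketch below. It assumes the OpenAI Python client, and the prompt wording paraphrases this article’s description – it is not the study’s actual instruction set, which covered all eight anti-democratic values:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompt paraphrasing the zero-shot setup described above.
PROMPT = (
    "Rate the following social media post for partisan animosity on a 1-3 scale: "
    "1 = expresses a partisan viewpoint, 2 = is antagonistic toward the other party, "
    "3 = is antagonistic and amplifies negative emotion. Reply with a single digit.\n\n"
    "Post: {post}"
)

def rate_post(post_text: str) -> int:
    """Ask GPT-4 for a zero-shot rubric rating of a single post."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT.format(post=post_text)}],
        temperature=0,  # keep ratings as repeatable as possible
    )
    return int(response.choices[0].message.content.strip())
```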

Future Directions

The team is currently working on a longitudinal and large-scale experiment in a more natural setting – implementing the democratic values model in people’s social media feeds in real time to see if it will have any impact.

“Today’s social media already embeds values, but they’re often defined implicitly,” Lam says. Going forward, the team wants to pursue further empirical work that explicitly implements societal objective functions in the social media context and measures their impact. “We should experiment with different values, such as mental well-being or environmental sustainability – as well as how they trade off against each other,” Lam says. “That’s especially important as we move into different communities that may have different norms and needs.”

