How to Get More Truth from Social Media

Date: February 04, 2021
Topics: Design, Human-Computer Interaction; Machine Learning; Communications, Media
Image: iStock/VectorFun

Sociologist and former journalist Mutale Nkonde warns that the AI behind much of today’s social media is inherently biased — but it’s not too late to do something about it.

The old maxim holds that a lie spreads much faster than the truth, but it has taken the global reach and lightning speed of social media to lay it bare before the world.

One problem of the age of misinformation, says sociologist and former journalist Mutale Nkonde, a fellow at the Stanford Center on Philanthropy and Civil Society (PACS), is that the artificial intelligence algorithms used to profile users and disseminate information to them, whether truthful or not, are inherently biased against minority groups, because those groups are underrepresented in the historical data on which the algorithms are trained.

Now, Nkonde and others like her are holding social media companies' feet to the fire, so to speak, to get them to root out bias from their algorithms. One approach she promotes is the Algorithmic Accountability Act, which would authorize the Federal Trade Commission (FTC) to create regulations requiring companies under its jurisdiction to assess the impact of new and existing automated decision systems. Another approach she has favored is called “Strategic Silence,” which seeks to deny untruthful users and groups the media exposure that amplifies their false claims and helps them attract new adherents.

Nkonde explores the hidden biases of the age of misinformation in this episode of Stanford Engineering’s The Future of Everything podcast, hosted by bioengineer Russ Altman, associate director of the Stanford Institute for Human-Centered Artificial Intelligence. Listen and subscribe here, or watch below.

 


 

Contributor(s): Stanford Engineering Staff