How to Get More Truth from Social Media | Stanford HAI
Date: February 04, 2021
Topics: Design, Human-Computer Interaction; Machine Learning; Communications, Media
Image: iStock/VectorFun

Sociologist and former journalist Mutale Nkonde warns that the AI behind much of today’s social media is inherently biased — but it’s not too late to do something about it.

The old maxim holds that a lie spreads much faster than a truth, but it has taken the global reach and lightning speed of social media to lay it bare before the world.

One problem of the age of misinformation, says sociologist and former journalist Mutale Nkonde, a fellow at the Stanford Center on Philanthropy and Civil Society (PACS), is that the artificial intelligence algorithms used to profile users and disseminate information to them, whether truthful or not, are inherently biased against minority groups, because those groups are underrepresented in the historical data on which the algorithms are trained.

Now, Nkonde and others like her are holding social media companies' feet to the fire to get them to root out bias from their algorithms. One approach she promotes is the Algorithmic Accountability Act, which would authorize the Federal Trade Commission (FTC) to create regulations requiring companies under its jurisdiction to assess the impact of new and existing automated decision systems. Another approach she has favored is called "strategic silence," which seeks to deny untruthful users and groups the media exposure that amplifies their false claims and helps them attract new adherents.

Nkonde explores the hidden biases of the age of misinformation in this episode of Stanford Engineering’s The Future of Everything podcast, hosted by bioengineer Russ Altman, associate director of the Stanford Institute for Human-Centered Artificial Intelligence. Listen and subscribe here, or watch below.

Contributor(s): Stanford Engineering Staff