Developing Better, Less-Biased Facial Recognition Technology

Date: November 9, 2020
Topics: Machine Learning
Kevin Frayer / Getty Images

Researchers discuss ways to regulate and improve this highly contentious technology during an HAI workshop.

This summer the Stanford Institute for Human-Centered Artificial Intelligence hosted technologists, ethicists, and policymakers for a discussion of an increasingly problematic technology: facial recognition.

The workshop aimed to cut through the confusion about what the technology can and can't do, discuss ways to mitigate its potential for bias, and chart a path for policymakers and companies to regulate and use it in socially responsible ways.

HAI faculty leaders Fei-Fei Li, Daniel E. Ho, and Maneesh Agrawala led the event. Here they explain what led to the workshop, their most important takeaways, and the issues policymakers and company leaders should consider. Learn more in the event's white paper.

How did this workshop come about? 

Li: We conceived of the workshop right as Clearview AI was making headlines about its highly accurate facial recognition system. Having spent my career in computer vision, I wanted to provide some perspective on how challenging such claims truly are. While the scope of the workshop was narrow, we included leading computer vision experts as well as a wide range of other voices from other academic disciplines, government, industry, and civil society.

What were the most interesting takeaways from this workshop?

Ho: It’s hard to distill the diverse perspectives expressed, but I think it is fair to say that many expressed concerns that the current landscape is sort of the “Wild West.” Facial recognition technology is being adopted by banks, airlines, landlords, school principals, and, most controversially, law enforcement, without much guidance on data quality, validation, performance, or the potential for serious bias and harm. We saw far more consensus around the problems than around solutions.

What should policymakers take from this?

Ho: Facial recognition technology is one of the most contentious forms of technology of our age. Much of the debate has rightly surrounded the profound privacy, speech, racial equity, and surveillance concerns, but most proposed legislation of this technology also includes a requirement to test for operational performance. Our paper demonstrates what would be required to actually achieve that, and one perspective is that accuracy alone may disqualify a range of current uses.

What do you hope is the output for industry? 

Agrawala: Ultimately, what our paper may call for is a shift from an “off-the-shelf” product model to a service model. It simply may not be possible to guarantee that facial recognition software is deployed in a fair and accurate fashion without much more investment by a vendor to understand the specific use case. This might mean that many applications in the Wild West of facial recognition technology would cease to exist.

Read the full white paper.


Contributor(s): HAI staff