Developing Better, Less-Biased Facial Recognition Technology
This summer the Stanford Institute for Human-Centered Artificial Intelligence hosted technologists, ethicists, and policymakers for a discussion of an increasingly contentious technology: facial recognition.
The workshop aimed to cut through the confusion about what the technology can and can't do, discuss ways to mitigate its potential for bias, and chart a path forward for policymakers and companies to regulate and use it in socially responsible ways.
HAI faculty leaders Fei-Fei Li, Daniel E. Ho, and Maneesh Agrawala led the event. Here they explain what led to this workshop, their most important learnings, and the issues policymakers and company leaders should consider. Learn more in the event’s white paper.
How did this workshop come about?
Li: We conceived of the workshop just as Clearview AI was making headlines with claims about its highly accurate facial recognition system. Having spent my career in computer vision, I wanted to provide some perspective on just how challenging such claims are. While the scope of the workshop was narrow, we included leading computer vision experts as well as a wide range of other voices from other academic disciplines, government, industry, and civil society.
What were the most interesting takeaways from this workshop?
Ho: It’s hard to distill the diverse perspectives expressed, but I think it is fair to say that many expressed concerns that the current landscape is something of a “Wild West.” Facial recognition technology is being adopted by banks, airlines, landlords, school principals, and, most controversially, law enforcement, with little to guide data quality, validation, performance, or the potential for serious bias and harm. We saw far more consensus around the problems than around the solutions.
What should policymakers take from this?
Ho: Facial recognition technology is one of the most contentious technologies of our age. Much of the debate has rightly centered on the profound privacy, speech, racial equity, and surveillance concerns, but most proposed legislation governing this technology also includes a requirement to test for operational performance. Our paper demonstrates what would be required to actually achieve that, and one perspective is that accuracy concerns alone may disqualify a range of current uses.
What do you hope is the output for industry?
Agrawala: Ultimately, what our paper may call for is a shift from an “off-the-shelf” product to a service model. It simply may not be possible to guarantee that facial recognition software is deployed in a fair and accurate fashion without much greater investment by a vendor in understanding the specific use case. This might mean that many applications in the Wild West of facial recognition technology would cease to exist.
Stanford HAI's mission is to advance AI research, education, policy and practice to improve the human condition. Learn more.