Image: Joy Buolamwini's face reflected in her computer screen.

Joy Buolamwini, one of the central characters in the documentary Coded Bias, exposes bias in facial recognition algorithms and fights to gain attention for the serious harm it can cause. 

Coded Bias, a documentary film directed by Shalini Kantayya, opens with MIT graduate student Joy Buolamwini’s startling discovery that facial recognition software cannot see Black faces such as hers. The filmmaker then follows Buolamwini’s journey to sound the alarm about bias in algorithms, interweaving her progress with interviews of experts familiar with algorithms’ potential to cause harm as well as ordinary people impacted by their use. By the end of the film, Buolamwini has launched an advocacy group called the Algorithmic Justice League and testified before Congress, and the viewer can’t help but cheer her on. 

Coded Bias premiered at the Sundance Film Festival in February. On September 30, the Stanford Institute for Human-Centered Artificial Intelligence (HAI) invited Kantayya for a panel discussion with HAI co-director and computer vision expert Fei-Fei Li and HAI associate director and professor of English Michele Elam. Here, Kantayya answers questions about what she learned from making the film, and her hopes for its impact on viewers.

What did you learn from making Coded Bias?

We tend to give technology this sort of ultimate authority over human decision making. If a person says something and a machine says something, we tend to believe the machine. And what I learned is that oftentimes this sort of big magic that we trust so blindly has not been properly vetted for accuracy, for racial and gender bias, or for the unintended harms it can cause. What is incredible to me is that although these algorithms haven’t gone through regulatory hoops and been approved to be deployed, they are already deciding who gets hired, who gets health care, or how long a prison sentence someone serves. And so I learned that we need legislation in place so that we have a framework to make sure that Big Tech, which wields so much power, encodes our democratic ideals.

The main protagonist in Coded Bias is Joy Buolamwini, a Black American woman who discovers that facial recognition software can’t see her face. It’s a revelation to her, and to many audience members, I would imagine. Was that your experience?

Yes, for sure. And what’s kind of crazy is that even while making the film, I had the experience of standing next to Joy and having software see my face and not hers. It felt really dehumanizing.

You come to realize more and more that these algorithms are just a reflection of our history. And just as we need conscious checks on our own biases, facial recognition software needs that as well. As human beings, we see more distinction in faces of people of our own race, and our empathy is also tribal. In a civilized society we put checks on that and encourage behaviors that build empathy with people radically different from us. To me, that’s the beauty of film – that it can help us build empathy with people who have radically different experiences than we do. 

Another prominent voice in the film is Cathy O’Neil, author of the book Weapons of Math Destruction. She talks about the asymmetry of power between the tech companies that deploy algorithms and the people impacted by them. What is the solution to that power imbalance and lack of accountability?

Laws. In Coded Bias we explore three different national approaches to data protection. In China, the government has authoritarian and unfettered access to data. In the UK and Europe, the GDPR [General Data Protection Regulation] provides a framework to situate data rights as human rights, though it needs enforcement. And in the States, we live in the wild, wild west. We are home to these tech companies and yet don’t have meaningful regulations. Arguably there are more laws that govern my behavior as an independent filmmaker trying to get broadcast on PBS than govern Facebook, where a billion people go for their information and political speech. So we really have to start to crack down and say that when companies like Facebook behave in ways that are not democratic, when they refuse to impose accuracy and truth standards on political speech in the middle of close elections, that is unforgivable and shouldn’t be allowed to happen. There need to be guidelines around truth and transparency and laws that balance the power that Big Tech has.

Paraphrasing from [historian and author] Yuval Harari, whoever owns the data is the most powerful entity and has the most power to impact human destinies. Shifting the power has to start with sharing the knowledge that is now concentrated at places like MIT and Stanford. This film is trying to do that in an entertaining and palpable way, to help people grasp complex concepts. And I hope the film connects the science to why it matters – in the communities it matters most to.

In the film, the sociologist Zeynep Tufekci describes a Facebook experiment that showed two different versions of a get-out-the-vote ad to millions of people. Those who saw a version that included thumbnails of friends who had voted turned out in greater numbers: approximately 300,000 more people voted. Tufekci concludes that “With a very light touch, Facebook can swing an election.” Do you find this terrifying?

The Facebook experiment is troubling because it disrupts our notion of free will. Increasingly, we are having this invisible hand defining who we are. An algorithm says to us, “I think the next movie you would like is this; the next book in your queue is this; and your search engine suggests you buy this.” It curates your past data, who it thinks you are and how much money it assesses you make, and then makes predictions about you. It’s this invisible hand of power that I’m concerned about. And I’m seeing with the making of this film that even people with the best of intentions can do unintended harm that really impacts people’s lives. So it’s more important than ever that we have more voices in the room, we have more checks and balances in place.

In China, to get access to the internet, citizens have to submit to facial recognition. Here, we use our faces to open our phones. Are we just one step behind China?

Absolutely. Companies like Clearview AI are scraping the internet for our faces. This is our personal biometric data and we have no rights to it. This has to be a place where we draw the line. 

I do think we have to think about how it’s changing our society. China has a social credit score, which is basically algorithmic obedience training where the Communist government sort of gives you a score, and even your friends’ scores can affect yours. But are we in the U.S. doing anything different when we trust someone because they have more Twitter followers or Facebook likes? How are we lending power or credibility to someone? What’s popular isn’t always what’s good. And algorithms are curating what’s popular, but we have to curate more carefully what’s good. 

Do you think the American public will wake up to the problems described in Coded Bias and start to hold tech companies and governments accountable for them?

I think it’s already happening. There is this sea change where Amazon said in June that it would put a one-year pause on police use of its facial recognition technology, and IBM and Microsoft said they would stop the sale of facial recognition software to police. That’s a sea change that I never thought possible when I began making this film, and it was brought about by two things: One is the badass research by the brave women in my film – Joy Buolamwini, Timnit Gebru, and Deborah Raji – and others like them who proved that this technology is racially biased. And the other is the people who took to the streets and made the connection between the value of Black lives and the necessity for a moratorium on facial recognition.

Do we need an FDA for tech, as Cathy O’Neil says in the film?

I do agree with Cathy O’Neil on that. These technologies are all being deployed so rapidly and could do so much harm on such a massive scale. Facial recognition is the most overt example of how algorithms could erode civil rights in a big way. But algorithms are also propagating bias in ways that we don’t see and affecting things that we’ve fought for like fair housing and fair hiring, and other things we value in a civilized society. We need to understand these algorithms and rein them in. 

Having an FDA doesn’t mean I don’t like food. It means I want to make sure the food is safe, that it doesn’t hurt people unintentionally, that we have certain standards of quality that my country is setting for my health and safety and that we can count on as citizens in a democracy. We shouldn’t have to check the terms and conditions of these tech platforms to make sure they don’t violate our civil rights, especially given that we’re increasingly required to use them to participate in society. An FDA for tech could take care of that for us. 
