Some researchers argue that even the unintended biases of the people who create artificial intelligence can influence the outcomes of the applications they build.

The people who create and fund artificial intelligence research and development today do not yet reflect the global population that will live with AI, a mismatch that can have dangerous consequences for those who are not represented.

In 2016, fewer than 15% of tech positions were held by black or Hispanic people. Women comprise just 25% of computing roles and only 12% of machine learning researchers. Just 3% of venture capitalists are black; a mere 1% are Latinx.

At the Stanford Institute for Human-Centered Artificial Intelligence’s (HAI) 2019 Fall Conference on Ethics, Policy and Governance in late October, multiple speakers discussed some of the ways people from underrepresented groups are being impacted by biased data in AI.

Recognizing the Coded Gaze 

Joy Buolamwini, a research assistant at MIT’s Media Lab and founder of the Algorithmic Justice League, took the stage at HAI’s conference to explain how biometric technology can “amplify inequalities” and “be a reflection of systemic inequality.” She opened her talk by telling the audience about her own experience of being invisible to a facial recognition program unless she wore a white plastic mask. 

Buolamwini’s research showed multiple instances in which facial recognition software used by large tech companies struggled to identify women and people with dark skin tones. When she studied the facial recognition software made by American companies IBM and Microsoft, and Chinese company Face++, she saw their aggregate accuracy rates were 88%, 94% and 88%, respectively. However, those accuracy rates were not the same for all types of faces. Buolamwini discovered that white men were identified correctly 99% of the time, but as skin tones darkened, the accuracy of those systems decreased. Intersectional groups, such as women of color, fared the worst, with error rates of up to 35%. “Aggregate statistics mask important differences,” said Buolamwini. “No one subgroup can represent all of humanity.” 
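Her point about aggregate numbers is easy to reproduce. The sketch below uses entirely hypothetical counts, not the Gender Shades data, to show how a strong headline accuracy can coexist with a large error rate for one subgroup once results are disaggregated:

```python
from collections import defaultdict

# Hypothetical audit records: (subgroup, was_correct) pairs.
# These counts are illustrative only, not Gender Shades results.
records = (
    [("lighter-skinned men", True)] * 990
    + [("lighter-skinned men", False)] * 10
    + [("darker-skinned women", True)] * 70
    + [("darker-skinned women", False)] * 30
)

def accuracy(pairs):
    """Fraction of audit records the classifier got right."""
    return sum(1 for _, correct in pairs if correct) / len(pairs)

# Aggregate accuracy over the whole benchmark looks strong...
print(f"aggregate: {accuracy(records):.1%}")        # 96.4%

# ...but disaggregating by subgroup reveals the gap.
by_group = defaultdict(list)
for group, correct in records:
    by_group[group].append((group, correct))
for group, pairs in by_group.items():
    print(f"{group}: {accuracy(pairs):.1%}")        # 99.0% vs 70.0%
```

On this toy data the aggregate accuracy comes out near 96% even though the smaller subgroup is misclassified 30% of the time, which is exactly the pattern Buolamwini describes: the overall figure hides the groups the system fails most often.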

In her work, Buolamwini found that images of Michelle Obama and Oprah Winfrey — two well-known, oft-photographed black women — were frequently misidentified by AI. Serena Williams, another globally recognized black woman, was incorrectly identified as male 76% of the time. 

Buolamwini described how systems based on “supremely white data” will result in a “poor representation of the undersampled majority,” and “pale male perfection.” She calls this “The Coded Gaze” — when visual AI systems reflect “the priorities, preferences, and sometimes the prejudices, intended or otherwise, of those who have the power to shape technology.”

Wendy Chun, Research Chair in New Media at Simon Fraser University in Canada, is pursuing related questions. Her latest project, “Discriminating Data,” explores how algorithms “encode legacies of segregation, eugenics and multiculturalism.” She is studying the ways in which “identity categories such as race, gender and class mutate and persist through algorithms that are allegedly blind” to them, and argues that even when we try our best to avoid perpetuating such problems, “ignoring differences amplifies discrimination.”

The Danger of Being Seen Incorrectly

When AI is trusted to run surveillance, criminal justice and healthcare applications, hidden bias can increase the risk of negative impacts on underrepresented groups, such as women and people of color. 

Modern surveillance tools and other algorithm-based systems give governments and institutions unprecedented information about and control over individuals and groups of people, yet facial recognition systems built on machine intelligence often struggle with accuracy. Globally, 75 countries are actively using AI technologies for surveillance purposes. In the U.S., some 130 million adults have already been indexed in facial recognition networks, some of which are not audited and have false-positive match rates of 90% or more. A system in South Wales, UK, was wrong in 91% of its automated facial recognition matches, and the biometric data of 2,451 innocent people were captured and stored there without their knowledge. Private systems, used in conjunction with law enforcement, have sparked claims of miscarriages of justice. 
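The false-positive figures above describe the share of a system’s own matches that turn out to be wrong, not the share of all faces scanned. A minimal sketch, with purely illustrative counts rather than the South Wales audit data, shows how that rate is computed:

```python
# Hypothetical audit of a face recognition watchlist deployment.
# Counts are illustrative placeholders, not the South Wales figures.
total_alerts = 2700    # matches the system flagged against a watchlist
true_matches = 250     # alerts that actually pointed at the right person
false_matches = total_alerts - true_matches

# Share of flagged matches that were wrong (false-positive match rate).
false_match_rate = false_matches / total_alerts
print(f"false-positive match rate: {false_match_rate:.0%}")  # about 91%
```

Under these assumed counts, roughly nine out of ten alerts point at the wrong person, which is why unaudited deployments at this error level raise the concerns described above.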

For people of color, AI-powered healthcare applications can also be problematic. One study revealed that an algorithm used for as many as 70 million U.S. patients prioritized medical care for white patients over black patients. 

Reid Hoffman, HAI advisory council member, co-founder of LinkedIn and Partner at venture capital firm Greylock Partners, suggested AI creators examine the data sets that power AI applications. “The thing about AI is… if you have racial bias or other kinds of bias, you’re institutionalizing them, you’re locking them in, in a much more serious way that’s scalable and potentially not trackable,” he said. Hoffman recommended that engineers work with ethicists and humanists to determine “which questions need to be asked for us to have a sense of belief that we are moving in the right direction for having more justice in society for diverse groups.”


Intersectional AI

On a panel at the HAI conference entitled “Race, Rights and Facial Recognition,” Matt Cagle, Technology and Civil Liberties Attorney for ACLU of Northern California, warned against the dangers of combining off-the-shelf facial recognition technology with public data sets and body cameras worn by police officers. “That could allow for the establishment of more eyes watching us as we go down the street,” Cagle said. He also pointed out that the risk of such surveillance goes up when the cost of facial recognition and video recording technology goes down, and warned that “inaccurate systems will result in false arrests and the wrongful deaths of people in our society and communities.”

On day two of HAI’s conference, Ge Wang, associate professor in Stanford’s Department of Music, and Stephanie Dinkins, HAI’s 2019 Artist in Residence, talked about their work and demonstrated how multicultural, multidisciplinary approaches to AI can lead to new perspectives. 

Wang suggested we look at “each situation where we might apply AI” to see if “there might be a balance that could be discovered between automation and meaningful human interaction.” He suggested starting by expanding the circle of people in the conversation to include more voices from different disciplines.

Dinkins, a transmedia artist who creates platforms for dialog about AI as it intersects race, gender, aging, and our future histories, shared video and audio clips of her most recent work: N’TOO (Not The Only One), an AI storyteller designed to relay a family memoir and hold conversations with people who interact with it in art exhibits. 

N’TOO was inspired by what Dinkins says is “one of our most basic wants” — a desire to connect with our ancestors and interrogate the past. The evolving intellect of N’TOO relies on information drawn from the lived experiences of Black women in Dinkins’ family, spanning three generations from the Great Migration, when six million African American people fleeing Jim Crow moved north and west from the rural south beginning in 1916, to the present. 

Dinkins is on a mission to work with communities of color to co-create inclusive, fair and ethical AI ecosystems. Dinkins fed N’TOO a data diet of academic papers, books, TV shows, podcasts and radio programming consumed by her family members. N’TOO “has our ethos and our values within it, but it is growing into its own stories, too,” she says. “There’s got to be ways that we take community, culture, and values and place them into AI systems — and have them come out in a way that feels sovereign and supportive of the communities that they come from.”

Dinkins presents her project as evidence of the potential for artists and other creatives to “offer the thinking around humanity and what it means to be human and what we need in a broad and maybe even a more complex way.”
