Artificial intelligence offers up the vision of a better society for us all, but despite years of effort, the field still hasn’t embraced the diversity it needs to deliver on those promises – and lacks the data critical to understanding and fixing the problem.
Those are among the takeaways from the 2021 AI Index, an independent project led by Stanford’s Institute for Human-Centered Artificial Intelligence (HAI). Each year the index tracks and organizes global data relating to artificial intelligence that’s used by policymakers, researchers, executives, journalists, and the general public in trying to make sense of this complex field.
An Unmoving Needle
In a field known for rapid innovation, little has changed in the makeup of the AI workforce in academia or industry over the past decade, according to the index. AI remains predominantly male and lacking in diversity when it comes to race, ethnicity, sexual orientation, and gender identity.
Some findings:
- Women account for less than 19%, on average, of all AI and computer science (CS) PhD graduates in North America over the past 10 years.
- Women make up only 16% of all tenure-track CS faculty, based on data from 17 of the 18 universities participating in the survey.
- In 2019, 45% of new U.S.-resident AI PhD graduates were white, while 2.4% were African American and 3.2% were Hispanic.
- A 2020 membership survey by Queer in AI found that almost half of respondents view the lack of inclusiveness in the field as an obstacle. More than 40% of members surveyed report having experienced discrimination or harassment as a queer person at school or work.
“It’s not necessarily surprising that the needle hasn’t moved, unfortunately, because that’s been the case for several years,” says Terah Lyons, a 2021 AI Index steering committee member and founding executive director of the Partnership on AI, a nonprofit initiative consisting of over 100 global organizations working to promote responsible AI development and deployment. “It’s clear that more attention needs to be paid, and things need to be done differently. In addition to other forms of evidence we have that representation in the field is not improving, these data serve as yet another call to action.”
Step One: Show the Data
AI’s diversity problem can’t be resolved without first addressing the scarcity of available data that could shed light on the issue, says Lyons.
“It’s really hard to source primary data from organizations that are fostering workforces, student bodies, and faculty communities, and that really need to be arm-in-arm on the front lines of this issue,” she says. “You can’t be accountable to changing a problem that you don’t really understand the dimensions of, and I think that’s exactly what we’re experiencing in regard to equity in the AI field right now.”
There are plenty of reasons for that lack of data. Many organizations, if they collect such data at all, hesitate to share it because it often reflects poorly on them or shows little progress. There's no central clearinghouse for this type of information, and it's only recently that information on AI began to be collected separately from the general field of computer science. Improving the situation without data is a near-impossible task, says Barbara Grosz, an index steering committee member and the Higgins Research Professor of Natural Sciences at Harvard University.
“Without data, universities, research funders, scientific societies, and industry can’t identify the actual problems well enough to know what needs fixing,” she says. “Experience shows that many good ideas on how to improve diversity fail to have the expected impact. It’s like an experimental science; you have to be able to try things, see if they work, adjust them and try again. And without data, you don’t know how to do that.”
Some Diversity Success
The index does reflect some positive trends. In 2020, women led men in gaining AI skills in India, South Korea, Singapore, and Australia. Groups including Women in Machine Learning, Black in AI, and Queer in AI are working successfully to boost their members’ participation in the field; as of 2020, the number of Black technologists attending major AI conferences globally has increased 40-fold.
Other groups – such as Stanford AI4ALL – reach out to underrepresented students in middle and high school to improve the tech talent pipeline. Yet others work at the university level, with some efforts focused on faculty recruitment and retention. There are also ongoing efforts to support diversity in industry.
“But the numbers are just not moving enough,” Lyons says.
What Will It Take?
Diversifying the AI workforce is critical to preventing the narrow perspectives and unintended biases that can taint the development and use of AI systems, which are becoming ubiquitous in fields ranging from finance and health care to law enforcement and the judicial system. Success will depend on vigilance, mentors who understand how to support those who look different from themselves, and – perhaps most important – committed leaders who see diversity as a business imperative rather than an afterthought, Grosz says. Organizations that succeed, she adds, will attract and retain the best people, produce the best solutions and products, and perform an important social service.
“AI systems need to work for everyone in society; that’s just an ethical value that’s important,” Grosz says. “They need to work for the disabled, for people of color, for people who come from other cultures, and for women as well as men. We know what we’ve experienced, but we don’t know many things that people from other groups experience, so you need to have all these people in the room. They need to be part of the design and part of the thinking.”
Stanford HAI's mission is to advance AI research, education, policy and practice to improve the human condition.