Londa Schiebinger: Inclusive Design Will Help Create AI That Works For Everyone

The international expert on gender in science and technology discusses tools she developed to help professionals build more inclusive AI.

A robot assistant at the MTR Tuen Ma Line To Kwa Wan Station in Kowloon, Hong Kong. When we design assistant robots, embedding gender cues can improve how well humans listen to them, but those cues can also reinforce gender stereotypes. | iStock/winhorse

A few years ago, a New Jersey man was arrested for shoplifting and spent 10 days in jail. He was actually 30 miles away at the time of the incident; police facial recognition software had wrongly identified him.

Facial recognition’s race and gender failings are well known. Often trained on data sets of primarily white men, the technology fails to recognize other demographics as accurately. This is only one example of design that excludes certain demographics. Consider virtual assistants that don't understand local dialects, robotic humanoids that reinforce gender stereotypes, or medical tools that don’t work as well on darker skin tones.

Londa Schiebinger, the John L. Hinds Professor in the History of Science at Stanford University, is the founding director of Gendered Innovations in Science, Health & Medicine, Engineering, and Environment and is part of the teaching team for Innovations in Inclusive Design, a course offered by Stanford’s d.school. The course asks students to analyze technologies for inclusivity, consider intersectional social factors in design, and develop and prototype their own ideas with these principles in mind. 

In this interview, Schiebinger discusses the importance of inclusive design in AI, the tools she developed to help achieve inclusive design, and her recommendations for making inclusive design a part of the product development process. 

Your course – Innovations in Inclusive Design – explores a variety of concepts and principles in inclusive design. What does the term inclusive design mean?


It’s design that works for everyone across all of society. If inclusive design is the goal, then intersectional tools are what get you there. We developed intersectional design cards that cover a variety of social factors like sexuality, geographic location, race and ethnicity, and socioeconomic status (the cards won notable distinction at the 2022 Core77 Design Awards). These are factors where we see social inequalities show up, especially in the U.S. and Western Europe. These cards help design teams see which populations they might not have considered, so they don’t design for an abstract, nonexistent person. The social factors in our cards are by no means an exhaustive list, so we also include blank cards and invite people to create their own factors. The goal in inclusive design is to get away from designing for the default, mid-sized male, and to consider the full range of users.

Why is inclusive design important to product development in AI? What are the risks of developing AI technologies that are not inclusive? 

If you don’t have inclusive design, you’re going to reaffirm, amplify, and harden unconscious biases. Take nursing robots as an example. The nursing robot’s goal is to get patients to comply with health care instructions, whether that’s doing exercises or taking medication. Human-robot interaction research shows us that people interact more with robots that are humanoid, and we also know that 90% of nurses are women in real life. Does this mean we get better patient compliance if we feminize nursing robots? Perhaps, but if you do that, you also harden the stereotype that nursing is a woman’s profession, and you close out the men who are interested in nursing. One interesting idea promotes robot neutrality, where you don’t anthropomorphize the robot and you keep it out of human space. But does this reduce patient compliance?

Essentially, we want designers to think about the social norms that are involved in human relations and to question those norms. Doing so will help them create products that embody a new configuration of social norms, engendering what I like to call a virtuous circle – a process of cultural change that is more equitable, sustainable, and inclusive. 

What technology product does a poor job of being inclusive?

The pulse oximeter, which was developed in 1972, was so important during the early days of COVID as the first line of defense in emergency rooms. But we learned in 1989 that it doesn’t give accurate oxygen saturation readings for people with darker skin. If the device doesn’t show a patient desaturating to 88%, that patient may not get the life-saving oxygen they need. And even if they do get supplemental oxygen, insurance companies don’t pay unless the reading crosses a certain threshold. We’ve known about this product failure for decades, but it somehow didn’t become a priority to fix. I’m hoping that the experience of the pandemic will prioritize this important fix, because the lack of inclusivity in the technology is causing failures in health care.

We’ve also used virtual assistants as a key example in our class for several years now, because voice assistants that default to a female persona are subjected to harassment, and because they reinforce the stereotype that assistants are female. There’s also a huge challenge with voice assistants misunderstanding African American vernacular or people who speak English with an accent. To be more inclusive, voice assistants need to work for people with different educational backgrounds, from different parts of the country, and from different cultures.

What’s an example of an AI product with great, inclusive design?

The positive example I like to give is facial recognition. Computer scientists Joy Buolamwini and Timnit Gebru wrote a paper called “Gender Shades,” in which they found that women’s faces were not recognized as well as men’s faces, and darker-skinned people were not recognized as easily as those with lighter skin. But then they did the intersectional analysis and found that Black women were misclassified as much as 35% of the time. Using what I call “intersectional innovation,” they created a new data set using parliamentary members from Africa and Europe and built an excellent, more inclusive database for Blacks, whites, men, and women. But we notice that there is still room for improvement; the database could be expanded to include Asians, Indigenous people of the Americas and Australia, and possibly nonbinary or transgender people.
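To make the idea of intersectional analysis concrete, here is a minimal sketch in Python of the kind of disaggregated audit “Gender Shades” performed: instead of reporting one aggregate accuracy number, results are broken out by every combination of subgroups. The records and group labels below are made up for illustration; they are not figures from the study.

```python
# Minimal sketch of an intersectional accuracy audit: tally prediction
# results per (skin_tone, gender) subgroup rather than in aggregate.
from collections import defaultdict

# Each record: (skin_tone, gender, prediction_correct) -- toy data only.
results = [
    ("lighter", "male", True), ("lighter", "female", True),
    ("darker", "male", True), ("darker", "female", False),
    ("darker", "female", True), ("lighter", "male", True),
]

tally = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for skin_tone, gender, correct in results:
    tally[(skin_tone, gender)][0] += int(correct)
    tally[(skin_tone, gender)][1] += 1

# Aggregate accuracy can look fine while one subgroup fails badly.
for group, (correct, total) in sorted(tally.items()):
    print(f"{group}: accuracy {correct / total:.0%} (n={total})")
```

Even on this toy data, the per-group breakdown surfaces a failure (darker-skinned women) that a single overall accuracy figure would hide, which is exactly the point of the intersectional approach.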

For inclusive design, we have to be able to manipulate the database. If you’re doing natural language processing and using the corpus of the English language found online, then you’re going to get the biases that humans have put into that data. There are databases we can control and make work for everybody, but for databases we can’t control, we need other tools so the algorithm does not return biased results.
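One family of such tools audits what a model has absorbed from an uncontrolled corpus. As a rough sketch, one can measure how strongly occupation words align with a gender direction in a word-embedding space. The vectors below are made up for illustration; a real audit would load embeddings trained on the corpus in question (e.g., word2vec or GloVe).

```python
# Toy audit of gender bias in word embeddings: project occupation words
# onto a "he minus she" direction. Vectors are hypothetical 3-d examples.
import numpy as np

embeddings = {
    "he":     np.array([ 1.0, 0.1, 0.0]),
    "she":    np.array([-1.0, 0.1, 0.0]),
    "nurse":  np.array([-0.7, 0.5, 0.2]),
    "doctor": np.array([ 0.6, 0.5, 0.2]),
}

def cosine(a, b):
    # Cosine similarity between two vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

gender_direction = embeddings["he"] - embeddings["she"]
for word in ("nurse", "doctor"):
    score = cosine(embeddings[word], gender_direction)
    # Positive scores lean toward "he", negative toward "she".
    print(f"{word}: gender alignment {score:+.2f}")
```

In embeddings trained on web text, audits of this kind have found occupation words skewing along gendered directions, which is the sort of biased result Schiebinger says we need tools to detect and correct when we can’t control the underlying data.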

In your course, students are first introduced to inclusive design principles before being tasked with designing and prototyping their own inclusive technologies. What are some of the interesting prototypes in the area of AI that you’ve seen come out of your class? 

During our social robots unit, a group of students created a robot called ReCyclops that solves two problems: 1) people not knowing which plastics should go into each recycling bin, and 2) the unpleasant labor of workers who sort through the recycling to determine what is acceptable.

ReCyclops can read the label on an item or listen to a user’s voice input to determine which bin the item goes into. The robots are placed in geographically logical and accessible locations – attaching to existing waste containers – in order to serve all users within a community. 
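The interview doesn’t detail how ReCyclops is implemented, but the core routing step it describes can be sketched as a simple lookup from the text the robot reads (or hears) to a local bin. The resin-code mapping below is hypothetical; real acceptance rules vary by municipality, which is part of why siting the robots within each community matters.

```python
# Sketch of the label-to-bin routing idea behind a robot like ReCyclops.
# The mapping is hypothetical: a deployed robot would carry the rules of
# its own municipality's recycling program.
RESIN_CODE_TO_BIN = {
    "1": "recycling (PET)",
    "2": "recycling (HDPE)",
    "3": "landfill",        # PVC is rarely accepted curbside
    "5": "recycling (PP)",
    "6": "landfill",        # polystyrene
}

def route_item(label_text: str) -> str:
    """Return the bin for an item, given text read from its label
    (via OCR) or transcribed from a user's voice input."""
    text = label_text.lower()
    for code, bin_name in RESIN_CODE_TO_BIN.items():
        if f"#{code}" in text or f"plastic {code}" in text:
            return bin_name
    return "ask staff"  # unknown items fall back to human review

print(route_item("Bottle, plastic #1 PET"))  # -> recycling (PET)
```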

How would you recommend that professional AI designers and developers consider inclusive design factors throughout the product development process?

I think we should first do a sustainability life cycle assessment to ensure that the computing power required isn’t contributing to climate change. Next, we need to do a social life cycle assessment that scrutinizes working conditions for people in the supply chain. And finally, we need an inclusive life cycle assessment to make sure the product works for everyone. If we slow down and don’t break things, we can accomplish this. 

With these assessments, we can use intersectional design to create inclusive technologies that enhance social equity and environmental sustainability.

Innovations in Inclusive Design is taught by Londa Schiebinger, the John L. Hinds Professor in the History of Science in Stanford’s School of Humanities and Sciences; Ann Grimes, director of journalism fellowships in the Starling Lab for Data Integrity at Stanford and the University of Southern California; design researcher and d.school teaching fellow Dr. Hannah Jones; and Andrea Small, creative concept lead, storytelling and design thinking at Samsung Research America. 


 
