The Stanford Ph.D. student and first-generation American examines algorithmic representation and its impact on our sense of belonging.
In this Students of AI series, we ask Stanford students what drew them to this field, their hopes and fears for the technology’s future, and what inspires them.
Meet Danaë Metaxa, Ph.D. Computer Science 2020:
I was born in the Boston area, and I'm a first-generation American. My parents are immigrants, and my first language was Greek, so from a pretty young age I was thinking about the way that different identity categories, like one's national origin or gender, influence your path through life, your interests, and the things that seem thinkable or accessible to you. I'm also queer and identify as non-binary, so that's another thing I thought about growing up: how that's influenced my life.

As an undergrad at Brown University, I was interested in computer science but expected to steer away from it. Instead, I found that the problem-solving mindset, algorithmic thinking, and that way of looking at the world was really compelling to me. At the same time, I got a second major in science and technology studies with an emphasis on gender and technology, so everything was revolving around the ideas of diversity and representation, and belonging and bias, in technology. I was really fortunate to find human-computer interaction, which combines these different interests and encourages a critical lens on technology.

As an undergrad, I started thinking about the idea that something seemingly harmless, like the design of a computer science course page, might have an unconscious influence on those interacting with it. That's exactly what I found during one of the first experiments I ran as a graduate student: something as simple as the aesthetics of an interface can have a negative effect on whether women feel they belong in a certain class, can succeed there, or are interested in taking computer science at all. I began thinking more about the content that people are exposed to, and the unconscious effects it might have.
I’ve been doing a lot of work over the past couple of years in an area called algorithm audits, which is essentially a method for studying algorithmic content. By repeatedly querying some algorithm, then monitoring the output and comparing it across queries and across days, we can draw inferences about what kind of content that algorithm is serving, and why. Most recently we’re looking at the images Google Images shows a user searching for popular occupations, like pilots, engineers, and nurses, and what races and genders are represented in those images. We find that the search results underrepresent both the women and the people of color who actually participate in those occupations, and also that women and people of color are more likely to feel alienated when people like them are underrepresented online. Algorithms matter, because the version of the world they portray changes not only how we think about the world, but also how we think about ourselves and our own potential. This is important, because our decisions now, from who we’re going to vote for, to what courses we’ll take, to where we’re going to dinner, are being made based on algorithmically mediated content, and as a byproduct of our interactions with that content and those algorithms. It’s really critical to understand what content we’re being exposed to and why, and what effect that all has on us at both the individual and social level.
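The audit method described above, repeatedly querying a system and tallying what its results represent, can be sketched in a few lines. This is a minimal illustration, not the author's actual study code: `fake_image_search` is a hypothetical stand-in for a real search engine (a real audit would query the live system over many days), and the demographic labels are invented for the example.

```python
from collections import Counter

def fake_image_search(occupation):
    """Hypothetical stand-in for a search engine's image results.

    A real audit would repeatedly query the live system and label the
    returned images; here we return canned (gender-label) lists so the
    sketch is self-contained.
    """
    canned = {
        "pilot": ["man"] * 9 + ["woman"] * 1,
        "nurse": ["woman"] * 9 + ["man"] * 1,
    }
    return canned[occupation]

def audit_representation(search_fn, occupations, runs=3):
    """Query search_fn repeatedly (e.g., on different days) and tally
    how often each demographic label appears per occupation."""
    tallies = {occ: Counter() for occ in occupations}
    for _ in range(runs):
        for occ in occupations:
            tallies[occ].update(search_fn(occ))
    return tallies

def representation_share(tally, group):
    """Fraction of results showing the given group, for comparison
    against real-world participation rates in that occupation."""
    total = sum(tally.values())
    return tally[group] / total if total else 0.0
```

In an actual audit, the shares computed here would be compared against ground-truth occupational demographics (for example, labor statistics) to quantify how far the algorithm's portrayal diverges from reality.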
My work feels more motivating, more relevant, and more important in today's context, when all of a sudden all of society is becoming attuned to issues of representation and inequality. It’s really rewarding and powerful to be doing work that I know directly affects people, and can speak to their experiences every day. It doesn’t feel like I’m in some ivory tower working on things that may never see the light of day. It feels like the work is applied, and it’s important right now.
— Story as told to Beth Jensen.