Danaë Metaxa: Algorithms Change How We Think About the World and Ourselves

Date: January 04, 2021
Topics: Machine Learning

The Stanford Ph.D. student and first-generation American examines algorithmic representation and its impact on our sense of belonging.

In this Students of AI series, we ask Stanford students what drew them to this field, their hopes and fears for the technology’s future, and what inspires them.

Meet Danaë Metaxa, Ph.D. Computer Science 2020:

I was born in the Boston area, and I’m a first-generation American. My parents are immigrants, and my first language was Greek, so from a pretty young age I was thinking about the way that different identity categories, like one’s national origin or gender, influence your path through life, your interests, and the things that seem thinkable or accessible to you. I’m also queer and identify as non-binary, so that’s another thing I thought about growing up: how that’s influenced my life.

As an undergrad at Brown University, I was interested in computer science but expected to steer away from it. Instead, I found that the problem-solving mindset, algorithmic thinking, and that way of looking at the world were really compelling to me. At the same time, I got a second major in science and technology studies with an emphasis on gender and technology, so everything was revolving around the ideas of diversity and representation, and belonging and bias, in technology. I was really fortunate to find human-computer interaction, which combines these different interests and encourages a critical lens on technology.

As an undergrad, I started thinking about the idea that something generally harmless, like the design of a computer science course page, might have an unconscious influence on those interacting with it. That’s exactly what I found during one of the first experiments I ran as a graduate student: something as simple as the aesthetics of an interface can have a negative effect on whether women feel they belong in a certain class, can succeed there, or are interested in taking computer science at all. I began thinking more about the content that people are exposed to, and the unconscious effects it might have.

I’ve been doing a lot of work over the past couple of years in an area called algorithm audits, essentially a method for studying algorithmic content. By repeatedly querying some algorithm, then monitoring the output and comparing it to other queries or other days, we can draw inferences about what kind of content that algorithm is serving, and why. Most recently we’re looking at the images Google Images shows a user searching for content about popular occupations, like pilots, engineers, and nurses, and what races and genders are represented in those images. We find that the search results underrepresent both women and people of color relative to their actual participation in those occupations, and also that women and people of color are more likely to feel alienated when people like them are underrepresented online. Algorithms matter, because the version of the world they portray changes not only how we think about the world, but also how we think about ourselves and our own potential. This is important, because our decisions now, from who we’re going to vote for, to what courses we’ll take, to where we’re going to dinner, are being made based on algorithmically mediated content, and as a byproduct of our interactions with that content and those algorithms. It’s really critical to understand what content we’re being exposed to and why, and what effect that all has on us at both the individual and social level.
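
As an illustration of the auditing idea described above (not the researchers’ actual code), here is a minimal Python sketch. It assumes a hypothetical query_images helper standing in for repeated search-engine queries, and it uses made-up baseline and skew numbers purely to show how observed representation in results can be compared against real-world participation.

from collections import Counter
import random

# Hypothetical baseline: share of women in each occupation (illustrative numbers only).
LABOR_FORCE_BASELINE = {"pilot": 0.08, "engineer": 0.16, "nurse": 0.87}

# Hypothetical skew of search results (illustrative numbers only).
SIMULATED_RESULT_SKEW = {"pilot": 0.03, "engineer": 0.10, "nurse": 0.92}

def query_images(occupation, n=50):
    """Stand-in for repeatedly querying an image search engine.

    A real audit would issue the query, collect the top-n image results,
    and repeat across days, accounts, and locations; here we simulate the
    perceived gender of the person pictured in each result.
    """
    random.seed(occupation)  # deterministic simulation per occupation
    skew = SIMULATED_RESULT_SKEW[occupation]
    return ["woman" if random.random() < skew else "man" for _ in range(n)]

def audit(occupations):
    """Compare representation in (simulated) results to a labor-force baseline."""
    for occ in occupations:
        counts = Counter(query_images(occ))
        observed = counts["woman"] / sum(counts.values())
        baseline = LABOR_FORCE_BASELINE[occ]
        print(f"{occ:>9}: women in results {observed:.0%}, "
              f"in labor force {baseline:.0%}, gap {observed - baseline:+.0%}")

if __name__ == "__main__":
    audit(["pilot", "engineer", "nurse"])

A real audit would replace the stub with actual image queries repeated over time and would label the collected images with human annotators rather than simulated values, but the comparison logic, observed representation versus a real-world baseline, is the core of the method.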

My work feels more motivating, more relevant, and more important in today’s context, when all of a sudden all of society is becoming attuned to issues of representation and inequality. It’s really rewarding and powerful to be doing work that I know directly affects people and can speak to their experiences every day. It doesn’t feel like I’m in some ivory tower working on things that may never see the light of day. It feels like the work is applied, and it’s important right now.

— Story as told to Beth Jensen.

 

Authors
  • Danaë Metaxa
