
Kathleen Creel: Examining Ethical Questions in AI

Stanford’s Embedded EthiCS fellow seeks to understand ethical questions posed by emerging technologies with an interdisciplinary approach.


Kathleen Creel is finishing a postdoctoral fellowship, where she teaches in and creates course material for a program designed to embed ethical considerations into the computer science curriculum. | Christine Baker

It’s safe to say that Plato and his contemporaries never grappled with moral questions raised by the development of neural networks or issues surrounding data privacy and security. But a few modern philosophers – like Kathleen Creel – are doing just that as they harness age-old ideas about knowledge, existence, and ethics to understand and respond to the challenges posed by today’s technology. 

“I still get a lot from Plato and other historical philosophers, but the task of philosophy is to figure out what the questions of a particular age, of a particular society or culture are, and to ask how philosophy can help to address them,” Creel says. “It gives us a clearer moral system to help sort through what our priorities ought to be, and how we should act in our lives.”

Creel is finishing a two-year Embedded EthiCS Postdoctoral Fellowship based at Stanford’s McCoy Family Center for Ethics in Society and the Institute for Human-Centered Artificial Intelligence (HAI). The fellowship was an irresistible draw to Creel, who has long been attracted to the order and clarity of each field.

“Beginning in college, I was drawn to computer science and philosophy, because they both use logic and analysis to reduce what philosophers sometimes call ‘the blooming, buzzing confusion’ of the world into something that’s manageable,” Creel says. “These two fields felt similar, because they both allow me to get to the heart of things; to learn truths that are both necessary and that relate to things in the world that I care about.”

Embedding Ethics into AI Research

After graduation, Creel briefly set philosophy aside, but soon found herself reconsidering its potential.

“After I graduated from college, I thought I was going to be a software engineer, and I did that for a while,” she says. “I was going to leave philosophy behind, but I missed it, and I also began to realize that the large-scale software projects I was working on presented interesting and important issues involving transparency, explainability, and trust. Philosophers have developed tools over millennia to think about these issues, and we can bring some of these established tools to AI, so computer scientists don’t have to reinvent them from scratch.”

As part of her fellowship, Creel teaches in and creates course material for Stanford’s Embedded EthiCS program, which is reworking computer science courses to fully integrate ethical considerations into each curriculum. She’s also pursuing her own research, collaborating with interdisciplinary scholars from HAI’s Center for Research on Foundation Models (CRFM) to study the negative impact of algorithmic monoculture on individuals. When the same machine learning model is used for high-stakes decisions in many settings, its strengths, biases, and idiosyncrasies can all be passed down to a wide range of subsequent applications. If the same person encounters similar models time after time, Creel says, that individual could be wrongly and repeatedly denied access to employment, loans, and other essential opportunities.

Watch: Picking on the Same Person: Does Algorithmic Monoculture Homogenize Outcomes?


“We’re asking how often individuals are affected by this, and how we can determine if systematic exclusion is happening in a particular system more than it would simply by chance,” Creel says. “If the risk of algorithmic monoculture is concentrated on the same people repeatedly, that’s deeply unfair. As an ethicist, I want to know how we can design systems that won’t arbitrarily exclude people.”


Kathleen Creel works with scholars from a wide range of fields to study the negative impact of algorithmic monoculture on individuals. | Christine Baker

Leading an Interdisciplinary Life

Working at HAI is a rare opportunity to interact with colleagues from a wide range of fields, Creel says.

“I’m working right now with machine learning researchers, computer scientists, and a labor economist; that’s the kind of project HAI’s roof can shelter,” Creel says. “Everyone is respectful of the disciplinary expertise of their colleagues. But there are challenges; we’re all trained to ask different kinds of questions, and we have to figure out how to pursue a course of research that satisfies all our questions at the same time. Doing that, however, means that we end up with a broader and richer look at whatever we’re studying.” 

Creel hopes her research at HAI will help those in a range of fields identify algorithmic monoculture in their own data, and understand why it needs to be addressed. She’ll continue her focus on ethics in technology this fall when she begins a new position as an assistant professor at Northeastern University, where she will hold joint appointments in the College of Social Sciences and Humanities and Khoury College of Computer Sciences.

“I’m feeling lucky to be able to continue this interdisciplinary life, and to keep straddling these two fields that I love,” she says.

This article is part of the People of HAI series, which spotlights our community of scholars, faculty, students, and staff coming from different backgrounds and disciplines.

Stanford HAI's mission is to advance AI research, education, policy, and practice to improve the human condition. Learn more.
