Kathleen Creel: Examining Ethical Questions in AI

August 17, 2022

Stanford’s Embedded EthiCS fellow seeks to understand ethical questions posed by emerging technologies with an interdisciplinary approach.

It’s safe to say that Plato and his contemporaries never grappled with moral questions raised by the development of neural networks or issues surrounding data privacy and security. But a few modern philosophers – like Kathleen Creel – are doing just that as they harness age-old ideas about knowledge, existence, and ethics to understand and respond to the challenges posed by today’s technology. 

“I still get a lot from Plato and other historical philosophers, but the task of philosophy is to figure out what the questions of a particular age, of a particular society or culture are, and to ask how philosophy can help to address them,” Creel says. “It gives us a clearer moral system to help sort through what our priorities ought to be, and how we should act in our lives.”

Creel is finishing a two-year Embedded EthiCS Postdoctoral Fellowship based at Stanford’s McCoy Family Center for Ethics in Society and the Institute for Human-Centered Artificial Intelligence (HAI). The fellowship was an irresistible draw for Creel, who has long been attracted to the order and clarity of both philosophy and computer science.

“Beginning in college, I was drawn to computer science and philosophy, because they both use logic and analysis to reduce what philosophers sometimes call ‘the blooming, buzzing confusion’ of the world into something that’s manageable,” Creel says. “These two fields felt similar, because they both allow me to get to the heart of things; to learn truths that are both necessary and that relate to things in the world that I care about.”

Embedding Ethics into AI Research

After graduation, Creel briefly set philosophy aside, but soon found herself reconsidering its potential.

“After I graduated from college, I thought I was going to be a software engineer, and I did that for a while,” she says. “I was going to leave philosophy behind, but I missed it, and I also began to realize that the large-scale software projects I was working on presented interesting and important issues involving transparency, explainability, and trust. Philosophers have developed tools over millennia to think about these issues, and we can bring some of these established tools to AI, so computer scientists don’t have to reinvent them from scratch.”

As part of her fellowship, Creel teaches in and creates course material for Stanford’s Embedded EthiCS program, which is reworking computer science courses to fully integrate ethical considerations into the curriculum. She’s also pursuing her own research, collaborating with interdisciplinary scholars from HAI’s Center for Research on Foundation Models (CRFM) to study the negative impact of algorithmic monoculture on individuals. When the same machine learning model is used for high-stakes decisions in many settings, its strengths, biases, and idiosyncrasies can all be passed down to a wide range of subsequent applications. If the same person encounters similar models time after time, Creel says, that individual could be wrongly and repeatedly denied access to employment, loans, and other essential opportunities.

Watch: Picking on the Same Person: Does Algorithmic Monoculture Homogenize Outcomes?


“We’re asking how often individuals are affected by this, and how we can determine if systematic exclusion is happening in a particular system more than it would simply by chance,” Creel says. “If the risk of algorithmic monoculture is concentrated on the same people repeatedly, that’s deeply unfair. As an ethicist, I want to know how we can design systems that won’t arbitrarily exclude people.”
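To make that “more than it would simply by chance” comparison concrete, here is a minimal simulation sketch. It is an illustration written for this article, not the CRFM team’s actual methodology, and the applicant pool size, noise scale, and acceptance rate are all hypothetical parameters. It contrasts a monoculture world, where every decision-maker screens applicants with one shared noisy model, against a world where each decision-maker’s model makes independent errors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_applicants, n_deciders, accept_rate = 10_000, 5, 0.5

# Each applicant's underlying qualification.
true_score = rng.normal(size=n_applicants)

def rejected_everywhere(shared_model: bool) -> float:
    """Fraction of applicants rejected by all n_deciders decision-makers."""
    rejected = np.ones(n_applicants, dtype=bool)
    shared_noise = rng.normal(scale=0.5, size=n_applicants)  # one model's errors
    for _ in range(n_deciders):
        # Monoculture: every decider reuses the same model, so the errors repeat.
        # Otherwise: each decider's model makes its own independent errors.
        noise = shared_noise if shared_model else rng.normal(scale=0.5, size=n_applicants)
        score = true_score + noise
        cutoff = np.quantile(score, 1 - accept_rate)  # accept the top fraction
        rejected &= score < cutoff
    return rejected.mean()

print(f"rejected by all deciders, independent models: {rejected_everywhere(False):.3f}")
print(f"rejected by all deciders, shared model:       {rejected_everywhere(True):.3f}")
```

In the shared-model case the same half of the pool is shut out by every decision-maker; with independent models, being rejected everywhere is far rarer. That gap between observed and chance-level repeated exclusion is the kind of signal a systematic-exclusion test would look for.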

Leading an Interdisciplinary Life

Working at HAI is a rare opportunity to interact with colleagues from a wide range of fields, Creel says.

“I’m working right now with machine learning researchers, computer scientists, and a labor economist; that’s the kind of project HAI’s roof can shelter,” Creel says. “Everyone is respectful of the disciplinary expertise of their colleagues. But there are challenges; we’re all trained to ask different kinds of questions, and we have to figure out how to pursue a course of research that satisfies all our questions at the same time. Doing that, however, means that we end up with a broader and richer look at whatever we’re studying.” 

Creel hopes her research at HAI will help those in a range of fields identify algorithmic monoculture in their own data, and understand why it needs to be addressed. She’ll continue her focus on ethics in technology this fall when she begins a new position as an assistant professor at Northeastern University, where she will hold joint appointments in the College of Social Sciences and Humanities and Khoury College of Computer Sciences.

“I’m feeling lucky to be able to continue this interdisciplinary life, and to keep straddling these two fields that I love,” she says.

This article is part of the People of HAI series, which spotlights our community of scholars, faculty, students, and staff coming from different backgrounds and disciplines.

Stanford HAI's mission is to advance AI research, education, policy, and practice to improve the human condition.

Contributor: Beth Jensen

Related News

A New Economic World Order May Be Based on Sovereign AI and Midsized Nation Alliances
Alex Pentland | Feb 06, 2026
As trust in the old order erodes, mid-sized countries are building new agreements involving shared digital infrastructure and localized AI.

Smart Enough to Do Math, Dumb Enough to Fail: The Hunt for a Better AI Test
Andrew Myers | Feb 02, 2026
A Stanford HAI workshop brought together experts to develop new evaluation methods that assess AI's hidden capabilities, not just its test-taking performance.

What Davos Said About AI This Year
Shana Lynch | Jan 28, 2026
World leaders focused on ROI over hype this year, discussing sovereign AI, open ecosystems, and workplace change.