Walking the Walk of AI Ethics in Technology Companies | Stanford HAI

Policy Brief


Date: December 07, 2023
Topics: Ethics, Equity, Inclusion; Industry, Innovation
Abstract

This brief presents one of the first empirical investigations into AI ethics on the ground in private technology companies.

Key Takeaways

  • Technology companies often “talk the talk” of AI ethics without fully “walking the walk.” Many companies have released AI principles, but relatively few have institutionalized meaningful change.

  • We interviewed 25 AI ethics practitioners and found that there are significant roadblocks to implementing companies’ stated goals regarding AI ethics.

  • AI ethics and fairness considerations are championed by individuals who lack institutional support; they are rarely made a priority in product development cycles, are disincentivized by performance metrics, and are disrupted by the frequent reorganization of teams.

  • Government regulation could play a crucial role in helping the AI ethics field move toward formalization by incentivizing leaders to prioritize ethical issues and protecting AI ethics workers.

Executive Summary

The field of AI ethics has grown rapidly in industry and academia, in large part due to the “techlash” brought about by technology industry scandals such as Cambridge Analytica and growing congressional attention to technology giants’ data privacy and other internal practices. In recent years, technology companies have published AI principles, hired social scientists to conduct research and compliance work, and employed engineers to develop technical solutions related to AI ethics and fairness. Despite these new initiatives, many private companies have not yet prioritized the adoption of accountability mechanisms and ethical safeguards in the development of AI. Companies often “talk the talk” of AI ethics but rarely “walk the walk” by adequately resourcing and empowering the teams that work on responsible AI.

In our paper, “Walking the Walk of AI Ethics,” we present one of the first empirical investigations into AI ethics on the ground in a (thus far) fairly unregulated environment within the technology sector. Our interviews with AI ethics workers in the private sector uncovered several significant obstacles to implementing AI ethics initiatives. Practitioners struggle to get their companies to foreground ethics in an environment centered on software product launches. Ethics are difficult to quantify and easy to de-prioritize in a context where company goals are driven by metrics. And the frequent reorganization of teams at technology companies makes it challenging for AI ethics workers to access institutional knowledge and maintain the relationships central to their work.

Our research highlights the stark gap between company policy and practice when it comes to AI ethics. It captures the difficulties of institutionalizing change within technology companies and illustrates the important role of regulation in incentivizing companies to make AI ethics initiatives a priority.

Introduction

Previous research has criticized corporate AI ethics principles for being toothless and vague, while questioning some of their underlying assumptions. However, relatively few studies have examined the implementation of AI ethics initiatives on the ground, let alone the organizational dynamics that contribute to the lack of progress.

Our paper builds on existing research by drawing on theories of organizational change to shed light on how AI ethics workers operate within technology companies. In response to outside pressure, such as regulation and public backlash, many organizations develop policies and practices to gain legitimacy; however, these measures often fail to achieve their intended outcomes because of a disconnect between means and ends. New practices may also conflict with the organization’s established rules and procedures.

AI ethics initiatives suffer from the same dynamic: Many technology companies have released AI principles, but relatively few have made significant adjustments to their operations as a result. With little buy-in from senior leadership, AI ethics workers take on the responsibility of organizational change by using persuasive strategies and diplomatic skills to convince engineers and product managers to incorporate ethical considerations in product development. Technology companies also seek to move quickly and release products regularly to generate investment and to outpace competitors, meaning that products are often released despite ethical concerns. Responsible AI teams may be siloed within large organizations, preventing their work from becoming integral to the core tasks of the organization.

To better understand the concrete organizational barriers to the implementation of AI ethics initiatives, we conducted a qualitative study of responsible AI initiatives within technology companies. We interviewed 25 AI ethics practitioners, including employees, academics, and consultants—many of whom are currently or were formerly employed as part of technology companies’ responsible AI initiatives—in addition to gathering observations from industry workshops and training programs. Our resulting analysis provides insight into the significant structural risks workers face when they advocate for ethical AI as well as the hurdles they encounter when incorporating AI ethics into product development.

This work was funded in part by a seed research grant from the Stanford Institute for Human-Centered Artificial Intelligence.

Authors
  • Sanna J. Ali
  • Angèle Christin
  • Andrew Smart
  • Riitta Katila

Related Publications

Moving Beyond the Term "Global South" in AI Ethics and Policy
Evani Radiya-Dixit, Angèle Christin
Issue Brief | Quick Read | Nov 19, 2025
Topics: Ethics, Equity, Inclusion; International Affairs, International Security, International Development

This brief examines the limitations of the term "Global South" in AI ethics and policy, and highlights the importance of grounding such work in specific regions and power structures.

Increasing Fairness in Medicare Payment Algorithms
Marissa Reitsma, Thomas G. McGuire, Sherri Rose
Policy Brief | Quick Read | Sep 01, 2025
Topics: Ethics, Equity, Inclusion; Healthcare

This brief introduces two algorithms that can promote fairer Medicare Advantage spending for minority populations.

Mind the (Language) Gap: Mapping the Challenges of LLM Development in Low-Resource Language Contexts
Juan Pava, Caroline Meinhardt, Haifa Badi Uz Zaman, Toni Friedman, Sang T. Truong, Daniel Zhang, Elena Cryst, Vukosi Marivate, Sanmi Koyejo
White Paper | Deep Dive | Apr 22, 2025
Topics: International Affairs, International Security, International Development; Natural Language Processing; Ethics, Equity, Inclusion

This white paper maps the LLM development landscape for low-resource languages, highlighting challenges, trade-offs, and strategies to increase investment; prioritize cross-disciplinary, community-driven development; and ensure fair data ownership.

Response to NSF’s Request for Information on Research Ethics
Quinn Waeiss, Raio Huang, Betsy Arlene Rajala, Michael S. Bernstein, Margaret Levi, David Magnus, Debra Satz
Response to Request | Nov 22, 2024
Topics: Ethics, Equity, Inclusion; Sciences (Social, Health, Biological, Physical)

Stanford scholars respond to a federal RFI related to research ethics, sharing lessons from their experience operating an ethical reflection process for research grants.

Response to Request

Response to NSF’s Request for Information on Research Ethics

Quinn Waeiss, Raio Huang, Betsy Arlene Rajala, Michael S. Bernstein, Margaret Levi, David Magnus, Debra Satz
Ethics, Equity, InclusionSciences (Social, Health, Biological, Physical)Nov 22

Stanford scholars respond to a federal RFI related to research ethics, sharing lessons from their experience operating an ethical reflection process for research grants.