
Policy Brief

December 7, 2023

Walking the Walk of AI Ethics in Technology Companies

Sanna J. Ali, Angèle Christin, Andrew Smart, Riitta Katila

In this brief, Stanford scholars present one of the first empirical investigations into AI ethics on the ground in private technology companies.

Key Takeaways

➜ Technology companies often “talk the talk” of AI ethics without fully “walking the walk.” Many companies have released AI principles, but relatively few have institutionalized meaningful change.

➜ We interviewed 25 AI ethics practitioners and found that there are significant roadblocks to implementing companies’ stated goals regarding AI ethics.

➜ AI ethics and fairness considerations are championed by individuals who lack institutional support; they are rarely made a priority in product development cycles, are disincentivized by metrics, and are disrupted by the frequent reorganization of teams.

➜ Government regulation could play a crucial role in helping the AI ethics field move toward formalization by incentivizing leaders to prioritize ethical issues and protecting AI ethics workers.

Executive Summary

The field of AI ethics has grown rapidly in industry and academia, in large part due to the “techlash” brought about by technology industry scandals such as Cambridge Analytica and growing congressional attention to technology giants’ data privacy and other internal practices. In recent years, technology companies have published AI principles, hired social scientists to conduct research and compliance work, and employed engineers to develop technical solutions related to AI ethics and fairness. Despite these new initiatives, many private companies have not yet prioritized the adoption of accountability mechanisms and ethical safeguards in the development of AI. Companies often “talk the talk” of AI ethics but rarely “walk the walk” by adequately resourcing and empowering teams that work on responsible AI.

In our paper, “Walking the Walk of AI Ethics,” we present one of the first empirical investigations into AI ethics on the ground in a (thus far) fairly unregulated environment within the technology sector. Our interviews with AI ethics workers in the private sector uncovered several significant obstacles to implementing AI ethics initiatives. Practitioners struggle to get their companies to foreground ethics in an environment centered on software product launches. Ethics are difficult to quantify and easy to de-prioritize in a context where company goals are incentivized by metrics. And the frequent reorganization of teams at technology companies makes it challenging for AI ethics workers to access institutional knowledge and maintain the relationships central to their work.

Our research highlights the stark gap between company policy and practice when it comes to AI ethics. It captures the difficulties of institutionalizing change within technology companies and illustrates the important role of regulation in incentivizing companies to make AI ethics initiatives a priority.


Introduction

Previous research has criticized corporate AI ethics principles for being toothless and vague, while questioning some of their underlying assumptions. However, relatively few studies have examined the implementation of AI ethics initiatives on the ground, let alone the organizational dynamics that contribute to the lack of progress.

Our paper builds on existing research by drawing on theories of organizational change to shed light on how AI ethics workers operate in technology companies. In response to outside pressure, such as regulation and public backlash, many organizations develop policies and practices to gain legitimacy; however, these measures often fail to achieve their intended outcomes because of a disconnect between means and ends. New practices may also run counter to an organization’s established rules and procedures.

AI ethics initiatives suffer from the same dynamic: Many technology companies have released AI principles, but relatively few have made significant adjustments to their operations as a result. With little buy-in from senior leadership, AI ethics workers take on the responsibility of organizational change by using persuasive strategies and diplomatic skills to convince engineers and product managers to incorporate ethical considerations in product development. Technology companies also seek to move quickly and release products regularly to generate investment and to outpace competitors, meaning that products are often released despite ethical concerns. Responsible AI teams may be siloed within large organizations, preventing their work from becoming integral to the core tasks of the organization.

To better understand the concrete organizational barriers to implementing AI ethics initiatives, we conducted a qualitative study of responsible AI initiatives within technology companies. We interviewed 25 AI ethics practitioners, including employees, academics, and consultants, many of whom are or were employed in technology companies’ responsible AI initiatives, and we gathered observations from industry workshops and training programs. Our resulting analysis provides insight into the significant structural risks workers face when they advocate for ethical AI, as well as the hurdles they encounter when incorporating AI ethics into product development.

This work was funded in part by a seed research grant from the Stanford Institute for Human-Centered Artificial Intelligence.
