
Computer Scientists Can’t Treat Social and Ethical Impacts as an Afterthought

A new National Academies of Sciences report argues that researchers must start projects with ethical review, working with stakeholders and experts from other fields.


Researchers have a responsibility to mitigate problems from their inventions and make users aware of the potential pitfalls. | Image developed by DALL-E. 

 

When people sound alarms about ethical and social pitfalls in computing, especially artificial intelligence, they are often reacting to systems that are already in use. How should a social media platform handle algorithms that amplify hate speech and misinformation? Do systems that evaluate creditworthiness or job applications have hidden racial or gender biases? Does facial recognition jeopardize privacy?

But a new report from an advisory committee to the National Academies of Sciences, Engineering, and Medicine, whose members include John Hennessy, the former president of Stanford and an advisor to Stanford HAI, argues that computing researchers and the institutions that fund them need to anticipate social and ethical risks long before they have a product.

If they don’t, the report warns, it may be too late.

“It is much easier to design a technology correctly from the start than it is to fix it later,” the report says. “Failure to consider the consequences early in research increases the risk of adverse societal or ethical impacts.”

Read the full report, Fostering Responsible Computing Research: Foundations and Practices.

 

That may sound obvious, but the authors — including luminaries in computer science, social science, and philosophy — say it requires a broad rethink by the institutions that fund and carry out research: universities, corporations, professional societies, and the government.

In part, that means reaching out early to stakeholders as well as to experts in social sciences, ethics, and moral reasoning. It also means thinking early and hard about the unexpected ways that a new technology might be used or misused.

“One of the difficulties is that computer technologies, especially these foundation models, are universal technologies that can be used for all kinds of things that the developers never intended,” says Hennessy, a computer scientist and professor at Stanford who is also currently chairman of Alphabet, the parent company of Google. “You can’t prevent all the misuses, but you can at least provide a caveat that people can use as guidelines.”

Chain of Responsibility

The report cites the example of third-party “cookies,” the little markers that track actions of a user on a website. The original intent was simply to make transactions like online shopping easier, but cookies quickly became tools that data harvesters used to track users and their web activity. If researchers had considered privacy issues at the outset, Hennessy says, they could have built in more protection before cookies became a universal standard.
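For readers unfamiliar with the mechanics, here is a minimal sketch, not drawn from the report, of how a third-party cookie links a user’s visits across unrelated sites. The domain names and the ID value are hypothetical, and Python’s standard http.cookies module stands in for a real browser’s cookie store:

    # Minimal sketch: two publisher pages embed a resource from the same
    # tracker domain, so the tracker's cookie identifies the user on both.
    from http.cookies import SimpleCookie

    # Visit 1: a page on news.example embeds an ad from tracker.example,
    # whose response sets an ID cookie scoped to the tracker's own domain.
    jar = SimpleCookie()
    jar.load('uid=abc123; Domain=tracker.example; Path=/')

    # Visit 2: a page on shop.example embeds the same tracker. The browser
    # returns the stored cookie, so both visits carry the same ID.
    print(jar.output(attrs=[], header='Cookie:'))  # prints: Cookie: uid=abc123

Because the browser returns that cookie on every request to tracker.example, no matter which site embeds it, a single identifier can accumulate a cross-site browsing history; that is the privacy gap the report argues should have been anticipated at the design stage.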

The open-ended potential of AI and other computer technologies makes them different from innovations in most other fields. A new vaccine may have unwanted side effects, Hennessy notes, but it will only be used for a very targeted purpose. A new algorithm or piece of code becomes a tool that can be used for entirely new purposes.

“The chain of responsibility begins at the research stage,” he says. “Researchers have a responsibility for trying to mitigate such problems but also to make users aware of the potential pitfalls — whether a technology is used the way it was intended or, as so often happens, far beyond the initial intentions.”

Recommendations for Researchers

The report recommends several ways to build social and ethical considerations into even early-stage research.

Government funding agencies, which finance much of computing research, can insist that all proposals address such potential risks. Stanford HAI’s funding process requires exactly that kind of ethical and social review. Likewise, professional societies and academic journals can insist that any newly published research include a thorough discussion of potential problems.

More broadly, the report says, research institutions should give computer scientists access to experts from other fields who can offer a broader perspective on potential problems.

“Until relatively recently, many researchers and observers considered computing technologies to be value neutral,” the report notes. “Few, if any, are. The design of new computing technologies … is always imprinted with the spectrum of values considered by the designer, which may not be broad enough to ensure a particular technology meets the needs of some stakeholders.”

“The goal of ethical grounding in technology isn’t to survey every single problem that could possibly occur,” Hennessy says. “It’s to make you aware of these issues so that when you encounter a circumstance where there is a potential ethical trade-off, you’ll be cognizant of it and deal with it.”

The National Academies panel that produced the report was chaired by Barbara Grosz, a computer scientist at Harvard who is also an honorary member of Stanford’s Institute for Human-Centered Artificial Intelligence. In addition to Hennessy, other panel members included Mark Ackerman, professor of human-computer interaction at the University of Michigan; Steve Bellovin, professor of computer science at Columbia University; David Danks, professor of philosophy and data science at the University of California, San Diego; Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace; Megan Finn of the University of Washington; Mary Gray of Microsoft Research; Ayanna Howard of Ohio State University; Jon Kleinberg of Cornell University; James Manyika of the McKinsey Global Institute; James Mickens of Harvard University; and Amanda Stent of Colby College.

