
How Unique Are the Risks Posed by Artificial Intelligence?

Researchers say AI risk shouldn’t be siloed from other technological risk and propose updating existing frameworks rather than inventing new ones.

 


Artificial intelligence's harm to society is real - we're seeing its bias in everything from facial recognition to hiring. How can developers identify and mitigate the dangers of their work? | iStock/Gabriel Pevide

Earlier this year, Congress instructed the National Institute of Standards and Technology (NIST) to come up with a voluntary risk framework for artificial intelligence systems.

How should AI developers make sure their systems don’t harm society? How should they ensure the algorithms don’t incorporate hidden biases — an issue that has surfaced in facial recognition, judicial sentencing, and even hiring? How do they reduce the risks of crucial systems being hacked and manipulated?

A risk framework is an important undertaking. If it’s successful, it will give developers and users of artificial intelligence a process and a road map for identifying and minimizing potential dangers. Though the federal guidance won’t be a mandate, it could have enormous influence in establishing de facto standards and best practices.

NIST, working with stakeholders from business, government, academia, and civic organizations, has already established an elaborate risk framework for cybersecurity and privacy protection in more traditional information systems. But does artificial intelligence pose risks fundamentally different from those of the digital systems already in use in power plants, electrical grids, global navigation, or financial services?

Policy experts at Stanford and other institutions say NIST should start by updating existing frameworks rather than reflexively creating new ones. While artificial intelligence can both take on extraordinary new tasks and go wrong in novel ways, they say, it’s a mistake to think that this requires an entirely separate silo for AI risks. For one thing, AI models are likely to work alongside and even be embedded in traditional information systems.

“This is a revolutionary technology, but many of the risks are the same,” says Andrew Grotto, a research fellow at Stanford’s Freeman Spogli Institute for International Studies who served as the White House’s senior director of cybersecurity policy under both President Obama and President Trump. “Think of automated vehicles and how to apportion liability in crashes. It’s a big challenge, but it’s not the first time we’ve dealt with automated systems. We have liability regimes, though they may need to be updated.”

Grotto has direct experience with the subject: Under President Obama, he helped guide the development of the NIST Cybersecurity Framework. That exercise, he says, offers valuable lessons for dealing with AI risks.

Writing with Gregory Falco and Iliana Maifeld-Carucci of Johns Hopkins University, Grotto recently urged NIST to treat artificial intelligence issues as an extension of its existing risk guidance on information systems in general. Falco joined the Hopkins faculty this year after finishing a postdoctoral fellowship at FSI’s Program on Geopolitics, Technology, and Governance, which Grotto founded and leads.

“We recommend that NIST and the broader community of stakeholders … adopt a rebuttable presumption that AI risks are extensions of risks associated with non-AI technology, unless proven otherwise,” they wrote.

Pinpointing the Gaps

Grotto and his colleagues agree that artificial intelligence systems will indeed pose some new challenges. As a result, they say, the first priority should be to identify where the existing risk frameworks fall short.

In cybersecurity, for example, the standard process now is to identify and catalog hacking vulnerabilities and rate each on its severity. That enables software vendors and users to install patches. In artificial intelligence, by contrast, the weakness can be inherent in how a machine-learning model teaches itself to carry out particular tasks, such as screening facial images. If hackers figure out the basic model, they can trick it by feeding it slightly distorted data. In fact, one team of AI researchers recently argued that the vulnerabilities are often "features" rather than "bugs" in the software.

Grotto and Jim Dempsey, who recently retired as executive director of the Center for Law and Technology at Berkeley Law School, explored vulnerability disclosure and management for AI/ML systems in a recent white paper.
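To make the contrast concrete, here is a minimal, illustrative sketch of the kind of distorted-input attack described above, using the well-known fast gradient sign method. It assumes PyTorch, a pretrained image classifier called `model`, and an input image tensor; none of these details come from the article or the white paper, and the example stands in for a broad family of adversarial techniques.

```python
# Illustrative only: a minimal fast gradient sign method (FGSM) sketch.
# Assumes PyTorch, a pretrained classifier `model`, an image tensor of shape
# (1, C, H, W) with values in [0, 1], and `label`, a tensor holding the true
# class index for that image.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return a slightly distorted copy of `image` that raises the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # how wrong the model is on the clean input
    loss.backward()                              # gradient of the loss w.r.t. each pixel
    # Nudge every pixel a small step (at most epsilon) in the direction that
    # most increases the loss, then keep pixel values in the valid range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Because the change to each pixel is capped at a small epsilon, the altered image typically looks unchanged to a person, which is part of why such weaknesses resist the catalog-and-patch approach used for conventional software vulnerabilities.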

In general, however, Grotto argues that AI risks are broadly similar to those of earlier technologies.

“There’s a great deal of concern about algorithmic bias that harms people because of their race, ethnicity, or gender,” he notes. “But that’s been a risk in credit score reporting and finance long before companies used AI.”

Grotto says there’s a growing consensus about the key principles that should underlie AI risk management. Those include the need to protect privacy, assure accountability, and promote the “explainability” of AI decisions.

But rather than make things unnecessarily complicated by creating a stand-alone framework for artificial intelligence, he says, it’s better to build and expand on the work that’s already been done.

“It’s not like AI systems will appear one day and all the other systems will disappear,” he says. “The AI systems will work alongside and be embedded in the existing systems. The goal should be to mainstream AI risk management, not put it in a separate silo.”

Grotto plans a multi-year research agenda on this theme, with future work examining how risk management tools and concepts such as software bills of materials, zero trust architecture, security ratings, and conformity assessment could be applied and adapted to AI/ML systems.

This research is partially funded by the National Institute of Standards and Technology (NIST).
