How Unique Are the Risks Posed by Artificial Intelligence?

Date: December 20, 2021
Topics: Privacy, Safety, Security

Researchers say AI risk shouldn’t be siloed from other technological risk and propose updating existing frameworks rather than inventing new ones.


Earlier this year, Congress instructed the National Institute of Standards and Technology (NIST) to come up with a voluntary risk framework for artificial intelligence systems.

How should AI developers make sure their systems don’t harm society? How should they ensure the algorithms don’t incorporate hidden biases — an issue that has surfaced in facial recognition, judicial sentencing, and even hiring? How do they reduce the risks of crucial systems being hacked and manipulated?

A risk framework is an important undertaking. If it’s successful, it will give developers and users of artificial intelligence a process and a road map for identifying and minimizing potential dangers. Though the federal guidance won’t be a mandate, it could have enormous influence in establishing de facto standards and best practices.

NIST, working with stakeholders from business, government, academia, and civic organizations, has already established an elaborate risk framework for cybersecurity and privacy protection in more traditional information systems. But does artificial intelligence pose risks fundamentally different from those of the digital systems already used in power plants, electrical grids, global navigation, and financial services?

Policy experts at Stanford and other institutions say NIST should start by updating existing frameworks rather than reflexively creating new ones. While artificial intelligence can both take on extraordinary new tasks and go wrong in novel ways, they say, it’s a mistake to think that this requires an entirely separate silo for AI risks. For one thing, AI models are likely to work alongside and even be embedded in traditional information systems.

“This is a revolutionary technology, but many of the risks are the same,” says Andrew Grotto, a research fellow at Stanford’s Freeman Spogli Institute for International Studies who served as the White House’s senior director of cybersecurity policy under both President Obama and President Trump. “Think of automated vehicles and how to apportion liability in crashes. It’s a big challenge, but it’s not the first time we’ve dealt with automated systems. We have liability regimes, though they may need to be updated.”

Grotto has direct experience with the subject: Under President Obama, he helped guide the development of the NIST Cybersecurity Framework. That exercise, he says, offers valuable lessons for dealing with AI risks.

Writing with Gregory Falco and Iliana Maifeld-Carucci of Johns Hopkins University, Grotto recently urged NIST to treat artificial intelligence issues as an extension of its existing risk guidance on information systems in general. Falco joined the Hopkins faculty this year after finishing a postdoctoral fellowship at FSI’s Program on Geopolitics, Technology, and Governance, which Grotto founded and leads.

“We recommend that NIST and the broader community of stakeholders … adopt a rebuttable presumption that AI risks are extensions of risks associated with non-AI technology, unless proven otherwise,” they wrote.

Pinpointing the Gaps

Grotto and his colleagues agree that artificial intelligence systems will indeed pose some new challenges. As a result, they say, the first priority should be to identify where the existing risk frameworks fall short.

In cybersecurity, for example, the standard process now is to identify and catalog hacking vulnerabilities and rate each on its severity. That enables software vendors and users to install patches. In artificial intelligence, by contrast, the weakness can be inherent in how a machine-learning model teaches itself to carry out particular tasks, such as screening facial images. If hackers figure out the basic model, they can trick it by feeding it slightly distorted data. In fact, one team of AI researchers recently argued that the vulnerabilities are often “features” rather than “bugs” in the software. Grotto and Jim Dempsey, who recently retired as executive director of the Center for Law and Technology at Berkeley Law School, explored vulnerability disclosure and management for AI/ML systems in a recent white paper.
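The distinction the paragraph draws — a patchable bug versus a weakness baked into the model itself — can be made concrete with a minimal sketch. Everything below is illustrative: the "model" is just a toy linear classifier, and the weights, input, and distortion size are invented for the example. It shows the gradient-sign idea behind many adversarial attacks: nudge each input feature slightly in whichever direction most reduces the model's score, and a small, hard-to-notice distortion flips the decision.

```python
import numpy as np

# Toy linear classifier standing in for a learned screening model:
# score = w . x; positive score -> class 1 ("match"), else class 0.
# All weights and inputs here are invented for illustration.
w = np.array([1.0, -2.0, 0.5])

def predict(x):
    return int(np.dot(w, x) > 0)

# A clean input the model classifies as class 1 (score = 1.1).
x = np.array([2.0, 0.5, 0.2])

# Gradient-sign-style attack: for a linear model, the gradient of the
# score with respect to x is just w, so subtracting epsilon * sign(w)
# lowers the score as fast as possible per unit of distortion.
epsilon = 0.5                      # small per-feature perturbation
x_adv = x - epsilon * np.sign(w)   # move against the gradient

print(predict(x))      # clean input -> 1
print(predict(x_adv))  # slightly distorted input -> 0
```

The point of the sketch is that no "bug" was exploited: the attack uses only the model's own learned parameters, so there is nothing to patch in the conventional sense — which is why the researchers cited above describe such vulnerabilities as features of the software rather than bugs.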

In general, however, Grotto argues that AI risks are broadly similar to those of earlier technologies.

“There’s a great deal of concern about algorithmic bias that harms people because of their race, ethnicity, or gender,” he notes. “But that’s been a risk in credit score reporting and finance long before companies used AI.”

Grotto says there’s a growing consensus about the key principles that should underlie AI risk management. Those include the need to protect privacy, assure accountability, and promote the “explainability” of AI decisions.

But rather than make things unnecessarily complicated by creating a stand-alone framework for artificial intelligence, he says, it’s better to build and expand on the work that’s already been done.

“It’s not like AI systems will appear one day and all the other systems will disappear,” he says. “The AI systems will work alongside and be embedded in the existing systems. The goal should be to mainstream AI risk management, not put it in a separate silo.”

Grotto plans a multi-year research agenda on this theme, with future work examining how risk management tools and concepts such as software bills of materials, zero trust architecture, security ratings, and conformity assessment could be applied and adapted to AI/ML systems.

This research is partially funded by the National Institute of Standards and Technology (NIST).

Stanford HAI's mission is to advance AI research, education, policy, and practice to improve the human condition.

Contributor: Edmund L. Andrews

Related News

AI Challenges Core Assumptions in Education
Shana Lynch
Feb 19, 2026
News

We need to rethink student assessment, AI literacy, and technology’s usefulness, according to experts at the recent AI+Education Summit.


Smart Enough to Do Math, Dumb Enough to Fail: The Hunt for a Better AI Test
Andrew Myers
Feb 02, 2026
News

A Stanford HAI workshop brought together experts to develop new evaluation methods that assess AI's hidden capabilities, not just its test-taking performance.


Musk's Grok AI Faces More Scrutiny After Generating Sexual Deepfake Images
PBS NewsHour
Jan 16, 2026
Media Mention

Elon Musk was forced to put restrictions on X and its AI chatbot, Grok, after its image generator sparked outrage around the world. Grok created non-consensual sexualized images, prompting some countries to ban the bot. Liz Landers discussed Grok's troubles with Riana Pfefferkorn of the Stanford Institute for Human-Centered Artificial Intelligence.
