Image: The floor of Congress with several lawmakers in attendance. (Reuters)

U.S. policymakers have indicated a growing interest in artificial intelligence.

On Jan. 1, Congress passed a sprawling defense authorization bill that includes a raft of provisions aimed at grappling with both the opportunities and risks of AI. Among other things, the law creates a new White House office to coordinate AI research across government agencies. It also directs a high-level interagency task force to develop a strategic plan on everything from research priorities to ethical and environmental issues.

The new law includes a provision that Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) had a role in shaping: to map out a national research cloud (NRC) that would give researchers around the country access to world-class computing power and data. The goal is to open up AI to a bigger and more diverse array of academic researchers.

To find out what the near future of AI policy looks like, we talked to Michael Sellitto, deputy director of the Stanford Institute for Human-Centered AI, who served as director of cybersecurity policy at the White House National Security Council from 2015 to 2018, and Andrew Grotto, the William J. Perry International Security Fellow at the Stanford Cyber Policy Center who, before joining Stanford, served as senior director for cybersecurity policy at the White House from 2015 to 2017.

How important is this new legislation on artificial intelligence, and what does it say about Washington’s interest in the field?

Sellitto: It’s the most substantial package of AI provisions that we’ve seen come out of Congress, and it has implications across the government. On the civilian side, the new AI Initiative Office at the White House will have a role looking across the entire government enterprise, from developing education and workforce policies to coordinating research across agencies to addressing unintended effects of AI, such as bias. It also shows that Congress is thinking about AI in terms of America’s national competitiveness and staying ahead of the rest of the world.

Grotto: The cynic in me wants to say that Washington has figured out how to spell AI. The optimist in me sees this as a rare level of initiative by Congress to wrap its head around an emerging technology. Now, talk is cheap. What really matters is how effectively the Biden administration carries it all out.

Which provisions are likely to be most important?

Sellitto: The White House AI Initiative Office, which will have wide-ranging responsibilities for the nation’s overall approach to AI, has the potential to have a catalytic impact because it’s located at the seat of power in the executive branch. It depends, however, on which problems it tries to solve and how effective it is at building allies within the government.

One important initiative, and HAI can take some credit for the idea, is to explore the creation of a national research cloud — an AI computing infrastructure that would be accessible to a wide diversity of researchers. There’s no funding yet, but the legislation calls for creating a task force of leaders from government, academia, and industry to outline a plan for how it could work and how it would be funded.

Why is the NRC important?

Grotto: If I can offer an analogy, big expensive telescopes are to astronomy what computing power and data are to AI research. In the same way that government and academia came together to build networks of telescopes around the world, which have enabled major advances in science, a national research cloud could greatly open up AI research.

This field is not the most diverse or inclusive. The more accessible we can make the tools of AI, the better off the field will be.

Sellitto: Traditionally, U.S. strength in technology innovation has been the result of unique partnerships between academia, government, and industry. The government would provide seed funding to universities for research — and sometimes expensive tools, such as supercomputers or Stanford’s linear accelerator. Academic researchers would then come up with innovations, and industry did a wonderful job of commercializing them.

With AI, however, that balance between government, academia, and industry has been thrown off. You need access to massive computational resources, and those are mostly in the hands of industry. Buying the kind of computing power that advanced AI requires on the commercial cloud is very expensive.
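
As a rough illustration of the scale involved, consider a back-of-envelope estimate of what a single large training run could cost at on-demand cloud rates. Every figure below is a hypothetical assumption for the sake of the arithmetic, not a number from the interview:

```python
# Back-of-envelope estimate of the commercial-cloud cost of one large
# AI training run. Every figure here is an illustrative assumption.
gpus = 256                # assumed number of accelerators reserved for the run
hours = 14 * 24           # assumed two-week training run
rate_per_gpu_hour = 3.00  # assumed on-demand price in USD per GPU-hour

cost = gpus * hours * rate_per_gpu_hour
print(f"Estimated cost of one training run: ${cost:,.0f}")
# Estimated cost of one training run: $258,048
```

Even under these modest assumptions, one run costs more than many entire research grants, which is the gap a national research cloud would aim to close.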

While elite universities can probably pull together the money required to purchase computers for ambitious research projects, increasingly, these resources are out of reach for a large swath of colleges and universities across the country. This is about expanding the diversity of AI research in every sense of the word.

The other thing you need is access to data. Researchers go where the data is, which right now means going to industry — imagine the stores of user data at companies like Google or Facebook. But much of that data isn’t of very high societal value.

The government, meanwhile, sits on really high-value datasets. Think about weather data for predicting severe storms or crop yields, or the vast amount of government data on health care. Between Medicare, Medicaid, and the Veterans Administration, the federal government is the nation’s biggest health care provider. Right now, that data is either very hard or impossible to acquire. If researchers could get access to it at an affordable price, while protecting privacy, there’s a lot of promise for research that benefits the public as a whole.
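
The interview doesn’t name a specific privacy technique, but as one sketch of what “access while protecting privacy” can look like in practice, differential privacy adds calibrated noise to aggregate statistics before release. The `private_mean` function and the sample data below are hypothetical illustrations:

```python
# Minimal sketch of the Laplace mechanism from differential privacy --
# one standard way to release an aggregate statistic from sensitive
# records while limiting what any single record can reveal.
# This is an illustration, not a technique named in the interview.
import numpy as np

def private_mean(values, lower, upper, epsilon, rng=None):
    """Differentially private mean of values clipped to [lower, upper]."""
    rng = rng or np.random.default_rng()
    clipped = np.clip(values, lower, upper)
    # One record can shift the clipped mean by at most (upper - lower) / n,
    # so Laplace noise scaled to sensitivity / epsilon bounds the leakage.
    sensitivity = (upper - lower) / len(clipped)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

# Hypothetical example: average hospital stay, in days.
stays = [2, 5, 1, 9, 3, 4, 7]
print(private_mean(stays, lower=0, upper=30, epsilon=0.5))
```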

What other aspects of the new law are likely to be important?

Grotto: One very important provision directs the National Institute of Standards and Technology, or NIST, to establish risk frameworks for AI. Organizations of all shapes and sizes are implementing AI systems, but AI presents risks as well as opportunities. How should a manufacturer of automated vehicles think about the risks of accidents? How should hospitals think about the role of the doctor in making decisions based on diagnostic advice from an AI model? NIST is tasked with developing management frameworks that help organizations capture those benefits while managing the risks.

NIST has a track record of success. A good example is the NIST Cybersecurity Framework. President Obama directed NIST to develop a management framework for owners and operators of critical infrastructure. At the C-suite level, executives knew their systems had to be resilient against cyber attacks, but how should they go about that? The framework provides the tools for answering that question — both a strategic element, so executives can get their heads around the risk, and specific actions that managers can implement. NIST runs an open and transparent process where anybody can participate — business, civil society, academia. The result was a cybersecurity framework that most Fortune 500 companies have implemented and that other nations have endorsed as well.

NIST is funding research that I am leading on governance of trustworthy AI, with collaborators from the Applied Physics Lab at Johns Hopkins University. A core theme of this research is that AI systems are a species of IT and will often operate alongside or as part of legacy IT. There’s already an extensive catalog of laws, standards, guidelines, and IT risk management strategies. Our research examines the prospects for developing risk frameworks for AI built on that existing foundation. Where are existing IT risk principles applicable? How might they be adapted to managing AI risks, and where are there gaps in this legacy literature that require AI-specific additions or modifications?

Silicon Valley companies generally fight regulation, but AI does pose public policy concerns. How do you deal with that?

Sellitto: This kind of risk management framework can head off some of that concern. A lot of that worry is about ill-crafted regulations that are too prescriptive and don’t provide enough flexibility to account for innovation. If you can be more proactive about identifying risks and agree on solutions at the outset, you can avoid some of those problems. It’s also worth noting that, historically, regulations have tended to favor big incumbent companies, and we need to be mindful of such unintended effects.

Grotto: Small companies can have a huge impact on people’s lives. That creates a puzzle. On the one hand, we don’t want well-intentioned safeguards that lock in a certain set of big incumbent companies, because those are the only ones that can afford all the compliance. But you don’t want to ignore small companies that could create big problems. In finance, we have much stricter regulations for the very biggest banks, the so-called “systemically important financial institutions.” But that doesn’t make sense in the AI space, because small players can have systemically important impacts. These are hard questions that we can’t address until we have a way of talking about AI risks through an open and transparent process.

What in your own minds are the most important AI policy priorities?

Sellitto: It’s important for academics and policymakers to get really specific. There has been a proliferation of AI principles and guidelines from nations around the world — by some counts more than 200 of them. But we need to get from high-level principles to what they actually mean in practice. Everybody agrees, for example, that AI should be “explainable.” But what does that mean in medical devices, or hiring, or self-driving cars? What meets societies’ expectations for explainability will vary dramatically from one application area to the next. How do we implement these principles and what do we actually value? What is the public good? These are the tough questions we need to answer to get from principles to practice.
