
Legislation Will Bolster Public Sector AI Research

Stanford AI leaders gathered last week to discuss the National AI Research Resource, which would give academic and nonprofit researchers access to the computing power and government datasets needed to advance AI.

Stanford HAI Co-Director Fei-Fei Li speaks to HAI Associate Director Russ Altman during a recent event discussing public sector AI. | Don Feria

As the world navigates a complicated artificial intelligence revolution, the U.S. is strengthening its AI research resources, establishing new standards for AI safety and security, and promoting innovation and competition. This translates to critically needed computing and data resources for academia, whose key role in AI scientific discovery and risk mitigation complements the private sector’s work, said AI leaders during an event held by the Stanford Institute for Human-Centered Artificial Intelligence (HAI) on Tuesday, Oct. 31.

“We know that AI can provide tremendous benefits to society,” said U.S. Rep. Anna Eshoo. “It can improve drug discovery and medical treatments, it can assist with disaster response and help us respond to climate change. … It holds great promise, but it also presents peril.”

Eshoo, co-chair of the Congressional AI Caucus and co-sponsor for the Creating Resources for Every American to Experiment with Artificial Intelligence (CREATE AI) Act, gave the keynote address at the HAI event, titled “Unlocking Public Sector AI Innovation: Next Steps for the National AI Research Resource.”

The event took place one day after President Biden signed an executive order designed to establish new standards for AI safety and security, protect privacy, advance equity and civil rights, promote innovation, and more.

U.S. Rep. Anna Eshoo, co-chair of the Congressional AI Caucus and co-sponsor for the CREATE AI Act, delivered the keynote address at the “Unlocking Public Sector AI Innovation” seminar. | Don Feria

Federal Support

The risks posed by AI include exacerbating existing inequities in society, fueling misinformation that damages democracy, and creating new weapons or pathogens, Eshoo explained.

“We need to develop AI that is safe, that is trustworthy and responsible,” Eshoo said. “We can remove the elements of fear and what some may find menacing in this to celebrate the promise of AI to improve people’s lives. … That’s why this work is so important. I am so indebted to the leadership of [HAI] for their early work and vision in this. Stanford is the leader of this in the country.”

However, the lack of computing and data resources threatens the types of breakthroughs – such as the internet – that academia is known for, said John Etchemendy, the Patrick Suppes Family Professor in the School of Humanities and Sciences, the Denning Co-Director of Stanford HAI, and provost emeritus. “This leaves the frontiers of AI exclusively in the hands of the most resourced – primarily industry – and risks a brain drain from the academy.”

To help counter this, Congress introduced the bipartisan, bicameral CREATE AI Act in July, which establishes the National AI Research Resource (NAIRR).

For years, HAI has been a proponent of a resource like NAIRR, which would enable the government to provide much-needed access to large-scale computation and government datasets to academics, nonprofit researchers, and startups across the U.S. HAI Co-Director Fei-Fei Li has been working with policymakers locally and nationally to ensure the responsible use of AI technologies. She has participated in a number of U.S. Senate and congressional testimonies on the topic and is a member of the NAIRR Task Force.

As part of the new White House executive order, NAIRR will begin a pilot program within 90 days, and the next step is for Congress to pass the CREATE AI Act, fully establishing NAIRR beyond a pilot program, Etchemendy explained.

Computing and Data Resource Constraints

Li said she is not aware of any university in the country, or possibly in the world, that has more than 1,000 graphics processing units (GPUs). Yet it takes tens of millions of dollars and tens of thousands of GPUs to train ChatGPT, the AI-powered language model developed by OpenAI.

“So by that estimate, not a single university can train a ChatGPT model,” Li said in a panel discussion with Russ B. Altman, associate director of HAI.
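The gap Li describes can be made concrete with a rough back-of-envelope calculation. The figures below are illustrative assumptions chosen only to match the article’s orders of magnitude (“more than 1,000 GPUs” for a university, “tens of thousands” for a frontier training run), not numbers reported by Stanford or OpenAI:

```python
# Back-of-envelope sketch: how long would a frontier-scale training run
# take on a university-sized cluster? All figures are illustrative
# assumptions, not data from the article or from any AI lab.

FRONTIER_GPUS = 25_000    # assumed GPU count for a frontier run ("tens of thousands")
FRONTIER_DAYS = 90        # assumed wall-clock duration of that run
UNIVERSITY_GPUS = 1_000   # the ceiling Li cites for any university cluster

# Assume (very optimistically) perfect linear scaling across GPUs.
gpu_days = FRONTIER_GPUS * FRONTIER_DAYS
university_days = gpu_days / UNIVERSITY_GPUS

print(f"~{university_days:.0f} days (~{university_days / 365:.1f} years)")
```

Even granting perfect scaling, which real distributed training never achieves, the same run would occupy an entire university cluster for several years, which is the practical sense in which “not a single university can train a ChatGPT model.”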

Li gave other examples of resource constraints faced by academic AI researchers. Her team of PhD students, postdocs, and junior faculty is running a pilot project that uses smart sensors, in the form of privacy-protected cameras, within intensive care units at one of Stanford’s hospitals. The extra “set of eyes” helps improve health care delivery, but the system requires a tremendous amount of data, Li explained. A few months ago, members of the project came to Li and said they needed to delete data because they didn’t have enough storage.

“This is the first time the research world has ever had continuous video of live, real clinical setting data on these complex algorithms, but we don’t have enough storage,” Li said. “We do not have computers to process this data.”

“We see incredible researchers having to leave academia to do the kind of research and translational work that they loved doing here.”

—Fei-Fei Li

Co-Director of Stanford HAI


The researchers were able to pivot by deleting stretches of video during which patients had left the room, but the episode is a prime example of how research breakthroughs are being constrained by limited computing and data resources. Altman said that he and colleagues across the nation have had similar experiences.

To be most effective, academia needs to work upstream from the private sector, which has its own timelines and quarterly milestones that challenge its ability to take the long view on unsolved problems, such as understanding the basic biology of a disease, Altman said.

“When drugs fail, it’s because there was too much of a narrow understanding of the underlying biology,” Altman added. “Industry can’t make that kind of long-term investment because it’s too broad, too wide, and they really depend on the academy to help them lay that groundwork so that they can develop drugs.”

Academia excels at this kind of curiosity-driven, foundational research, Li said. Students at Stanford are developing robots that can perform basic household tasks such as folding laundry or cleaning toilets, for example, but with enough time those same robots could one day evolve into disaster relief robots that save people from earthquakes.

Retaining Talent

There is a severe brain drain from academia to industry, said Jennifer King, who moderated the panel and is a privacy and data policy fellow at HAI, as less than 40% of new PhDs in AI go into academia and only 1% go into government, according to HAI’s AI Index.

Particularly in the U.S. and Silicon Valley, talent exchange between the public and private sectors is healthy and the “secret sauce of why we’re so successful,” Li said.

Yet, “what we see now is that the draining is very asymmetrical,” Li said. “We see incredible researchers having to leave academia to do the kind of research and translational work that they loved doing here.”

The higher compensation, perks, and greater resources offered by private sector companies mean there is less talent in the public sector for scientific discovery and training the next generation, Li said.

However, with the NAIRR, Li said the public sector can start to reverse the brain drain by providing researchers with resources such as access to a national government data repository. With data on areas such as firefighting, climate, health care, and more, “this is where we not only will have a resource for people to do research, we’re going to turn the tide of attracting talents” of those who want to conduct scientific discovery for the public good.

Fei-Fei Li is the Sequoia Capital Professor and a professor of computer science in the School of Engineering and the Denning Co-Director of Stanford HAI. She is also a professor, by courtesy, of operations, information, and technology at Stanford Graduate School of Business.

Russ B. Altman is the Kenneth Fong Professor in the School of Engineering and professor of bioengineering, of genetics, of medicine (biomedical informatics research), of biomedical data science, and, by courtesy, of computer science.

Stanford HAI’s mission is to advance AI research, education, policy, and practice to improve the human condition.
