HAI Policy Boot Camp: 5 Insights for Lawmakers

At HAI’s inaugural congressional conference, experts unpack what AI means for national security, the economy, healthcare, education, and more.


From left: Harold Trinkunas, deputy director of the Stanford Center for International Security and Cooperation; Amy Zegart, Morris Arnold and Nona Jean Cox Senior Fellow at the Hoover Institution; and Brad Boyd, Hoover Institution visiting fellow. | Christine Baker

The AI industry changes so quickly that even researchers struggle to keep up. That doesn’t bode well for Congress, which is responsible for limiting this technology’s harms while encouraging its wide-ranging benefits. To regulate successfully, policymakers must understand what these technologies are capable of and how they actually work.

That was the core theme of Stanford HAI’s Congressional Boot Camp on AI, a three-day conference that brought together a bipartisan group of 25 U.S. congressional staff members from both chambers to discuss the latest advances in AI. The conference’s sessions covered the economy and the workforce, data privacy, deepfakes and misinformation, climate sustainability, healthcare, foundation models, competition from China, and more.

“We recognize that sometimes for people not working directly with AI, it’s hard to tease apart the hype or gloom from what’s really going on,” said HAI Co-Director Fei-Fei Li during one session focused on the technology’s latest developments. “We want to provide a forum to discuss the opportunities of AI in a thoughtful and constructive way.”

The event, which took place Aug. 8-10 on the Stanford campus, was the first Stanford congressional boot camp focused solely on AI, and it hosted the largest such delegation yet to visit campus.

“One of HAI’s goals is to help policymakers understand the landscape around this fast-growing technology – both its harms and its potential – to effectively regulate this space,” said Russell Wald, HAI policy director. “Good policy should protect society while not stifling innovation.”


Over the course of three days, the staffers heard from experts including Stanford Digital Economy Lab Director Erik Brynjolfsson, Freeman Spogli Institute for International Studies Senior Fellow Amy Zegart, Stanford President Emeritus John Hennessy, Hoover Institution Director Condoleezza Rice, Stanford Graduate School of Education Dean Dan Schwartz, and others. Here are a few insights from the boot camp:

AI and the Workforce

Despite more companies implementing AI and automation, U.S. productivity growth has actually slowed over the past decade. Brynjolfsson sees two potential reasons for this productivity paradox. First, we don’t measure the digital economy very well: Although people log hours on social media and digital tools like email and video streaming, these products aren’t captured in GDP.


Second, companies are experiencing implementation and restructuring lags. “For every one dollar spent on technology, there are another nine to 10 dollars needed in organizational design and worker training,” Brynjolfsson said. “That’s the hidden part of the iceberg most of us don’t see.” 

Still, Brynjolfsson expects AI to change nearly every occupation. But rather than fully replacing jobs, AI could augment workers. In a study analyzing occupational tasks for hundreds of jobs, “we didn’t find one where machine learning could run the table and do all the tasks.” Consider radiology: AI can interpret images, but it can’t work with people to weigh treatment options or administer sedation during a procedure. “Machines won’t put radiologists out of work, but radiologists who don’t know how to use machines will be put out of business by radiologists who do,” he said.

Foundation Models: Competition Breeds Secrecy

Among Big Tech companies (Google, Meta, OpenAI, Microsoft, etc.), we’re seeing a race for giant AI models, said Percy Liang, director of the Stanford HAI Center for Research on Foundation Models (CRFM). This competition has a chilling effect on transparency. While AI used to be a very open science, where scientists shared data and technical advances, today the most capable models are held in private control, datasets aren’t released, and “there’s a general sense that companies are more guarded.” 

This intense competition will change how AI develops, Liang said. Organizations like CRFM need to be able to study how these models are built and deployed, and they can encourage industry standards. “Think about it like infrastructure,” he said. “If all the roads were privately controlled, that’s not a good way to build infrastructure for society.”

AI and Data

AI relies on data, but good AI relies on quality data. Yet we have little data accountability in AI: Companies scrape data from the internet, use it without consent, reuse it endlessly and out of context, and offer data providers no recompense, said Jennifer King, HAI Privacy and Data Policy Fellow.

To improve data governance, consider society-wide restrictions on data collection and processing, she suggested. Governance models should benefit groups, not just individuals, through tools such as data trusts or loyalty duties. Another approach: Treat data as a taxable asset and use dividends to fund infrastructure for public data development.

Additionally, companies and regulators should consider the entire lifecycle of data, from collection to sunsetting its use, and may also consider implementing something like model cards or nutrition labels for AI. 
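To make the “nutrition label” idea concrete, here is a minimal, hypothetical sketch in Python of what machine-readable model-card metadata might look like. The field names and values are illustrative assumptions, not drawn from the boot camp or from any established model-card standard:

from dataclasses import dataclass, field

@dataclass
class ModelCard:
    # A hypothetical "nutrition label" for an AI model.
    # All field names are illustrative, not an official schema.
    model_name: str
    version: str
    intended_use: str                 # what the model is meant for
    out_of_scope_uses: list[str]      # uses the developer warns against
    training_data_sources: list[str]  # where the training data came from
    data_collected: str               # start of the data lifecycle
    data_sunset: str                  # when the data should stop being used
    known_limitations: list[str] = field(default_factory=list)

# Example usage with made-up values:
card = ModelCard(
    model_name="example-classifier",
    version="1.0",
    intended_use="flagging duplicate support tickets",
    out_of_scope_uses=["employment decisions", "medical triage"],
    training_data_sources=["internal ticket archive, collected with consent"],
    data_collected="2021-01",
    data_sunset="2026-01",
    known_limitations=["trained on English-language text only"],
)
print(card)

A label like this would let regulators and downstream users check, at a glance, where a model’s data came from and when its use is due to end, which is the lifecycle accountability King describes.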

AI and Education

Dan Schwartz, dean of the Stanford Graduate School of Education, highlighted six potential uses of AI in the classroom: 

  • Precise recommendation engines (ex: AI tutors that help teachers determine when students need help with assignments)
  • Tracking systems (ex: help detect kids at risk of dropping out)
  • Sensory data tools (ex: identify the emotions of students in class based on video)
  • Assistive technologies (ex: assist students with speech disabilities)
  • Intelligent social orchestration (ex: on-demand tutor matching)
  • Tools that turn students into producers (ex: games in which students teach an AI agent and watch it perform based on how well they taught it).

But classroom AI raises thorny questions: How do schools balance tailored education with surveillance concerns? Will access to these tools be limited to higher-income districts? Who is developing the content, and is it accurate and representative? Will this sideline the important role of teachers?

To address some of these issues, the federal government must drive universal broadband and equitable access to AI, create data standards that support collating information, develop policies that balance privacy with the need for big data, and support research and development that addresses pronounced differences among learners, Schwartz said.

Public-Sector AI

In a 2020 report, Stanford Law Professor Daniel E. Ho, an HAI associate director and member of the National Artificial Intelligence Advisory Committee, enumerated the many potential uses for AI in the public sector. But according to the latest AI Index, fewer than 2% of AI PhDs head to government roles.


Public-sector salaries, often lower than those in the private sector, are only part of the reason. Other challenges include a lack of data infrastructure, computing resources, and top-level support to engage talent with meaningful problems. Ho noted a machine learning PhD student who, frustrated in part by the lack of access to even a single GPU, abandoned ambitions to go into government.

Partnerships are key, Ho said. For example, after World War II, the Veterans Administration laid the foundation for partnerships with academic medical hospitals to serve some 10 million soldiers returning home. Not only was the VA able to scale to meet the needs of so many patients, but the partnership created an ecosystem that catalyzed improvements in quality of care and innovation.

Getting partnerships right matters, though. U.S. Customs and Border Protection discarded an iris-scanning tool after a contractor refused to explain flaws in the technology, claiming the information was proprietary.

“At Stanford, we’re trying to build these kinds of bridges, these academic/agency partnerships, so that knowledge transfer can happen much more rapidly,” Ho said. “Given the pace of innovation, by the time some agencies have gotten someone through the federal onboarding process, there’s already been a sea change in artificial intelligence.”

The Stanford HAI Congressional Boot Camp on Artificial Intelligence is an annual program for people who play a key role in shaping and developing technology policy. Learn more about the program.

To learn more about HAI policy resources and events, sign up for our Policy Newsletter. Learn about upcoming events open to the public here.
