Stanford HAI Faculty Urge President Biden to Approach AI with a Moonshot Mentality

June 30, 2023
Adam Schultz, Official White House photo

At a recent meeting with the president, HAI leaders urged U.S. investment and leadership to unlock AI's potential.

Recently, we had the opportunity to join an esteemed panel of AI leaders who sat down with President Biden for an important discussion about the future of artificial intelligence.

We told the president that now is America’s moonshot moment for AI. It is time for the government to invest in AI as it has invested in NASA, and we must urgently adopt a mindset of ensuring that America leads in order to truly unlock AI’s vast potential.

Harnessing AI will be one of the defining tasks of the 21st century. The technology has the power to achieve once unfathomable feats like curing cancer. Reaching that potential will require deep, hands-on investment and collaboration to catalyze advancement, balanced by responsible stewardship of AI’s integration into society.  

To cement America’s leadership, we need to shift the current dynamics of our AI ecosystem, in which big industry dominates while government and academia are left as supporting players. To correct that imbalance, we must ensure that more scientists have access to the resources needed to conduct AI research. Initiatives like the National AI Research Resource, for instance, would give all of academia access to the compute power needed to conduct critical research. Right now, when you compare the U.S. and UK proposals for a public-sector research cloud, the UK’s is five times larger. That is simply unacceptable given the speed at which this technology is developing.

Expanding access to compute power would also alleviate the government’s significant staffing challenge. Today, fewer than 1% of AI PhDs go on to work in government; most (65%) enter industry, largely because of its greater resources and higher salaries, according to this year’s AI Index. In fact, last year industry produced 32 significant AI breakthroughs, while academia produced only three and government produced none.

As with any technology, there are and will continue to be bad actors who intentionally exploit the power of AI. The rise of deepfakes and the manipulation of reality poses a very real near-term threat, and careful regulation is crucial. But stifling this innovation not only risks handing the reins to another world power without America’s resources and moral compass; it might very well cost us the chance to achieve extraordinary, life-saving breakthroughs in our lifetime.

The past few months have underscored the importance of our founding mission of advancing AI to better the human condition. A lack of investment in public scientific AI research could hamstring this vision and have serious long-term implications for our democracy. We urge the Biden Administration to take swift action, and we look forward to a continued partnership between the worlds of tech and policy to ensure that AI ends up on the right side of history, and of humanity.

Fei-Fei Li is a co-director of the Stanford Human-Centered AI Institute and the Sequoia Professor in the Computer Science Department at Stanford University. Rob Reich is an associate director of Stanford HAI and the McGregor-Girand Professor of Social Ethics of Science and Technology in the Stanford School of Humanities and Sciences. 

Stanford HAI’s mission is to advance AI research, education, policy and practice to improve the human condition.

Contributors: Fei-Fei Li and Rob Reich

Related News

To Practice PTSD Treatment, Therapists Are Using AI Patients
Sarah Wells
Nov 10, 2025
News

Stanford's TherapyTrainer deploys AI to help therapists practice skills for written exposure therapy.

Fei-Fei Li Wins Queen Elizabeth Prize for Engineering
Shana Lynch
Nov 07, 2025
News

The Stanford HAI co-founder is recognized for breakthroughs that propelled computer vision and deep learning, and for championing human-centered AI and industry innovation.

Our Racist, Terrifying Deepfake Future Is Here
Nature
Nov 03, 2025
Media Mention

“It connects back to my fear that the people with the fewest resources will be most affected by the downsides of AI,” says HAI Policy Fellow Riana Pfefferkorn in response to a viral AI-generated deepfake video.
