Inside Trump’s Ambitious AI Action Plan | Stanford HAI

Date: July 24, 2025
Topics: Regulation, Policy, Governance
Pres. Donald Trump signs executive orders in the Oval Office. Official White House Photo by Abe McNatt

The White House favors market-driven growth and light governance, signaling a departure from the previous administration.

The White House’s newly released AI Action Plan outlines an assertive vision for U.S. leadership in AI. Framed around the imperative to “win the AI race,” the 28-page plan emphasizes the removal of regulatory barriers, the build-out of foundational AI infrastructure, and private sector momentum as the primary levers for accelerating innovation. 

This vision reflects a notable shift from the Biden Administration’s now-rescinded Executive Order (EO) on AI, which emphasized whole-of-government coordination, safeguards against AI-enabled harms, and government capacity-building. By contrast, the Trump Administration’s action plan lays out a policy blueprint that leans into market-driven approaches — promoting open models, accelerating infrastructure development, and expanding global adoption of U.S. AI models. Three accompanying EOs further detail this vision by issuing directives on exporting U.S. AI technology, accelerating federal permitting for data centers, and preventing “woke AI” in the federal government.

The new approach also represents continuity on several priorities, including advancing the government’s ability to evaluate models, opening access to computing resources, and encouraging AI-enabled science. Yet across the board, the action plan does not at this stage provide concrete implementation plans: Many provisions lack clear timelines, designated agency responsibilities, and funding pathways. Here, we offer a preliminary analysis of the action plan’s implications for public sector AI innovation, AI governance, workforce and skill development, and implementation. 

Implications for Academia and Public Sector AI

The action plan emphasizes open innovation, public AI research infrastructure, and AI-enabled scientific discovery as key pillars of U.S. leadership in AI. It reaffirms federal support for expanding access to computing and data resources through a renewed commitment to the National AI Research Resource (NAIRR) and calls for public-private partnerships with the research community. The Stanford Institute for Human-Centered AI (HAI) has long advocated for a NAIRR, a cause first championed by co-directors Fei-Fei Li and John Etchemendy in 2019. 

The plan’s approach focuses heavily on infrastructure that will be built and operated by the private sector. While industry plays a critical role in scaling innovation, we believe long-term U.S. leadership in AI will depend on sustained investment in the public sector institutions — universities, nonprofit organizations, and independent labs — that form the backbone of the broader AI innovation ecosystem.

Notably, the plan offers the strongest federal endorsement to date of open-source and open-weight AI models. It directs the Department of Commerce to convene stakeholders to drive open model adoption among small and medium-sized businesses — a step that could also benefit researchers by lowering barriers to experimentation and broadening access to cutting-edge tools. This is a welcome step toward ensuring that innovation is not concentrated among a handful of actors and that the benefits of AI are more broadly distributed.

The plan also calls for advancing AI as a research discipline and supporting scientific breakthroughs across fields such as biology, chemistry, and materials science — primarily through privately developed, cloud-enabled labs. It designates scientific data as a strategic asset, proposing minimum data quality standards and requiring federally funded researchers to disclose non-proprietary, non-sensitive datasets used in their research. These proposals are promising, particularly when paired with efforts like the National Science Foundation’s initiative to enable secure public and agency access to federal data.

Shifting From Prescriptive Regulation to Light-Touch Governance

The action plan marks a shift away from prescriptive regulation. Referencing the rollback of the Biden AI EO, it directs agencies to consider “pro-innovation” state policies when allocating federal funding. Yet, it retains a role for federal governance mechanisms to manage risks, focusing on technical standards, model evaluations, and other agency-specific tools.

Key mechanisms include:

  • A national evaluation ecosystem led by NIST’s Center for AI Standards and Innovation (CAISI): Building on existing technical evaluation efforts, CAISI — formerly the U.S. AI Safety Institute — is tasked with developing inter-agency and sector-specific model evaluation tools and convening public and academic stakeholders to share best practices.

  • Regulatory sandboxes and AI Centers of Excellence: These proposed mechanisms, which would enable real-world AI testing, signal promising directions for sector-specific experimentation but need additional details on scope and implementation.

  • An AI Information Sharing and Analysis Center (AI-ISAC) led by the Department of Homeland Security: This structure for sharing AI-specific cybersecurity threat intelligence is a promising step toward creating adverse event reporting mechanisms in government.

  • Export controls and infrastructure-driven global influence: The plan aims to expand U.S. strategic influence in the global AI ecosystem by coordinating export restrictions with allies and expanding access to and dependencies on U.S.-developed AI infrastructure. 

The plan’s emphasis on technical evaluations and other information-gathering mechanisms reflects an important move toward evidence-based policymaking. However, it leaves key risks underaddressed. Notably absent is a clear federal commitment to protecting the public from AI-enabled harms such as fraud, discrimination, privacy violations, and child sexual exploitation.

A Commitment to American Workers

The action plan’s “worker-first” AI agenda stands out as another distinct feature of this administration’s AI vision — echoing themes from Vice President JD Vance’s speech at the Paris AI Action Summit. Its focus on upskilling the country’s broad workforce, particularly through career and technical education as well as apprenticeships, signals a serious acknowledgement of the impending labor market impacts of AI. Recognizing that effective retraining requires substantial evidence and accurate predictions about future labor demand, the plan calls for the establishment of an AI Workforce Research Hub. The hub would consolidate and analyze data from statistical agencies — an important starting point for improving the evidence base needed to guide smart, forward-looking workforce investments.

Potential Implementation Roadblocks Ahead

The action plan is wide-ranging in scope, assigning 103 policy actions to a broad array of federal agencies and entities, with the Department of Commerce (especially NIST and its CAISI) and the Department of Defense carrying some of the most extensive mandates. Many provisions are concrete, with tangible outcomes: calls for requests for information and guidance documents on specific issue areas, the establishment of research hubs and centers of excellence, and multi-stakeholder convenings on specific policy issues.

Yet while the ambition is clear, the plan’s real-world impact remains uncertain amid scarce implementation details. Around a third of the recommended actions do not identify a lead agency. No implementation timelines for any of the tasks are provided, nor is there clarity around whether new funding or other resources will be allocated. 

It is important to recognize that the plan is a policy roadmap, not an Executive Order that carries the force of law. Its actions are advisory rather than legally binding. However, without further details, potentially impactful proposals — such as regulatory sandboxes for AI testing, interagency guidelines for model evaluations, and AI interpretability and robustness programs — face an uncertain future. 

Amid ongoing personnel reorganization and budget changes across federal agencies, questions remain about who will spearhead the meaningful implementation moving forward. While the plan recognizes roles like Chief AI Officers, realizing their full potential will require clearer mechanisms for interagency coordination and greater investment in building technical expertise across the federal workforce.

What’s Next?

The action plan marks a meaningful effort to chart a new course for AI innovation and governance in the United States. Beyond the themes highlighted above, the plan also proposes actions in several other areas — such as military applications of AI, workforce development, education and training initiatives, and specific agency mandates — that will also warrant further attention. In the near term, however, much of that activity is likely to center on the three aforementioned Executive Orders released alongside the plan.

As our prior work tracking and analyzing the implementation of AI-related executive actions released since 2019 has shown, successful implementation will depend on senior leadership, sustained resourcing, and effective interagency coordination. How this plan evolves into an actionable national strategy will hinge on the ways the Administration activates those elements in the months ahead.

Authors: Caroline Meinhardt is the policy research manager at Stanford HAI. Daniel Zhang is the senior manager for policy initiatives at Stanford HAI. Jennifer King is the privacy and data policy fellow at Stanford HAI. Andreas Haupt is a postdoctoral fellow at Stanford HAI and the Digital Economy Lab. Elena Cryst is the director of policy and society at Stanford HAI.

Related
  • Response to OSTP’s Request for Information on the Development of an AI Action Plan
    Caroline Meinhardt, Daniel Zhang, Rishi Bommasani, Jennifer King, Russell Wald, Percy Liang, Daniel E. Ho
    Mar 17
    response to request

    Stanford scholars respond to a federal RFI on the development of an AI Action Plan, urging policymakers to promote open and scientific innovation, craft evidence-based AI policy, and empower government leaders.

  • Expanding Academia’s Role in Public Sector AI
    Kevin Klyman, Aaron Bao, Caroline Meinhardt, Daniel Zhang, Elena Cryst, Russell Wald
    Quick Read, Dec 04
    issue brief

    This brief analyzes the disparity between academia and industry in frontier AI research and presents policy recommendations for ensuring a stronger role for academia in public sector AI.

  • Decoding the White House AI Executive Order’s Achievements
    Rishi Bommasani, Christie M. Lawrence, Lindsey A. Gailmard, Caroline Meinhardt, Daniel Zhang, Peter Henderson, Russell Wald, Daniel E. Ho
    Nov 02
    news

    America is ready again to lead on AI—and it won’t just be American companies shaping the AI landscape if the White House has anything to say about it.

  • AI Action Summit in Paris Highlights A Shifting Policy Landscape
    Shana Lynch
    Feb 27
    news

    Stanford HAI joined global leaders to discuss the balance between AI innovation and safety and explore future policy paths.

Related News

Musk's Grok AI Faces More Scrutiny After Generating Sexual Deepfake Images
PBS NewsHour
Jan 16, 2026
Media Mention

Elon Musk was forced to put restrictions on X and its AI chatbot, Grok, after its image generator sparked outrage around the world. Grok created non-consensual sexualized images, prompting some countries to ban the bot. Liz Landers discussed Grok's troubles with Riana Pfefferkorn of the Stanford Institute for Human-Centered Artificial Intelligence.


Translating Centralized AI Principles Into Localized Practice
Dylan Walsh
Jan 13, 2026
News

Scholars develop a framework in collaboration with luxury goods multinational LVMH that lays out how large companies can flexibly deploy principles on the responsible use of AI across business units worldwide.


There’s One Easy Solution To The A.I. Porn Problem
The New York Times
Jan 12, 2026
Media Mention

Riana Pfefferkorn, Policy Fellow at HAI, urges immediate Congressional hearings to scope a legal safe harbor for AI-generated child sexual abuse materials following a recent scandal with Grok's newest generative image features.
