Inside Trump’s Ambitious AI Action Plan

Pres. Donald Trump signs executive orders in the Oval Office. Official White House Photo by Abe McNatt
The White House favors market-driven growth and light governance, signaling a departure from the previous administration.
The White House’s newly released AI Action Plan outlines an assertive vision for U.S. leadership in AI. Framed around the imperative to “win the AI race,” the 28-page plan identifies removing regulatory barriers, building out foundational AI infrastructure, and harnessing private sector momentum as the primary levers for accelerating innovation.
This vision reflects a notable shift from the Biden Administration’s now-rescinded Executive Order (EO) on AI, which emphasized whole-of-government coordination, safeguards against AI-enabled harms, and government capacity-building. By contrast, the Trump Administration’s action plan lays out a policy blueprint that leans into market-driven approaches — promoting open models, accelerating infrastructure development, and expanding global adoption of U.S. AI models. Three accompanying EOs further detail this vision by issuing directives on exporting U.S. AI technology, accelerating federal permitting for data centers, and preventing “woke AI” in the federal government.
The new approach also represents continuity on several priorities, including advancing the government’s ability to evaluate models, opening access to computing resources, and encouraging AI-enabled science. Yet across the board, the action plan does not at this stage provide concrete implementation details: many provisions lack clear timelines, designated agency responsibilities, and funding pathways. Here, we offer a preliminary analysis of the action plan’s implications for public sector AI innovation, AI governance, workforce and skill development, and implementation.
Implications for Academia and Public Sector AI
The action plan emphasizes open innovation, public AI research infrastructure, and AI-enabled scientific discovery as key pillars of U.S. leadership in AI. It reaffirms federal support for expanding access to computing and data resources through a renewed commitment to the National AI Research Resource (NAIRR) and calls for public-private partnerships with the research community. The Stanford Institute for Human-Centered AI (HAI) has long advocated for a NAIRR, a cause first championed by co-directors Fei-Fei Li and John Etchemendy in 2019.
The plan’s approach focuses heavily on infrastructure that will be built and operated by the private sector. While industry plays a critical role in scaling innovation, we believe long-term U.S. leadership in AI will depend on sustained investment in the public sector institutions — universities, nonprofit organizations, and independent labs — that form the backbone of the broader AI innovation ecosystem.
Notably, the plan offers the strongest federal endorsement to date of open-source and open-weight AI models. It directs the Department of Commerce to convene stakeholders to drive open model adoption among small and medium-sized businesses — a step that could also benefit researchers by lowering barriers to experimentation and broadening access to cutting-edge tools. This is a welcome step toward ensuring that innovation is not concentrated among a handful of actors and that the benefits of AI are more broadly distributed.
The plan also calls for advancing AI as a research discipline and supporting scientific breakthroughs across fields such as biology, chemistry, and materials science — primarily through privately developed, cloud-enabled labs. It designates scientific data as a strategic asset, proposing minimum data quality standards and requiring federally funded researchers to disclose non-proprietary, non-sensitive datasets used in their research. These proposals are promising, particularly when paired with efforts like the National Science Foundation’s initiative to enable secure public and agency access to federal data.
Shifting From Prescriptive Regulation to Light-Touch Governance
The action plan marks a shift away from prescriptive regulation. Referencing the rollback of the Biden AI EO, it directs agencies to consider “pro-innovation” state policies when allocating federal funding. Yet it retains a role for federal governance mechanisms to manage risks, focusing on technical standards, model evaluations, and other agency-specific tools.
Key mechanisms include:
A national evaluation ecosystem led by NIST’s Center for AI Standards and Innovation (CAISI): Building on existing technical evaluation efforts, CAISI — formerly the U.S. AI Safety Institute — is tasked with developing interagency and sector-specific model evaluation tools and convening public and academic stakeholders to share best practices.
Regulatory sandboxes and AI Centers of Excellence: These proposed mechanisms, which would enable real-world AI testing, signal promising directions for sector-specific experimentation but need additional details on scope and implementation.
An AI Information Sharing and Analysis Center (AI-ISAC) led by the Department of Homeland Security: This structure for sharing AI-specific cybersecurity threat intelligence is a promising step toward creating adverse event reporting mechanisms in government.
Export controls and infrastructure-driven global influence: The plan aims to expand U.S. strategic influence in the global AI ecosystem by coordinating export restrictions with allies and expanding access to and dependencies on U.S.-developed AI infrastructure.
The plan’s emphasis on technical evaluations and other information-gathering mechanisms reflects an important move toward evidence-based policymaking. However, it leaves key risks underaddressed. Notably absent is a clear federal commitment to protecting the public from AI-enabled harms such as fraud, discrimination, privacy violations, and child sexual exploitation.
A Commitment to American Workers
The action plan’s “worker-first” AI agenda stands out as another distinct feature of this administration’s AI vision — echoing themes from Vice President JD Vance’s speech at the Paris AI Action Summit. Its focus on upskilling the country’s broad workforce, particularly through career and technical education as well as apprenticeships, signals a serious acknowledgment of the impending labor market impacts of AI. Recognizing that effective retraining requires substantial evidence and accurate predictions about future labor demand, the plan calls for the establishment of an AI Workforce Research Hub. The hub would consolidate and analyze data from statistical agencies — an important starting point for improving the evidence base needed to guide smart, forward-looking workforce investments.
Potential Implementation Roadblocks Ahead
The action plan is wide-ranging in scope, assigning 103 policy actions to a broad array of federal agencies and entities, with the Department of Commerce (especially NIST and its CAISI) and the Department of Defense carrying some of the most extensive mandates. Many of the provisions are concrete, with tangible outcomes: calls for requests for information and guidance documents on specific issue areas, the establishment of research hubs and centers of excellence, and multi-stakeholder convenings on specific policy issues.
Yet while the ambition is clear, the plan’s real-world impact remains uncertain given the scarcity of implementation details. Around a third of the recommended actions do not identify a lead agency. None of the tasks comes with an implementation timeline, nor is there clarity on whether new funding or other resources will be allocated.
It is important to recognize that the plan is a policy roadmap, not an Executive Order carrying the force of law; its actions are advisory rather than legally binding. Without further details, potentially impactful proposals — such as regulatory sandboxes for AI testing, interagency guidelines for model evaluations, and AI interpretability and robustness programs — face an uncertain future.
Amid ongoing personnel reorganization and budget changes across federal agencies, questions remain about who will spearhead meaningful implementation moving forward. While the plan recognizes roles like Chief AI Officers, realizing their full potential will require clearer mechanisms for interagency coordination and greater investment in building technical expertise across the federal workforce.
What’s Next?
The action plan marks a meaningful effort to chart a new course for AI innovation and governance in the United States. Beyond the themes highlighted above, the plan also proposes actions in several other areas — such as military applications of AI, education and training initiatives, and specific agency mandates — that warrant further attention. In the near term, however, much of that activity is likely to center on the three aforementioned Executive Orders released alongside the plan.
As our prior work tracking and analyzing the implementation of AI-related executive actions released since 2019 has shown, successful implementation will depend on senior leadership, sustained resourcing, and effective interagency coordination. How this plan evolves into an actionable national strategy will hinge on how the Administration activates those elements in the months ahead.
Authors: Caroline Meinhardt is the policy research manager at Stanford HAI. Daniel Zhang is the senior manager for policy initiatives at Stanford HAI. Jennifer King is the privacy and data policy fellow at Stanford HAI. Andreas Haupt is a postdoctoral fellow at Stanford HAI and the Digital Economy Lab. Elena Cryst is the director of policy and society at Stanford HAI.

