
Tracking U.S. Executive Action on AI

As the U.S. federal government continues to move toward greater adoption and governance of AI, there is a vital need for transparent and effective leadership and implementation of related executive actions. Stanford HAI scholars have analyzed the state of implementation of a variety of AI-related executive actions released since 2019 across administrations, providing insights into federal efforts to lead in and govern AI.

Our research aims to enhance public accountability and transparency and highlights both accomplishments and challenges in implementation. Through detailed tracking and analysis, we have observed improved agency compliance and reporting on AI-related requirements. However, persistent challenges underscore the need for sustained senior-level leadership to ensure a “whole-of-government” approach to responsible AI innovation and governance.


 

White Paper (January 2025):
Assessing the Implementation of Federal AI Leadership and Compliance Mandates 

Relevant executive actions: EO 14110 (2023) and OMB Memo M-24-10 (2024)

In this white paper, scholars from Stanford HAI, the Stanford RegLab, and the Administrative Conference of the United States assess the implementation of several key requirements included in two executive actions taken by the Biden administration: Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of AI and the corresponding Office of Management and Budget Memorandum M-24-10 on Advancing Governance, Innovation, and Risk Management for Agency Use of AI (M-Memo), both of which have since been rescinded.

This review of agencies’ implementation of mandates to appoint Chief AI Officers and issue plans for complying with the M-Memo, as well as their budgetary allocations to support AI-related initiatives, shows that White House leadership and agencies have taken significant steps toward organizing and elevating AI leadership. The white paper also points to areas in need of improvement. A “whole-of-government” approach to AI innovation continues to require senior-level leadership that shepherds consistent compliance across distinct government agencies.

Read the full white paper here


AI EO Tracker (2023-2024):
Analyzing the Safe, Secure, and Trustworthy AI EO 

Relevant executive actions: EO 14110 (2023)

Access our public AI EO Tracker (Google Sheet) | Download AI EO Tracker (Excel)

To follow and assess the federal government’s actions on AI under the Biden administration’s now-rescinded Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI EO), scholars from Stanford HAI, the Stanford RegLab, and Stanford CRFM published and periodically updated the Safe, Secure, and Trustworthy AI EO Tracker (“AI EO Tracker”). This line-level tracker lists the 150 distinct requirements that agencies and other federal entities had to implement under the AI EO, and provides our assessment of their implementation status based on publicly available information (see a detailed explanation of our methodology here).

Our accompanying analyses find that the federal government made significant and swift progress in implementing the AI EO, demonstrating admirable improvements in transparent public reporting. However, White House and agency reporting on implementation varied greatly in terms of the level of detail and accessibility, highlighting that there remains room for the government to provide more detailed and structured information.

Day 1 Update
Our analysis of the EO and review of its 150 distinct requirements highlight an ambitious scope and whole-of-government approach. They also signal a pragmatic consideration of which policy issue areas require strict deadlines to achieve urgent outcomes.
 

Day 90 Update
Proactive White House reporting is a significant improvement over prior implementation of AI EOs. Despite swift progress, we could only confirm implementation of 11 (52%) of the 21 requirements due within 90 days and claimed as completed by the White House.
 

Day 180 Update
The federal government continues to make progress, but independent verification of implementation remains difficult. We could only confirm implementation of 49 (71%) of the 69 requirements due within 180 days and claimed as completed by the White House.


 

White Paper (December 2022): 
Implementation Challenges to Three Pillars of America’s AI Strategy 

Relevant executive actions: AI In Government Act (2020), EO 13859 (2019), EO 13960 (2020)

This white paper, published in collaboration with the Stanford RegLab, assesses the progress of three pillars of U.S. leadership in AI innovation and trustworthy AI under the first Trump administration that carry the force of law: (i) the AI in Government Act of 2020; (ii) Executive Order 13859 on “AI Leadership”; and (iii) Executive Order 13960 on “AI in Government.” Collectively, these EOs and the AI in Government Act have been critical to defining the U.S. national strategy on AI and envisioning an ecosystem where the U.S. government leads in AI and promotes trustworthy AI.

We systematically examined the implementation status of each requirement and performed a comprehensive search across more than 200 federal agencies to assess implementation of key requirements, including mandates to identify regulatory authorities pertaining to AI and to enumerate AI use cases. While much progress has been made, our findings are sobering. America’s AI innovation ecosystem is threatened by weak and inconsistent implementation of these legal requirements. Difficulties in verifying implementation strongly suggest that improvements must be made in the reporting and tracking of requirements deemed necessary for public disclosure.
Read the full white paper here