
Transparency of AI EO Implementation: An Assessment 90 Days In

The U.S. government has made swift progress and broadened transparency, but that momentum needs to be maintained for the next looming deadlines.

[Image: White House in winter. Official White House Photo by Carlos Fyfe]

January 28, 2024 was a milestone in U.S. efforts to harness and govern AI. It marked 90 days since President Biden signed Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“the AI EO”). The government is moving swiftly on implementation with a level of transparency not seen for prior AI-related EOs. A White House Fact Sheet documents actions taken to date, a welcome departure from prior opacity on implementation. The breadth of the federal government’s progress on the AI EO is a success for the Biden administration and the United States, and it also demonstrates the importance of our research on the need for leadership, clarity, and accountability.

Today, we announce the first update of our Safe, Secure, and Trustworthy AI EO Tracker, a detailed, line item-level tracker created to follow the federal government’s implementation of the AI EO’s 150 requirements.

View the Safe, Secure, and Trustworthy AI EO Tracker


The White House reports that all 21 requirements in the first stage of the AI EO were “completed,” suggesting a 100 percent success rate. We could conclusively verify only 19 of those requirements as fully implemented or showing substantive progress, meaning roughly 90 percent of the first-stage “completed” requirements are verifiable through publicly available evidence.

As we explained in our initial analyses, the EO is a bold and urgent effort to rally AI action across the federal government. However, its success hinges on timely and transparent implementation by federal entities. Our review of publicly available information found that there are still noticeable gaps in the public disclosure of some AI EO requirements. Although the sensitivity of some requirements warrants less disclosure (e.g., disclosures raising cybersecurity or national security concerns), public verifiability remains an issue with other requirements.

How We Track Implementation

Our updated tracker provides information on the implementation status of 39 requirements based on 1) the White House’s Fact Sheet; 2) announcements or public statements made by the responsible federal entities or officials; and 3) official documents, media reports, and other conclusive evidence regarding specific line items publicly available as of February 8, 2024.

Requirements are marked “implemented” if there is sufficient public evidence of full implementation (e.g., official announcements, independent reporting) separate from the Fact Sheet. For example, section 5.2(a)(i)’s requirement that the National Science Foundation (NSF) launch a pilot of the National AI Research Resource (NAIRR) is verified not just by the Fact Sheet, but also through the launch of the NAIRR pilot website and major announcements by NSF, agency partners, and non-government partners. We marked requirements as “not verifiably implemented” if we could not find conclusive evidence of full implementation, such as section 8(c)(i) requirements on AI in transportation. A requirement is deemed “in progress” if it has been completed only partially, the extent of progress is ambiguous, or full completion of the requirement necessitates ongoing action. For example, while the Department of State’s pilot program to conduct visa renewal interviews within the United States satisfies section 5.1(a)(i), we marked this requirement as in progress because the action does not fully satisfy 5.1(a)(ii), which requires the continued availability of sufficient visa appointments for AI experts.
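The decision rules above can be sketched as a small classification helper. This is purely illustrative: the tracker itself is compiled manually, and the function and parameter names here are our own, not part of the tracker’s published methodology.

```python
def classify(conclusive_evidence_of_full_implementation: bool,
             only_partially_complete: bool,
             requires_ongoing_action: bool) -> str:
    """Assign a tracker status from publicly available evidence.

    Mirrors the rules described above: partial completion, ambiguous
    progress, or a need for ongoing action yields "in progress";
    conclusive public evidence of full implementation (beyond the Fact
    Sheet alone) yields "implemented"; anything else is
    "not verifiably implemented".
    """
    if only_partially_complete or requires_ongoing_action:
        return "in progress"
    if conclusive_evidence_of_full_implementation:
        return "implemented"
    return "not verifiably implemented"


# NSF's NAIRR pilot (section 5.2(a)(i)): independently verified launch.
print(classify(True, False, False))   # implemented
# The visa requirements in section 5.1(a): the pilot satisfies
# 5.1(a)(i), but 5.1(a)(ii) demands continued availability.
print(classify(True, False, True))    # in progress
# Section 8(c)(i) on AI in transportation: no conclusive evidence found.
print(classify(False, False, False))  # not verifiably implemented
```

Note that the "in progress" checks take precedence: a requirement with some verified action still counts as in progress if it demands ongoing effort, matching how we treated the visa example.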

Impressive Improvements on Transparency

The federal government has been admirably transparent over the past three months about its implementation efforts related to the AI EO. The White House Fact Sheet, published at the 90-day milestone, collates a list of agency actions that have been completed, alongside their prescribed deadlines. Federal agencies have also issued timely statements regarding their activities. For example, the National Institute of Standards and Technology (NIST) created a dedicated web page that outlines the agency’s responsibilities, deadlines, and news related to AI EO implementation, and Secretary of Commerce Gina Raimondo has made several announcements on EO-related actions.

These proactive disclosures are a significant improvement on previous government efforts to implement AI-related EOs, which prior research by Stanford HAI and the RegLab has found to be inconsistent and poorly disclosed. Last year, Congress responded to the criticisms raised by this research in conjunction with increased media attention by holding hearings and requesting regular updates from agencies regarding their AI governance initiatives. 

The Administration seems to have taken these lessons to heart. The AI EO includes much clearer definitions of tasks, assigned responsibilities, timelines, and reporting requirements. And public accountability mechanisms appear to have spurred efforts to fill the leadership vacuum—both at the White House and at agencies—and address resource shortages in the federal government’s AI policy apparatus. 

Progress Thus Far

In just 90 days, the executive branch has made serious progress. According to the White House, federal agencies have completed 29 actions in response to the AI EO, which we mapped against 31 distinct requirements in our tracker. Of these, 21 requirements were due on or before the 90-day deadline. 

We confirmed implementation progress for 19 of those 21 requirements (90 percent): We verified 11 of them (52 percent) as fully implemented and eight of them (38 percent) as in progress (see methodology above). For two requirements (10 percent), we could not find distinct, conclusive evidence to confirm the White House’s claim of full implementation (see discussion of these requirements in Areas for Improvement below). 

[Table: Number of requirements verified as implemented, in progress, or not verifiable.]

The White House Fact Sheet also references 10 additional requirements that do not have deadlines or have deadlines that have not yet passed. We verified four of these requirements as fully implemented and five requirements as in progress, while one requirement could not be verified as implemented or in progress. Considering that many of these tasks are not due for several months, this shows impressive early progress. In addition, we noted early implementation efforts related to eight additional requirements that were not even referenced in the Fact Sheet. 

[Table: Summary of federal entities’ implementation of requirements without deadlines or whose deadlines have not yet passed.]

Of note, the National Science Foundation (NSF) formally launched its pilot of the National AI Research Resource (NAIRR) in partnership with 10 federal agencies and 25 non-governmental partners. As part of the launch, the NSF and the Department of Energy issued an early call for requests to access advanced computing resources. This pilot is a historic first step toward leveling the playing field for equipping researchers with much-needed compute and data resources, thereby strengthening long-term U.S. leadership in AI research and innovation. 

The federal government’s launch of an AI talent surge is another critical step on an issue many think may be the biggest impediment to responsible AI innovation in government. Beyond prioritizing AI specialists through the U.S. Digital Service and Presidential Innovation Fellows, federal agencies like the Department of Homeland Security are pursuing new AI recruitment and talent initiatives. For example, the Office of Personnel Management has authorized government-wide direct hire authority for AI specialists and launched a pooled hiring action for data scientists. The full scale of the AI hiring surge, however, remains unknown (so we deemed it still in progress), an important gap given the acute need for technical talent in government.

In addition, many agencies have taken first steps to mobilize resources that will allow them to implement more substantive rule-making requirements in the coming months, including establishing two new task forces and publishing seven requests for information or comments. 

Areas for Improvement

Despite substantial improvements over the implementation of previous AI-related EOs, there are still gaps. White House and agency reporting on implementation varies greatly in terms of the level of detail and accessibility. Outside observers may find it difficult to independently verify claims that particular requirements have been completed. 

There are a number of reasonable rationales that counsel against complete transparency in implementation, including concerns related to national security and cybersecurity as well as the tight time constraints imposed by the AI EO. For example, the Department of Homeland Security did not release detailed information pertaining to agency assessments of the risk posed by AI use in critical infrastructure systems required under section 4.3(a)(i); however, full disclosure of these risk assessments or related information could pose national security risks. Even the disclosure of which agencies have not completed their risk assessments could be considered sensitive information if it indicates an agency's lack of readiness. Similarly, the Department of Commerce did not reveal the details of its interactions with, or information provided by, companies regarding dual-use foundation model reporting under the Defense Production Act (DPA), pursuant to section 4.2(a) of the AI EO. Despite the incomplete information, there are sufficient public statements and independent reporting for us to deem the DPA-related requirements as in progress.

But in other instances where public reporting of implementation details should not implicate security concerns, public reporting remains incomplete. In these cases, publicly available information consists only of high-level statements from the White House and/or agency heads, without additional formal announcements or statements on the relevant entity’s website. For instance, we could not find specific information beyond the Fact Sheet to support the claim that the Department of Transportation, as part of its implementation of section 8(c), established a new Cross-Modal Executive Working Group or directed several councils and committees to provide advice on the safe and responsible use of AI in transportation. And while a senior official at the Department of Health and Human Services (HHS) noted in Congressional testimony that HHS launched the department-wide AI task force required under section 8(b)(i), more information on task force activities is not publicly available. Other times, information may be publicly available but difficult to find. While many agencies have dedicated AI-related web pages, some sites are rarely refreshed and have not been updated in years. 

A centralized resource that details all government action on AI policy could better facilitate accountability, public engagement, and feedback. For previous AI-related EOs, AI.gov served as a repository of information on AI policy activities across agencies. However, as this website has largely been repurposed to advertise the AI talent surge and collate public comments on draft rules, this repository function has been removed.

Looking Ahead

Progress in the first 90 days following the AI EO demonstrates that federal officials have mobilized considerable resources to ensure swift implementation and transparency—helping make the White House’s efforts to lead in AI innovation and governance a reality.

However, momentum must be maintained—and even increased—for the remaining 129 requirements. Another 48 deadlines face agencies within the next 90 days. Though agencies have already made progress on some of these tasks ahead of time, the mandated actions are significant. Notable requirements include a program to mitigate AI-related intellectual property risk, guidelines for the use of generative AI by federal employees, and an evaluation of AI systems’ biosecurity threats. As many submission windows for requests for information close, agencies will also have to digest and translate vast amounts of information into rules and guidance. Notably, the Office of Management and Budget will need to finalize its guidance on the federal government’s use of AI. Meanwhile, many of the requirements the White House has claimed as already implemented will require ongoing attention to achieve the EO’s intended outcomes.

We commend the White House and federal agencies on their transparency in documenting implementation progress. Our efforts to independently verify implementation with publicly available information highlight that there remains room for the government to provide even more detailed and structured information. They also reinforce the value of independent tracking initiatives and other public accountability mechanisms—both of which we will continue to contribute to.

Authors: Caroline Meinhardt is the policy research manager at Stanford HAI. Kevin Klyman is a research assistant at Stanford HAI and an MA candidate at Stanford University. Hamzah Daud is a research assistant at Stanford HAI and an MA candidate at Stanford University. Christie M. Lawrence is a concurrent JD/MPP student at Stanford Law School and the Harvard Kennedy School. Rohini Kosoglu is a policy fellow at Stanford HAI. Daniel Zhang is the senior manager for policy initiatives at Stanford HAI. Daniel E. Ho is the William Benjamin Scott and Luna M. Scott Professor of Law, Professor of Political Science, Professor of Computer Science (by courtesy), Senior Fellow at HAI, Senior Fellow at SIEPR, and Director of the RegLab at Stanford University. 
