White Paper
January 17, 2025
Assessing the Implementation of Federal AI Leadership and Compliance Mandates
Jennifer Wang, Mirac Suzgun, Caroline Meinhardt, Daniel Zhang, Kazia Nowacki, Daniel E. Ho
This white paper assesses federal efforts to advance leadership on AI innovation and governance through recent executive actions and emphasizes the need for senior-level leadership to achieve a whole-of-government approach.
Executive Summary
- Recent executive actions (i.e., Executive Order 14110 [AI EO] on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, and OMB M-24-10 [M-Memo] on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence) have aimed to improve innovation and governance of AI in government.
- Understanding the current state of implementation is vital to evaluating, improving, and solidifying the government’s efforts to lead and govern in AI. Prior assessments of earlier AI governance directives (i.e., Executive Order [EO] 13859 on AI Leadership, EO 13960 on AI in Government, and the AI in Government Act of 2020) revealed weak and inconsistent implementation that could impact the federal government’s ability to harness AI responsibly.
- We assess the implementation of requirements for agencies to (a) appoint Chief AI Officers (CAIOs) and (b) issue plans for internal strategic planning and governance (Compliance Plans), as well as (c) make associated budgetary requests. We examine implementation at 266 federal agencies, as well as subsets of agencies that are singled out in directives (e.g., the 24 Chief Financial Officers [CFO] Act agencies) and are large in size as designated by the Office of Personnel Management (i.e., 11 distinct non-CFO Act agencies with 1,000 or more employees). All the analysis presented in this paper is based on publicly available data as of October 20, 2024.
- Relative to prior AI-related legal requirements, White House leadership and agencies have significantly improved their implementation of these legal and policy requirements. For instance, while only 12 percent of covered agencies had published Agency AI Plans under EO 13859, 86 percent of (CFO Act and large independent) agencies submitted Compliance Plans or written determinations about AI use as mandated under the M-Memo.
- Notwithstanding this progress, we identify notable areas for improvement to achieve consistency with executive directives.
- We recommend greater public visibility and conceptualization of the CAIO role. Thus far, 30 percent of 266 agencies (80 agencies) have publicly disclosed their CAIOs on official government websites, with AI.gov centrally identifying these CAIOs. Ninety-four percent of CFO Act and large independent agencies have publicly disclosed their CAIOs.
- Of the publicly announced CAIOs, 89 percent are “dual hatted,” meaning they are officials with principal existing responsibilities (e.g., Chief Information Officer, Chief Data Officer) who received the additional assignment of CAIO.
- Nearly all CAIOs are internal appointments: Only one agency brought in a nongovernmental official.
- The education and professional backgrounds of CAIOs vary widely. Forty-five of 80 CAIOs (56 percent) have documented and identifiable experience in technology outside of government, but a minority of these appear to have been in AI-focused domains.
- The implementation of Agency Compliance Plans, although a significant improvement over prior assessments, remains incomplete. Fifty-five agencies publicly released Compliance Plans or written determinations regarding AI use, but the degree of detail, transparency, and focus of these plans varies significantly.
- Nineteen of the 24 CFO Act agencies completed and publicly posted their Compliance Plans by the deadline, and all 24 had done so within several weeks after the OMB deadline.
- Three of the 11 other large independent agencies, as classified by OPM, published their Compliance Plans or written determinations of no AI use by the original deadline, while three more did so within weeks following the deadline.
- Of agencies that filed Compliance Plans (42), 37 agencies (88 percent) reference the establishment of an internal AI governance body.
- The majority of these agencies (28 agencies, 67 percent) identified barriers to responsible use of AI in their Compliance Plans. The most common barriers include resource constraints, workforce and expertise, and technical infrastructure.
- While 25 agencies (60 percent) reported developing internal guidance for generative AI use, only 14 agencies (33 percent) specified establishing safeguards and oversight mechanisms, and only 9 agencies (21 percent) detailed how these safeguards are implemented.
- The level of funding requested for the new mandates and AI activities varied widely. Sixty-five percent of agencies did not specifically request funding for AI initiatives in their FY 2025 congressional budget justifications, although we acknowledge that uncertainty in the budgeting environment may have been a factor.
- Those that did requested, on average, $270,000 to support the operations of their CAIO offices. The major outlier is the Department of Defense, which proposed a budget of $435 million for its Chief Digital and AI Officer.
- While much more attention has been paid to implementing executive directives, our results confirm earlier findings that a “whole-of-government” approach to AI innovation continues to require senior-level leadership that shepherds consistent compliance across distinct government agencies.