Policy Brief

Validating Claims About AI: A Policymaker’s Guide

Date: September 24, 2025
Topics: Foundation Models; Privacy, Safety, Security
Abstract

This brief proposes a practical validation framework to help policymakers separate legitimate claims about AI systems from unsupported claims.

Key Takeaways

  • AI companies often use benchmarks to test their systems on narrow tasks but then make sweeping claims about broad capabilities like “reasoning” or “understanding.” This gap between testing and claims is driving misguided policy decisions and investment choices.

  • Our systematic, three-step framework helps policymakers separate legitimate AI capabilities from unsupported claims by outlining key questions to ask: What exactly is being claimed? What was actually tested? And do the two match?

  • Even rigorous benchmarks can mislead: We demonstrate how the respected GPQA science benchmark is often used to support inflated claims about AI reasoning abilities. The issue is not just bad benchmarks; it is how results are interpreted and marketed.

  • High-stakes decisions about AI regulation, funding, and deployment are already being made based on questionable interpretations of benchmark results. Policymakers should use this framework to demand evidence that actually supports the claims being made.

Executive Summary

When OpenAI claims GPT-4 shows “human-level performance” on graduate exams, or when Anthropic says Claude demonstrates “graduate-level reasoning capabilities,” how can policymakers verify these claims are valid? The impact of these assertions goes far beyond company press releases. Claims based on benchmark results are increasingly influencing regulatory decisions, investment flows, and model deployment in critical systems.

The problem is one of overstating claims: Companies test their AI models on narrow tasks (e.g., multiple-choice science questions) but then make sweeping claims about broad capabilities based on these narrow task results (e.g., models exhibiting broader “reasoning” or “understanding” based on Q&A benchmarks). Consequently, policymakers and the public are left with limited, potentially misleading assessments of the capabilities of the AI systems that are increasingly permeating their everyday lives and society’s safety-critical processes. This pattern appears across AI evaluations more broadly. For example, we may incorrectly conclude that if an AI system accurately solves a benchmark of International Mathematical Olympiad (IMO) problems, it has reached human-expert-level mathematical reasoning. However, this capability also requires common sense, adaptability, metacognition, and much more beyond the scope of the narrow evaluation based on IMO questions. Yet such overgeneralizations are common.

In our paper “Measurement to Meaning: A Validity-Centered Framework for AI Evaluation,” we propose a practical and structured approach that cuts through the hype by asking three simple questions: What exactly is someone claiming about their AI system? What did they actually test using a benchmark? And what is the evidence that their claim is valid based on that test? We focus on five key types of validity that are most relevant for evaluating AI systems today.

Policymakers must assess a growing number of claims about AI systems, including, but not limited to, their capabilities, risks, and societal impacts. We aim to provide policymakers and the public with a formalized, scientifically grounded way to investigate which claims about an AI model are supported — and which aren’t. The validation framework presented in this brief is designed to evaluate all such claims. However, given the recent surge in capability claims from AI developers, this brief focuses on how to validate capability-related claims. 

Model benchmarks serve as a powerful tool to evaluate AI systems. However, policymakers must work with developers and researchers to more rigorously define, report, and understand evaluations. Our targeted approach demonstrates how to use this systematic, evidence-based framework to cut through the hype and ensure policy decisions rest on solid evidence, avoiding costly miscalculations.

Introduction

Benchmarks have long helped align academia, industry, and other stakeholders around defining criteria to measure progress in specific AI systems. Evaluations have primarily aimed at measuring scientific progress — for example, performance on ImageNet, a large-scale image classification benchmark, has been viewed as an indicator of general scientific progress in AI methods. When new optimizers, architectures, or training procedures perform better on benchmarks, they also tend to lead to the development of better models across other tasks. 

Today, the focus of evaluation has expanded from benchmarking methods to benchmarking models themselves, where benchmark performance is now taken as a proxy for real-world utility, often without sufficient evidence that this proxy relationship holds. Benchmark performance does not always equal reliable real-world performance or trustworthy decision-making. Model performance on a single benchmark can be overstated by conflating correlation with causation, discounting distribution shifts (where the statistical distribution of data changes between training and deployment), and downplaying the challenges with causal representation (understanding internal behavior based on observed data).
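
To make the distribution-shift point concrete, the toy Python sketch below (our own illustration, not drawn from the paper) fits a trivial threshold classifier on data from one distribution, reports its “benchmark” accuracy on a matched held-out set, and then re-evaluates it on data whose feature distribution has drifted. All function and variable names are invented for this example.

```python
# Toy illustration (not from the paper): benchmark accuracy vs. accuracy after
# a distribution shift. A single-feature threshold classifier is "benchmarked"
# on held-out data drawn from the same distribution it was fit on, then
# re-evaluated on data whose feature distribution has shifted.
import numpy as np

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    """Binary classification data: class 1 has a higher feature mean.
    `shift` moves both classes, mimicking a change between the training
    and deployment conditions."""
    y = rng.integers(0, 2, size=n)
    x = rng.normal(loc=y * 2.0 + shift, scale=1.0, size=n)
    return x, y

# "Training": pick the threshold that best separates the classes in-sample.
x_train, y_train = sample(5_000)
thresholds = np.linspace(x_train.min(), x_train.max(), 200)
accs = [((x_train > t).astype(int) == y_train).mean() for t in thresholds]
best_t = thresholds[int(np.argmax(accs))]

def accuracy(x, y, t):
    return ((x > t).astype(int) == y).mean()

# Benchmark-style evaluation: same distribution as training.
x_id, y_id = sample(5_000)
# Deployment-style evaluation: the feature distribution has drifted.
x_ood, y_ood = sample(5_000, shift=1.5)

print(f"in-distribution accuracy : {accuracy(x_id, y_id, best_t):.2f}")
print(f"shifted-data accuracy    : {accuracy(x_ood, y_ood, best_t):.2f}")
```

The gap between the two printed numbers is the point: the same model and the same task definition, but a different data distribution, and the headline benchmark score no longer describes real-world performance.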

Foundation models, which can operate across diverse tasks out of the box, further complicate the translation of narrow measurements into broad conclusions. Foundation models are not trained — and rarely tested — with a specific task in mind. Instead, in the absence of such concrete use cases, model developers try to test for more general (and often abstract) skills of these general-purpose models, such as “reasoning” or “intelligence,” which they assume will be useful across a variety of tasks and therefore predictive of broad and diverse downstream utility. However, designing a meaningful, valid test for such abstract capabilities is much harder than designing an evaluation that tests whether a model is good at one specific task. Collectively, these trends and tendencies increase the likelihood that companies and researchers may intentionally or unintentionally overstate a model’s capabilities.

Our paper builds on prior literature by explicitly arguing that validity (i.e., the degree to which evidence and theory support the interpretations of test scores) depends not just on the measurement and evaluation of a model, but also on the claim that is being made about its capabilities. We lay out a three-step validation process for testing capability claims about AI models. While our framework can also be applied to testing other claims, such as about AI models’ risks or other downstream impacts, we focus in this brief specifically on testing claims about AI model capabilities. 

First, we must decide the object of our claim: Is it a criterion (i.e., something that can be measured directly, such as arithmetic accuracy) or a construct (i.e., something abstract that cannot be measured directly, such as “intelligence”)? Second, we must explicitly state the claim — that is, what we want to say about the criterion (e.g., “model A can be used as a calculator”) or the construct (e.g., “model A is intelligent”). Third, we must identify or perform experiments to gather evidence and assess whether it supports the desired claim (e.g., high arithmetic accuracy supports the calculator claim but is weak evidence of intelligence) — or, in the case of reported benchmarks, decide whether the benchmark truly supports our (or a model developer’s) claim. Aligning what is measured, how it is interpreted, and the overarching claim is central to establishing validity.
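
As one simplified way to make these three steps concrete, the hypothetical Python sketch below encodes a claim’s object (criterion or construct), its explicit statement, and the evidence offered for it, then flags the crudest kinds of mismatch. The class, field, and function names are our own invention for illustration; the paper’s framework is conceptual and prescribes no software.

```python
# Illustrative sketch only: a minimal, hypothetical encoding of the three-step
# validation process (claim object -> explicit claim -> evidence check).
# All names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class CapabilityClaim:
    object_name: str   # what the claim is about, e.g., "arithmetic accuracy"
    object_kind: str   # "criterion" (directly measurable) or "construct" (abstract)
    statement: str     # the explicit claim, e.g., "model A can be used as a calculator"
    evidence: list = field(default_factory=list)  # benchmarks/experiments offered as support

def flag_validity_gaps(claim: CapabilityClaim) -> list:
    """Return human-readable warnings when the evidence plainly cannot
    support the claim as stated. A real validity review requires expert
    judgment; this only catches the crudest mismatches."""
    warnings = []
    if not claim.evidence:
        warnings.append("No evidence offered for the claim.")
    if claim.object_kind == "construct":
        warnings.append(
            "Claim targets an abstract construct; a single benchmark score "
            "cannot establish it without an argument linking the measurement "
            "to the construct."
        )
    for e in claim.evidence:
        if claim.object_name.lower() not in e.lower():
            warnings.append(
                f"Evidence '{e}' does not obviously measure '{claim.object_name}'; "
                "ask how the two are linked."
            )
    return warnings

# Example: a narrow benchmark offered in support of a broad, construct-level claim.
claim = CapabilityClaim(
    object_name="reasoning",
    object_kind="construct",
    statement="Model A demonstrates graduate-level reasoning.",
    evidence=["GPQA multiple-choice science benchmark"],
)
for w in flag_validity_gaps(claim):
    print("-", w)
```

Run on the example claim, the sketch flags that a construct-level claim needs an explicit argument linking the benchmark to the construct, and that the cited benchmark does not obviously measure “reasoning,” which are precisely the questions the framework asks policymakers to press on.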

Authors
  • Olawale Salaudeen
  • Anka Reuel
  • Angelina Wang
  • Sanmi Koyejo