
AI Index 2025: State of AI in 10 Charts

Date
April 07, 2025
Topics
Economy, Markets
Finance, Business
Foundation Models
Generative AI
Industry, Innovation
Regulation, Policy, Governance

Small models get better, regulation moves to the states, and more.

The new AI Index Report shows a maturing field, continued gains in AI efficiency, and increasingly widespread use (and abuse) of this technology.

The 2025 AI Index Report, published on April 7, 2025, is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. 

Each year, the report covers the biggest technical advances, new achievements in benchmarking, investment flowing into generative AI, education trends, legislation around this technology, and more.

Read the full report here, or see below for 10 key takeaways:

Smaller Models Get Better

In 2022, the smallest model registering a score higher than 60% on the Massive Multitask Language Understanding (MMLU) benchmark was PaLM, with 540 billion parameters. By 2024, Microsoft’s Phi-3-mini, with just 3.8 billion parameters, achieved the same threshold: a 142-fold reduction in parameter count in just two years.
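As a quick arithmetic check of the reduction cited above, using the parameter counts from this paragraph (an illustration, not a calculation from the report itself):

```python
# Smallest models scoring above 60% on MMLU, per the figures above.
palm_params = 540e9   # PaLM (2022): 540 billion parameters
phi3_params = 3.8e9   # Phi-3-mini (2024): 3.8 billion parameters

reduction = palm_params / phi3_params
print(f"{reduction:.0f}x fewer parameters")  # 142x fewer parameters
```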

Models Become Cheaper to Use

The cost of querying an AI model that scores the equivalent of GPT-3.5 (64.8% accuracy) on MMLU dropped from $20 per million tokens in November 2022 to just $0.07 per million tokens by October 2024 (Gemini-1.5-Flash-8B)—a more than 280-fold reduction in approximately 18 months. Depending on the task, LLM inference prices have fallen anywhere from 9 to 900 times per year.
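The "more than 280-fold" figure follows directly from the two price points quoted above (a sketch using the article's numbers):

```python
# Per-million-token cost of GPT-3.5-level MMLU performance, per the text above.
cost_2022 = 20.00   # USD per million tokens, Nov 2022 (GPT-3.5)
cost_2024 = 0.07    # USD per million tokens, Oct 2024 (Gemini-1.5-Flash-8B)

fold = cost_2022 / cost_2024
print(f"{fold:.0f}-fold cheaper")  # 286-fold, i.e. "more than 280-fold"
```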

China’s Models Catch Up

The U.S. still leads in producing top AI models—but China is closing the performance gap. In 2024, U.S.-based institutions produced 40 notable AI models, compared to China’s 15 and Europe’s three. While the U.S. maintains its lead in quantity, Chinese models have rapidly closed the quality gap: performance differences on major benchmarks such as MMLU and HumanEval shrank from double digits in 2023 to near parity in 2024. China also continues to lead in AI publications and patents.

A Jump in Problematic AI

According to one index tracking AI harm, the AI Incident Database, the number of AI-related incidents rose to 233 in 2024—a record high and a 56.4% increase over 2023. Among the incidents reported were deepfake intimate images and chatbots allegedly implicated in a teenager’s suicide. While the database is not comprehensive, it reflects a sharp rise in reported harms.
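The 2023 baseline implied by these two figures can be recovered from the growth rate (a hypothetical back-calculation from the numbers above, not a figure quoted by the article):

```python
incidents_2024 = 233         # record high reported in the database
increase_over_2023 = 0.564   # 56.4% year-over-year increase

implied_2023 = incidents_2024 / (1 + increase_over_2023)
print(round(implied_2023))   # about 149 incidents in 2023
```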

The Rise of More Useful Agents

AI agents show early promise. The launch of RE-Bench in 2024 introduced a rigorous benchmark for evaluating complex tasks for AI agents. In short time-horizon settings (two hours), top AI systems score four times higher than human experts, but when given more time to do a task, humans perform better than AI—outscoring it 2-to-1 at 32 hours. Still, AI agents already match human expertise in select tasks, such as writing specific types of code, while delivering results faster. 

Sky-High AI Investment

The U.S. widened its commanding lead in global AI investment. U.S. private AI investment hit $109 billion in 2024, nearly 12 times higher than China's $9.3 billion and 24 times the UK's $4.5 billion. The gap is even more pronounced in generative AI, where U.S. investment exceeded the combined European Union and UK total by $25.5 billion, up from a $21.1 billion gap in 2023.
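The investment multiples cited above check out against the dollar figures in the same sentence (illustrative arithmetic only; amounts are as quoted in the paragraph):

```python
us_investment = 109.0   # US private AI investment, 2024, in billions USD
china_investment = 9.3  # China, 2024, in billions USD
uk_investment = 4.5     # UK, 2024, in billions USD

print(f"US vs China: {us_investment / china_investment:.1f}x")  # 11.7x, "nearly 12 times"
print(f"US vs UK: {us_investment / uk_investment:.1f}x")        # 24.2x, "24 times"
```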

AI Goes Corporate

Businesses are turning to AI. In 2024, the proportion of survey respondents reporting AI use by their organizations jumped to 78% from 55% in 2023. Similarly, the share of respondents who reported using generative AI in at least one business function more than doubled—from 33% in 2023 to 71% in 2024.

Health AI Floods the FDA

The number of FDA-approved, AI-enabled medical devices skyrocketed. The FDA authorized its first AI-enabled medical device in 1995. By 2015, only six such devices had been approved, but the number spiked to 223 by 2023. 

In the U.S., Regulation Moves to the States

U.S. states are leading the way on AI legislation amid slow progress at the federal level. In 2016, only one state-level AI-related law was passed, increasing to 49 by 2023. In the past year alone, that number more than doubled to 131. While proposed AI bills at the federal level have also increased, the number passed remains low.

Asia Shows More AI Optimism

Regional differences persist regarding AI optimism. A large majority of people believe AI-powered products and services offer more benefits than drawbacks in countries like China (83%), Indonesia (80%), and Thailand (77%), while only a minority share this view in Canada (40%), the United States (39%), and the Netherlands (36%).

Want more? Dig into the full report, or test your knowledge with the 2025 AI Index quiz.

Authors
  • Nestor Maslej

Related News

Smart Enough to Do Math, Dumb Enough to Fail: The Hunt for a Better AI Test
Andrew Myers
Feb 02, 2026
News

A Stanford HAI workshop brought together experts to develop new evaluation methods that assess AI's hidden capabilities, not just its test-taking performance.


What Davos Said About AI This Year
Shana Lynch
Jan 28, 2026
News

World leaders focused on ROI over hype this year, discussing sovereign AI, open ecosystems, and workplace change.


AI Leaders Discuss How To Foster Responsible Innovation At TIME100 Roundtable In Davos
TIME
Jan 21, 2026
Media Mention

HAI Senior Fellow Yejin Choi discussed responsible AI model training at Davos, asking, “What if there could be an alternative form of intelligence that really learns … morals, human values from the get-go, as opposed to just training LLMs on the entirety of the internet, which actually includes the worst part of humanity, and then we then try to patch things up by doing ‘alignment’?” 
