Technical Performance | The 2026 AI Index Report | Stanford HAI

02

Technical Performance


A comprehensive overview of AI performance in 2025, spanning image, video, language, speech, reasoning, robotics, and agentic systems.


1. AI capability is outpacing the benchmarks designed to measure it, and surpassing human-level performance.

Frontier models gained 30 percentage points in a single year on Humanity's Last Exam, a benchmark built to be hard for AI and favorable to human experts. Evaluations intended to be challenging for years are saturated in months, compressing the window in which benchmarks remain useful for tracking progress.

2. Top model performance is converging: four companies are now clustered within 25 Elo points (a rating system adapted from chess) when models are rated head to head by human voters on the Arena Leaderboard.

As of March 2026, Anthropic (1,503), xAI (1,495), Google (1,494), OpenAI (1,481), Alibaba (1,449), and DeepSeek (1,424) all occupy the top tier of the Arena Elo ratings, shifting competitive pressure toward cost, reliability, and domain-specific performance.
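To make the Elo figures above concrete, here is a minimal sketch of the standard Elo expected-score and update formulas that Arena-style leaderboards adapt for aggregating pairwise human preference votes. The K-factor is an illustrative assumption, not the leaderboard's actual parameter:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that a player rated r_a beats a player rated r_b."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """Return new ratings after one comparison.

    score_a is 1.0 for a win by A, 0.5 for a tie, 0.0 for a loss.
    k (the K-factor) controls how fast ratings move; 32 is a common
    default, chosen here for illustration.
    """
    e_a = expected_score(r_a, r_b)
    return r_a + k * (score_a - e_a), r_b + k * ((1.0 - score_a) - (1.0 - e_a))

# A 22-point gap (1,503 vs. 1,481) implies a near coin flip head to head:
print(round(expected_score(1503, 1481), 3))  # → 0.532
```

The takeaway: at these rating distances, the expected win probability between top models is barely above 50%, which is why the report describes the leaders as clustered.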

3. The open model performance gap reopened in 2025 after briefly closing in 2024.

As of March 2026, the top closed model leads the top open model by 3.3%, up from 0.5% in August 2024. Six of the top ten models on the Arena Leaderboard are now closed.

4. The U.S.-China AI model performance gap has effectively closed.

U.S. and Chinese models have traded places at the top of performance rankings several times since early 2025. In February 2025, DeepSeek-R1 briefly matched the top U.S. model. As of March 2026, the top U.S. model leads by 2.7%, a gap that has fluctuated over the past year while remaining in the single digits.

5. The benchmarks used to measure AI progress face growing reliability and gaming concerns, with error rates up to 42% on widely used evaluations.

A review found invalid question rates ranging from 2% on MMLU Math to 42% on GSM8K. Separate research suggests that Arena leaderboard standing may partly reflect adaptation to the platform rather than general capability.

6. Video generation models are starting to capture how objects behave, not just produce realistic content.

Google DeepMind’s Veo 3, tested across more than 18,000 generated videos, demonstrated abilities like simulating buoyancy and solving mazes without being trained on those tasks.

Source: Video models are zero-shot learners and reasoners

7. AI models can win a gold medal at the International Mathematical Olympiad but still can’t reliably tell time, illustrating what researchers call jagged intelligence.

Gemini Deep Think scored 35 points (gold) at the 2025 IMO, working end to end in natural language within the 4.5-hour time limit, up from the 28-point silver achieved in 2024. On ClockBench, the top model read analog clocks correctly 50.1% of the time, compared with 90.1% for humans.

8. AI models are expanding into professional domains, with scores ranging from 60% to 90% on evaluations in tax, mortgage processing, corporate finance, and legal reasoning.

On each benchmark, the top 15 models are separated by as little as 3 percentage points. Domains like these, where high competency and reliability are required, remain a significant challenge for AI models.

9. AI agents advanced from answering questions to completing tasks in 2025, though they still fail roughly one in three attempts on structured benchmarks.

On OSWorld, which tests agents on computer tasks across operating systems, accuracy rose from roughly 12% to 66.3%, within 6 percentage points of human performance. 

10. Robots still fail at most household tasks, even as they excel in controlled environments.

Robots succeed in only 12% of real household tasks, highlighting how far AI is from mastering the physical world. On RLBench, robotic manipulation in software-based simulations has reached 89.4% success, but the gap between predictable lab settings and unpredictable household environments is wide.

11. Autonomous vehicles reached mass-scale deployment in 2025.

Waymo reached approximately 450,000 weekly trips across five U.S. cities. In China, Apollo Go completed 11 million fully driverless rides, a 175% year-over-year increase. European operators are active, but comparable deployment data is not publicly available, limiting the global picture. Deployments so far are concentrated in areas with generally favorable weather, and human operators remain available off-site to take over when necessary.

