2023 AI Index: A Year of Technical Achievement, Newfound Public Scrutiny

Date: April 03, 2023
Topics: Economy, Markets; Education, Skills; Machine Learning

The latest report highlights benchmark saturation, new legislation, and scientific impact.

AI has reached new and impressive technical capabilities and is starting to be incorporated into everyday life, according to the 2023 AI Index, an annual study of AI trends from the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The technology has surpassed many benchmarks, leading researchers to reevaluate how it should be tested and forcing the broader public to think more critically about its ethical challenges.

The AI Index, led by an independent and interdisciplinary group of AI leaders from across academia and industry, is one of the most comprehensive reports on the impact and progress of AI. It tracks and evaluates AI progress from a wide range of perspectives, looking at trends in research and development, technical performance, ethics, economics, policy, public opinion, and education. The report helps ground the AI conversation in data, enabling decision-makers to take meaningful action to advance AI in responsible and ethical ways.

The new report shows several key trends in 2022:

  • AI continued to post state-of-the-art results on many benchmarks, but year-over-year improvements on several were marginal. Moreover, benchmarks are reaching saturation faster than before. Many traditional benchmarks used to gauge AI progress, such as ImageNet and SQuAD, no longer seem sufficient. New, more comprehensive benchmarking suites such as BIG-bench and HELM were released to challenge these increasingly capable AI systems.

  • Generative models such as DALL-E 2, Stable Diffusion, and ChatGPT became part of the zeitgeist. These models showed impressive capabilities but raised new ethical concerns: text-to-image generators are routinely biased along gender dimensions, and chatbots like ChatGPT can deliver misinformation or be used for nefarious purposes.

  • Large language models, which have driven much recent AI progress, are getting bigger and more expensive. For example, PaLM, one of the flagship models released in 2022, cost 160 times more and was 360 times larger than GPT-2, one of the first large language models, launched in 2019 (a quick check of the size ratio follows this list).

  • AI is helping to accelerate scientific progress. In 2022, AI models were used to control hydrogen fusion, improve the efficiency of matrix manipulation, and generate new antibodies. AI has also started building better AI: Nvidia used a reinforcement learning agent to improve the design of the chips that power AI systems, and Google recently used one of its large language models, PaLM, to suggest ways to improve the very same model.
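
Following up on the model-size comparison two points above, here is a minimal back-of-the-envelope check of the "360 times larger" figure, assuming the commonly reported parameter counts of roughly 1.5 billion for GPT-2 and 540 billion for PaLM (numbers not stated in this article):

    # Rough sanity check of the size ratio cited above.
    # Assumed parameter counts (public figures, not from this article):
    gpt2_params = 1.5e9    # GPT-2 (2019)
    palm_params = 540e9    # PaLM (2022)

    size_ratio = palm_params / gpt2_params
    print(f"PaLM is roughly {size_ratio:.0f}x the size of GPT-2")  # -> 360x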

AI’s impressive technical progress has captured the attention of policymakers, industry leaders, and the public alike, although 2022 was the first year in a decade in which AI investment levels cooled. More specifically:

  • An analysis of the legislative proceedings of 127 countries showed that the number of bills containing “artificial intelligence” passed into law grew from just 1 in 2016 to 37 in 2022. These laws ranged from mitigating the risks of AI-led automation to using AI for weather forecasting. 

  • The proportion of companies adopting AI has plateaued over the past few years; however, the companies that have adopted AI continue to pull ahead. Companies that have embedded AI into their business offerings have realized both cost decreases and revenue increases. The AI capabilities most likely to be embedded by businesses are robotic process automation, computer vision, and virtual agents. 

  • AI-related public opinion varies greatly by country. Chinese citizens feel much more positively about the benefits of AI products and services than Americans. Americans are excited about AI’s potential to make society better, save time, and improve efficiency but are concerned about labor automation, surveillance, and decreases in human connection. 

  • For the first time in the last decade, year-over-year private investment in AI decreased. Global AI private investment was $91.9 billion in 2022, a 26.7% decrease from 2021. The total number of AI-related funding events as well as the number of newly funded AI companies likewise decreased. Still, AI private investment was 18 times greater than in 2013. 
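
As a rough illustration of the figures in the last point, the quoted ratios imply the following prior-year levels; the 2021 and 2013 numbers below are back-calculated here, not quoted from the report:

    # Implied investment levels derived from the ratios quoted above.
    investment_2022 = 91.9     # global private AI investment in 2022, in billions of dollars
    yoy_decrease = 0.267       # 26.7% drop from 2021

    implied_2021 = investment_2022 / (1 - yoy_decrease)   # ~ $125.4B
    implied_2013 = investment_2022 / 18                    # "18 times greater than in 2013" -> ~ $5.1B

    print(f"Implied 2021 investment: ~${implied_2021:.1f}B")
    print(f"Implied 2013 investment: ~${implied_2013:.1f}B")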

“We are in a time of enormous excitement – even hype – around AI,” said Katrina Ligett, professor in the School of Computer Science and Engineering at the Hebrew University and a member of the AI Index Steering Committee. “This makes it all the more important that information like that contained in the AI Index is available to decision-makers and to the general public, to allow us to ground more debates in facts, and to highlight the areas where data about AI and its reach and impacts is not available.”

The AI Index collaborates with many organizations to track progress in artificial intelligence, including the Center for Security and Emerging Technology at Georgetown University, LinkedIn, NetBase Quid, Lightcast, and McKinsey. The 2023 report also features more data and analysis original to the AI Index team than ever before. This year’s report includes new analysis of foundation models (including their countries of origin and training costs), the environmental impact of AI systems, K-12 AI education, and public opinion trends in AI. The AI Index also broadened its tracking of global AI legislation from 25 countries in 2022 to 127 in 2023.

Stanford HAI’s mission is to advance AI research, education, policy, and practice to improve the human condition.

Authors
  • Nestor Maslej
