What to Expect in 2023 in AI

December 14, 2022

HAI faculty share their predictions for the coming year.

This year’s biggest headline might have been generative AI, but what should we expect from the field in 2023? Four Stanford HAI faculty members describe what they expect the biggest advances, opportunities, and challenges will be for the coming year. 

Better Foundation Models

Foundation models – giant models that can be used for a variety of downstream tasks without additional training – have been seeing huge progress, and that will only improve next year, says Chris Manning, the Thomas M. Siebel Professor in Machine Learning in the School of Engineering, professor of linguistics and of computer science, director of the Stanford Artificial Intelligence Laboratory, and associate director of Stanford HAI. He expects to see improvements in data and data curation – “not just bigger data collections, but large efforts into improving the quality of the data and cleaning out toxic or biased information that comes from random trawls of the web.”

One area he expects to see growth: sparse models. Rather than activating every parameter for every input, a sparse model activates only the subset relevant to a given input, which makes it faster to compute and cheaper in memory than a comparably sized dense model.

“Generally, I expect to see algorithmic advances that let you have more scale,” he says. 
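The intuition behind sparsity can be illustrated with a toy mixture-of-experts routing layer, a common form of sparse model. This is a hypothetical NumPy sketch, not code from any Stanford system: each input row is routed to a single "expert" weight matrix, so most of the layer's parameters are never touched on a given forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse layer: 4 expert weight matrices, but each input row is
# routed to only ONE of them (top-1 gating), so roughly 3/4 of the
# parameters sit idle on any given forward pass.
n_experts, d_in, d_out = 4, 8, 8
experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]
gate = rng.normal(size=(d_in, n_experts))  # router weights

def sparse_forward(x):
    """Route each row of x to its highest-scoring expert."""
    scores = x @ gate              # (batch, n_experts) gating scores
    chosen = scores.argmax(axis=1) # one expert index per row
    out = np.empty((x.shape[0], d_out))
    for e in range(n_experts):
        mask = chosen == e
        if mask.any():             # skip experts no row was routed to
            out[mask] = x[mask] @ experts[e]
    return out, chosen

x = rng.normal(size=(5, d_in))
y, chosen = sparse_forward(x)
print(y.shape, chosen)
```

Because compute per input scales with the size of one expert rather than the whole model, total parameter count can grow without a proportional increase in per-input cost, which is the "more scale" Manning alludes to.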

Video’s Generative Moment 

While text and image generative AI was this year’s big story, video will be a big focus in 2023, says Percy Liang, associate professor of computer science and director of Stanford HAI’s Center for Research on Foundation Models. “Capturing long-range dependencies is challenging, but technology will continue to get better, at least with shorter videos to start,” he says. “We may be getting to a point next year where we won’t be able to distinguish whether a human or computer generated a video. Up to today, if you watch a video, you expect it to be real, but we’re seeing that hard line start to evaporate.”

Changing Ecosystem, More Government Funding

What does a healthy AI field look like? Fei-Fei Li, the Sequoia Capital Professor at Stanford University, professor of computer science, and co-director of Stanford HAI, notes that too many startups are still depending on the stability of open models, unable to develop their own. But with the major attention on foundation models this year and venture money flowing, she expects to see more players come to the field in 2023.

Compute and data are bottlenecks for startups, though, so the federal government may step up investment in compute resources like a National Research Cloud or a Multilateral AI Research Institute. “There’s concern that startups, which would make the ecosystem more vibrant and diverse, aren’t getting enough resources,” she says.

Immature AI Proliferates

2023 will see a “shocking rollout of AI way before it’s mature or ready to go,” says Russ Altman, the Kenneth Fong Professor in the School of Engineering; professor of bioengineering, of genetics, of medicine, and of biomedical data science; and associate director of Stanford HAI. “I’m worried that our current government paralysis is not going to move forward on any kind of meaningful regulation, and some areas certainly need regulation.” 

He points to the recent proposal in San Francisco to allow police to deploy potentially lethal remote-controlled robots or the potential misuse of tools that can generate human-like text from a short prompt – think how many smart fifth-graders could skip an essay assignment by asking an agent for help, he says.

For 2023, “I expect a hit parade of AI that’s not ready for prime time but coming out because it’ll be driven by over-zealous industry,” Altman says. “In some ways, it will make the whole mission of HAI more relevant and critical.”

Stanford HAI's mission is to advance AI research, education, policy, and practice to improve the human condition.

Authors
  • Shana Lynch
