
What’s Next in Artificial Intelligence? Three Key Directions

Date
March 11, 2022
Topics
Machine Learning

The HAI Spring Conference convenes experts in foundation models, the simulated world, and accountable AI to discuss the future of AI.

After a long winter, the artificial intelligence field has seen a resurgence over the past 15 years as computing power increased and vast amounts of digital data became available. In the past few years alone, giant language models have advanced quickly enough to outpace benchmarks, computer vision capabilities have taken self-driving cars from the lab to the street, and generative models have tested democracies during major elections.

But parallel to this technology’s rapid rise is its potential for massive harm, and technologists, activists, and academics alike have begun calling for better regulation and a deeper understanding of its impact.

This spring, the Stanford Institute for Human-Centered AI (HAI) will address three of the most critical areas of artificial intelligence during a one-day conference that is free and open to all:

  • Foundation models: The giant models trained on broad data that can be adapted to a wide range of downstream tasks (a brief adaptation sketch follows this list).

  • Physical/simulated world: How can we use simulated worlds to enable the training of embodied, grounded AI models, and how can what is learned in simulation be transferred to the physical world?

  • Accountable AI: As AI interacts with individuals and societies, how can we make its decisions interpretable, and how can we use AI in a manner consistent with privacy needs?
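To make the first area concrete, here is a minimal sketch of what "adapting a foundation model to a downstream task" can look like in practice: a pretrained language model is reused as-is, and only a small classification head is trained on a handful of labeled examples. This assumes the Hugging Face transformers and PyTorch libraries; the checkpoint name and toy data are illustrative, not anything specific to the conference.

```python
# Minimal sketch of adapting a pretrained (foundation) model to a downstream task.
# Assumes the `transformers` and `torch` packages; model name and data are toy choices.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # any pretrained checkpoint would do here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tiny labeled dataset for the downstream task (sentiment, as an example).
texts = ["A wonderful, thoughtful film.", "Dull and far too long."]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few gradient steps stand in for real fine-tuning
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```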

Stanford HAI Associate Director and linguistics and computer science professor Christopher Manning, who will co-host the event with HAI Denning Co-Director and computer science professor Fei-Fei Li, explains what this conference will cover and who should attend.

This conference will look at key advances in AI. Why are we focusing on foundation models, accountable AI, and embodied AI? What makes these the areas where you expect major growth?

An enormous amount of work is going on in AI in many directions. For a one-day event, we wanted to focus on a small number of areas that we felt were key to where the most important and exciting research might appear this decade. We ended up focusing on three. First, there has been enormous excitement and investment around the development of large pre-trained language models and their generalization to multiple data modalities, which we have named foundation models. Second, there has been an exciting resurgence of work linking AI and robotics, often enabled by the use of simulated worlds, which allow the exploration of embodied AI and grounding. Finally, the increasing concerns about understanding AI decisions and maintaining data privacy in part demand societal and regulatory solutions, but they are also an opportunity for technical AI advances in how to produce interpretable AI systems or systems that still work effectively on data that is obscured for privacy reasons.
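As one generic illustration of working "effectively on data that is obscured for privacy reasons," a common technique is differential privacy: release aggregate statistics with calibrated noise rather than raw records. The sketch below is offered only as an example of the idea, not as a method attributed to the conference speakers; the data and the privacy budget are made up.

```python
# Toy sketch of a differentially private count query (Laplace mechanism).
# Data, sensitivity, and epsilon are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
ages = np.array([34, 29, 41, 55, 38, 47])   # toy "private" data
sensitivity = 1.0                            # one person changes the count by at most 1
epsilon = 0.5                                # privacy budget (smaller = more private)

true_count = np.sum(ages > 40)
noisy_count = true_count + rng.laplace(scale=sensitivity / epsilon)

print(f"true count: {true_count}, privately released count: {noisy_count:.1f}")
```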

Who are you excited to hear from?

Ilya Sutskever has been one of the central people at the heart of the resurgence of deep learning-based AI, starting from his breakthrough work on the computer vision system AlexNet with Geoff Hinton in 2012. His impact has grown since he became the chief scientist of OpenAI, which among other things has led the development of foundation models. I’m looking forward to hearing more about their latest models, such as InstructGPT, and what he sees lying ahead.

The recent successes in AI just would not have been possible without the amazing breakthroughs in parallel computing largely led by NVIDIA. Bill Dally is a leader in computer architecture, and, for the last decade, he has been the chief scientist at NVIDIA. He can give us powerful insights into recent and future advances in parallel computing via GPUs, as well as into the broader range of vision, virtual reality, and other AI research going on at NVIDIA.

And Hima Lakkaraju is a trailblazing Harvard professor developing new strands of work in trustworthy and interpretable machine learning. When AI models are used in high-stakes settings, people often want accurate and reliable explanations of why the systems make certain decisions. One exciting direction in Hima’s work is developing formal Bayesian models that can give reliable explanations.
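For readers unfamiliar with post-hoc explanation methods, the toy sketch below shows one generic way of asking why a model makes its decisions: permutation importance measures how much performance drops when each input feature is shuffled. It is only an illustration of the genre, not the formal Bayesian approach from Lakkaraju’s work; the data and feature names are synthetic.

```python
# Toy illustration of a post-hoc explanation: permutation feature importance.
# Synthetic data; not a method from the conference itself.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # pretend features: [income, debt, noise]
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, imp in zip(["income", "debt", "noise"], result.importances_mean):
    print(f"{name}: importance {imp:.3f}")
```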

Who should attend this conference?

Through a combination of short talks and panel discussions, we’re trying to strike a balance between technical depth and accessibility. So, on the one hand, this conference should be of interest to anyone working in AI as a student, researcher, or developer; beyond that, we hope to convey some of the excitement, results, and progress in these areas to anybody with an interest in AI, whether as a scientist, decision maker, or concerned citizen.

What do you hope your audience will take away from this experience? 

I hope the audience will get a deeper understanding of how AI has been able to advance so quickly in the last 15 years, where it might go next, and what we should and shouldn’t worry about. I hope people will take away a sense of the awesome powers of the huge new foundation models that are being built, but equally see why building a model from mountains of digital data is not sufficient, and why we want to explore embodied AI models that can learn in a physical or simulated world, more the way babies learn. And finally, we will see how much exciting technical work is now underway to address the worries and downsides of AI that have been so prominently covered in the media in recent years.

Interested in attending the 2022 HAI Spring Conference? Learn more or register.

Authors
  • Shana Lynch

Related News

AI Leaders Discuss How To Foster Responsible Innovation At TIME100 Roundtable In Davos
TIME
Jan 21, 2026
Media Mention

HAI Senior Fellow Yejin Choi discussed responsible AI model training at Davos, asking, “What if there could be an alternative form of intelligence that really learns … morals, human values from the get-go, as opposed to just training LLMs on the entirety of the internet, which actually includes the worst part of humanity, and then we then try to patch things up by doing ‘alignment’?” 

Stanford’s Yejin Choi & Axios’ Ina Fried
Axios
Jan 19, 2026
Media Mention

Axios chief technology correspondent Ina Fried speaks to HAI Senior Fellow Yejin Choi at Axios House in Davos during the World Economic Forum.

Spatial Intelligence Is AI’s Next Frontier
TIME
Dec 11, 2025
Media Mention

"This is AI’s next frontier, and why 2025 was such a pivotal year," writes HAI Co-Director Fei-Fei Li.
