What’s Next in Artificial Intelligence? Three Key Directions

The HAI Spring Conference convenes experts in foundation models, the simulated world, and accountable AI to discuss the future of AI.

After a long winter, the artificial intelligence field has seen a resurgence in the past 15 years as computing power increased and vast amounts of digital data became available. In the past few years alone, giant language models have advanced so quickly that they outpace benchmarks, computer vision capabilities have taken self-driving cars from the lab to the street, and generative models have tested democracies during major elections.

But in parallel with this technology’s rapid rise has come its potential for massive harm; technologists, activists, and academics alike have called for better regulation and a clearer understanding of its impact.

This spring, the Stanford Institute for Human-Centered AI (HAI) will address three of the most critical areas of artificial intelligence at a one-day conference that is free and open to all:

  • Foundation models: The giant models trained on broad data that can be adapted to a wide range of downstream tasks (see the sketch after this list).
  • Physical/simulated world: How can we use simulated worlds to train embodied, grounded AI models, and how can what is learned in simulation be transferred to the physical world?
  • Accountable AI: As AI interacts with individuals and societies, how can we make its decisions interpretable, and how can we use AI in a manner consistent with privacy needs?
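
To make the idea of adapting a foundation model concrete, here is a minimal sketch of fine-tuning a pre-trained language model for a toy sentiment task. It is an illustration rather than material from the conference: it assumes the Hugging Face transformers library and PyTorch, and the model name, data, and labels are purely illustrative.

  # Minimal sketch of the "adaptation" step: take a model pre-trained on broad
  # data and fine-tune it for one downstream task (here, binary sentiment).
  # Illustrative only: the model name and toy data are stand-ins.
  import torch
  from transformers import AutoModelForSequenceClassification, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
  model = AutoModelForSequenceClassification.from_pretrained(
      "bert-base-uncased", num_labels=2  # pre-trained encoder + new task head
  )

  # A toy labeled dataset standing in for a real downstream corpus.
  texts = ["A wonderful, moving film.", "Dull and far too long."]
  labels = torch.tensor([1, 0])  # 1 = positive, 0 = negative

  batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
  optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

  model.train()
  for _ in range(3):  # a few gradient steps; real fine-tuning runs far longer
      loss = model(**batch, labels=labels).loss
      loss.backward()
      optimizer.step()
      optimizer.zero_grad()

The pattern this sketch shows, one broadly trained model reused across many downstream tasks, is what makes such models “foundational.”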

Stanford HAI Associate Director and linguistics and computer science professor Christopher Manning, who will co-host the event with HAI Denning Co-director and computer science professor Fei-Fei Li, explains what this conference will cover and who should attend.

This conference will look at key advances in AI. Why are we focusing on foundation models, accountable AI, and embodied AI? What makes these the areas where you expect major growth?

An enormous amount of work is going on in AI in many directions. For a one-day event, we wanted to focus on a small number of areas that we felt were key to where the most important and exciting research might appear this decade. We ended up focusing on three. First, there has been enormous excitement and investment around the development of large pre-trained language models and their generalization to multiple data modalities, models that we have named foundation models. Second, there has been an exciting resurgence of work linking AI and robotics, often enabled by the use of simulated worlds, which allow the exploration of embodied AI and grounding. Finally, the increasing concerns about understanding AI decisions and maintaining data privacy in part demand societal and regulatory solutions, but they are also an opportunity for technical advances: how to produce interpretable AI systems, or systems that still work effectively on data that has been obscured for privacy reasons.

Who are you excited to hear from?

Ilya Sutskever has been one of the people at the heart of the resurgence of deep learning-based AI, starting from his breakthrough work on the computer vision system AlexNet with Geoff Hinton in 2012. His impact has grown since he became the chief scientist of OpenAI, which, among other things, has led the development of foundation models. I’m looking forward to hearing more about their latest models, such as InstructGPT, and what he sees lying ahead.

The recent successes in AI would not have been possible without the amazing breakthroughs in parallel computing largely led by NVIDIA. Bill Dally is a leader in computer architecture and, for the last decade, has been the chief scientist at NVIDIA. He can give us powerful insights into recent and future advances in parallel computing via GPUs, as well as into the broader range of vision, virtual reality, and other AI research going on at NVIDIA.

And Hima Lakkaraju is a trailblazing Harvard professor developing new strands of work in trustworthy and interpretable machine learning. When AI models are used in high-stakes settings, people generally want accurate and reliable explanations of why the systems make certain decisions. One exciting direction in Hima’s work is developing formal Bayesian models that can give reliable explanations.

Who should attend this conference?

Through a combination of short talks and panel discussions, we’re trying to strike a balance between technical depth and accessibility. On the one hand, this conference should be of interest to anyone working in AI as a student, researcher, or developer; beyond that, we hope to convey some of the excitement, results, and progress in these areas to anybody with an interest in AI, whether as a scientist, decision maker, or concerned citizen.

What do you hope your audience will take away from this experience? 

I hope the audience will get a deeper understanding of how AI has been able to advance so quickly in the last 15 years, where it might go next, and what we should and shouldn’t worry about. I hope people will come away with an appreciation of the awesome powers of the huge new foundation models being built. But equally, I hope they will see why building a model from mountains of digital data is not sufficient, and why we want to explore embodied AI models that can learn in a physical or simulated world, more the way babies learn. And finally, we will see how much exciting technical work is now underway to address the worries and downsides of AI that have been so prominently covered in the media in recent years.

Interested in attending the 2022 HAI Spring Conference? Learn more or register.
