
Peter Norvig: Today’s Most Pressing Questions in AI Are Human-Centered

The AI expert, who joins Stanford HAI as a Distinguished Education Fellow, discusses building inclusive education programs and broadening access for students.


AI expert Peter Norvig will join HAI to build out its education programs. | Christopher Michel

Artificial intelligence expert Peter Norvig is joining the Stanford Institute for Human-Centered AI this fall as a Distinguished Education Fellow, with the task of developing tools and materials to explain the key concepts of artificial intelligence. 

Norvig helped launch and build AI at organizations considered innovators in the field: As Google’s director of research, he oversaw the tech giant’s search algorithms and built the teams that focused on machine translation, speech recognition, and computer vision. At NASA Ames, his team created autonomous software that was the first to command a spacecraft, and served as a precursor to the current Mars rovers.

Norvig is also a well-known name in AI education. He co-wrote Artificial Intelligence: A Modern Approach, an introductory textbook used by some 1,500 universities worldwide, and he’s taught hundreds of thousands of students through his courses on online education platform Udacity. 

In this interview, he discusses his move to Stanford, building a human-focused AI curriculum, and broadening access to education. 

We noted the university brain drain in our last AI Index — technology academics leaving universities to join well-resourced companies with massive footprints. What spurred you to go the opposite route?

Throughout my career I’ve gone back and forth between the major top-level domains: .edu, .com, and .gov. After 20 years with one company and after 18 months stuck working from home, I thought it was a good time to try something new, and to concentrate on education.

What does human-centered AI look like in your experience?

One way to think of AI is as a process of optimization — finding the course of action, in an uncertain world, that will result in the maximum expected utility. In the past, the interesting questions were around what algorithm is best for doing this optimization. Now that we have a great set of algorithms and tools, the more pressing questions are human-centered: Exactly what do you want to optimize? Whose interests are you serving? Are you being fair to everyone? Is anyone being left out? Is the data you collected inclusive, or is it biased?
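To make the optimization framing concrete, here is a minimal, purely illustrative sketch of expected-utility maximization; the actions, probabilities, and utility values below are invented for this example and are not from Norvig's work or any specific system. Notice that which action "wins" depends entirely on the utility numbers someone chose to encode, which is exactly where the human-centered questions about what to optimize and whose interests count come in.

```python
# Illustrative sketch only: expected-utility maximization over a toy,
# hypothetical set of actions with uncertain outcomes.

def expected_utility(outcomes):
    """Sum of probability-weighted utilities for one action."""
    return sum(prob * utility for prob, utility in outcomes)

# Hypothetical example: each action maps to (probability, utility) pairs.
actions = {
    "route_a": [(0.9, 10), (0.1, -50)],  # usually good, small chance of a bad outcome
    "route_b": [(1.0, 5)],               # certain but modest payoff
}

# Choose the action with the highest expected utility.
best_action = max(actions, key=lambda a: expected_utility(actions[a]))
print(best_action, expected_utility(actions[best_action]))
# route_b 5.0  (route_a's expected utility is 0.9*10 + 0.1*(-50) = 4.0)
```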

One of HAI’s priorities is building diversity, equity, and inclusion education programs. What kinds of education programs are effective at improving representation in technical AI fields? 

I think there are three distinct problems. 

The first is building a pipeline of qualified people entering the field. This requires an effort to give the untapped population a feeling of belonging and welcome. I was fortunate to benefit from having mentors that not only modeled for me what it would be like to work in the tech field, but also made me think, “This is going to be fun — I want to spend my time being around cool people like this, and they seem to accept me.” I had these opportunities because I grew up in a family that valued education and lived in a university town. For those who didn’t, we need programs and policies to keep them in school, to train their teachers to be better mentors and more knowledgeable about STEM fields, and to give them a sense that there is a path open to them.

The second challenge is finding and evaluating people fairly when it comes to hiring. We see that many companies are broadening their approach, reaching out to more than just the top few schools in their recruiting, and considering applicants with more varied histories.

And third, we need to retain people once hired. You can’t fake it: If some people in a company are unwelcoming, unappreciative, and dismissive of the untapped population, they won’t stay. Companies need to train their employees to recognize the value that each of their colleagues brings.

Not everyone has access to classes at Stanford, UC Berkeley, or MIT. How do we broaden access to AI education?

I got involved in online education for just this reason: In 2010 Sebastian Thrun and I taught the intro AI class to Stanford students, and when in 2011 we were asked to teach it again, we thought that we should step up and try to reach a worldwide audience who couldn’t attend Stanford. In one sense this worked great, in that 100,000 students signed up and 16,000 completed the course. But in another sense the approach was still limited to a select group of highly self-motivated learners. The next challenge is to reach people who lack self-confidence, who don’t see themselves as capable of learning new things and being successful, who think of the tech world as being for others, not them. To do this takes more than just having great content in a course; we also need to foster a sense of community through peer-to-peer and mentor-to-learner relationships.

Today we see more and more programs teaching kids from kindergarten to grade 12 to code. Is this the right approach for grade school?

Learning to code is a useful skill. When I was in middle school, we didn’t have coding, but I was required to learn touch typing. That was also a useful skill. But learning to type well does not change the way you see the world, and by itself neither does learning the syntax of a programming language. The important part is what you do when you’re coding: moving past small rote-learning exercises to substantial multi-part projects; learning how to choose your own projects; learning to model some aspects of the world, make hypotheses, and test them; committing errors and correcting them without getting discouraged; working on a team; creating something useful that others will use, giving you pride of accomplishment. If you can do all that with coding, great. If you can do it with a no-code or low-code approach to technology, also great. If you can do it by sending kids out into nature to explore and do experiments on their own, equally great. 

What are working professionals missing in their AI education?

In AI education, teachers assign a simple, well-defined problem with a given dataset and a pre-defined objective. Students then see their job as building a model that maximizes the objective function. But in a real-world project, professionals need to define the objectives and collect or generate the data on their own. You don’t get credit for choosing an especially clever or mathematically sophisticated model; you get credit for solving problems for your users. 

You’ve been a leader at several top technology companies. What did you learn on the industry side that enriches your instruction as an educator?

I now have a feel for how large-scale problems are managed and solved in tech companies. I remember once talking to a friend in industry who had co-authored a book with an academic. I asked, “What was the hardest part about writing the book?” The answer was, “When my academic colleague wrote, ‘Big companies must do it this way’ and they were wrong. I had to subtly say, ‘No — guess again’ without revealing proprietary information.” For many such problems, I no longer need to guess.

What advice do you have for Stanford AI students? 

You’re in a great position where you are getting knowledge and experience that you can use to change the world. Make sure you change it for the better.

Learn more about Stanford HAI fellowship opportunities
