
HAI at Age 2: From Dinner Conversation to Global Institute

Stanford GSB Dean Jon Levin reflects on the creation of HAI as a multidisciplinary approach to a multilayered challenge.

Image: Two attendees sit near a HAI sign at a HAI event. (Photo: Ryan Zhang)

Just a little over two years ago, the idea of an institute focused on human-centered uses of artificial intelligence was a conversation at a dinner series, imagined by an exceptionally passionate group of Stanford faculty and others close to the university. Now the Stanford Institute for Human-Centered Artificial Intelligence is celebrating its second year of bridging multidisciplinary perspectives, funding important AI research, partnering with policymakers, and working with industry and nonprofits on the human impact of technology. (Learn more in HAI’s first annual report.)

Graduate School of Business Dean Jon Levin was one of the organization’s earliest supporters and helped make that dinner table conversation a reality. He worked on HAI’s long-range planning proposal and was part of the original oversight group of deans at launch. At his direction, the GSB also contributed to funding HAI’s first round of seed grants. 

Here Levin discusses why Stanford needs to focus on AI, what he hopes for HAI’s future, and why we need this multidisciplinary approach.

How did you get involved with HAI?

When Stanford initiated the long-range planning process, we discussed areas important enough to span the entire university. Fei-Fei Li lives near me, and she was coming over for tea in the evenings to talk about AI and also the rise of China. Meanwhile, I had dinner with King Philanthropies’ Bob King during the launch of Stanford SEED in Chennai, and he was both excited by AI and concerned about its societal implications, particularly for the labor force. Bob quickly assembled a group of interesting thinkers on the topic, and ultimately he and [Stanford trustee] Steve Denning hosted a series of dinners exploring what could be established uniquely at Stanford, drawing in a broad range of Stanford faculty along with leaders from industry, government, and the field at large. Eventually these internal and external forces came together, and that led to HAI’s launch.

Why did you see a need for this kind of institute?

When we started thinking about what would eventually become HAI, artificial intelligence was just getting onto people’s radar screens. The last decade has seen tremendous work in machine learning, neural networks, and computation with large datasets. Stanford faculty have been right at the forefront, consistent with our long history of leadership in AI. 

The recent breakthroughs triggered the realization, first in the tech industry and then more broadly, that AI was going to be huge. It would transform industries, and it had the potential to affect society in much broader ways, disrupting or even replacing many jobs. We’ve already seen a rise in inequality due to technological change that rewards high skills, and AI could trigger another wave of that. In worst-case scenarios, we could see persistent unemployment if people’s jobs are automated and we don’t create new ones. 

Beyond the labor force, we saw that the way we interact with information on the internet was changing. We noticed that how algorithms are optimized affects the kind of information we see. We saw concerns about bias and about how algorithms affect different populations differently.

It was clear that this was not a topic that should be left to AI researchers alone. It needed social scientists to think about implications and ethicists to think about ethical frameworks. It needed people with a business school perspective who could think about how businesses will make these decisions and deploy technology responsibly, as well as people in medicine and people in law who think about the regulatory aspects. Fei-Fei and John [Etchemendy] settled on the term “human-centered” AI to capture the idea that all aspects of humanity and society needed to be at the center of the AI discussion.

Then the question was, how could Stanford assume a leadership role? Stanford has a track record of success creating interdisciplinary institutes to bring people together across the campus. That helped shape the thinking for HAI.

Were there any early proof points?

One of the early activities during the planning process was to create a set of seed grants to offer to faculty interested in studying the development, application, or regulation of AI. The idea was partly to support research, but also partly to gauge faculty interest – who were the faculty, where were they across the university, what types of questions were they thinking about? Close to a hundred proposals came in, which exceeded all expectations. That was a strong indication that faculty were interested. The other thing that came across was the breadth of interest: The awardees were from all seven schools. That was just a really good signal that we were onto something.

What issues are of mutual research interest for HAI and the GSB?

AI, machine learning, and all the related technology tools are permeating business as well as research methods. In research, our faculty are using different types of machine learning, natural language processing, and other methods across the GSB’s disciplines, from our finance group to economics, marketing, and organizational behavior.

For businesses today, AI is simply becoming software. It’s embedded in so many things. If you’re running a transportation business, it’s embedded in your logistics. If you’re running an advertising business, it’s embedded in your digital strategy. If you’re running a website, it’s part of the way you structure the content. Whatever industry we look at, we can examine the effect of this technology on organizational design and strategy, on regulation, and on the role of organizations in society.

What do you want your MBA students to learn?

First, if we think of AI as a new suite of technological tools and an enabler for different business models, how does that change the set of skills you need to be an effective manager and organizational leader? Students need to understand how data gets used and deployed, and how business models can build on it. That has become as essential a skill as critical thinking.

Then there’s another piece: We’ve seen so clearly in Silicon Valley over the past few years how important it is to think beyond the horizon about the effects of deploying technology. It’s incumbent on us, particularly at Stanford, to educate students to think responsibly and ethically about technology. We need to reinforce the connection between the development and deployment of technology and the societal implications of that deployment.

What do you hope for the next two years from HAI?

I’m excited about how quickly HAI has grown from a dinner table idea to an institute already recognized globally. Fei-Fei and John and the faculty and staff at HAI have accomplished an amazing amount. I hope that over the next two years we continue the momentum, and I would love to see more of our GSB students and faculty get involved. We’ve had some wonderful joint programs – we currently run an executive program together – and I would like to see more of those collaborations.

One of the aspirations was not only to support our own community but to have national impact on the way the country thinks about, supports, and regulates AI. A great example is the bill that passed recently to create a national cloud computing infrastructure – John and Fei-Fei and members of the HAI advisory board were instrumental in that discussion. I think there’s an opportunity now, particularly with the elevation of science under the new federal administration, for HAI to deliver on that promise in bigger ways. 

