
Opening the Gate

Stanford’s new Institute for Human-Centered Artificial Intelligence aims to fundamentally change the field of AI by integrating a wide range of disciplines and prioritizing true diversity of thought.

It all started in Fei-Fei’s driveway.

It was the summer of 2016.

“John,” she said, “as Stanford’s provost, you’ve led an effort to draw an arrow from technology to the humanities, to help humanists innovate their methodology.”

“It’s time to build another arrow coming back the other direction. It should become a complete feedback loop. We need to bring the humanities and social thinking into tech.”

She went on to explain an epiphany she had recently had — a problem she could no longer ignore. The people building the future all seemed to come from similar backgrounds: math, computer science and engineering. There were not enough philosophers, historians or behavioral scientists influencing new technology. There were very few women or people from underrepresented groups. “The way we educate and promote technology is not inspiring to enough people. So much of the discussion about AI is focused narrowly around engineering and algorithms,” she said. “We need a broader discussion: something deeper, something linked to our collective future. And even more importantly, that broader discussion and mindset will bring us a much more human-centered technology to make life better for everyone.”

Standing in Fei-Fei’s driveway, John saw the vision clearly. As a mathematical logician, he had been actively following the progress of AI for decades; as a philosopher, he understood the importance of the humanities as a guide to what we create. It was obvious that not only would AI be foundational to the future — its development was suddenly, drastically accelerating.

If guided properly, AI could have a profound, positive impact on people’s lives: It could help mitigate the effects of climate change; aid in the prevention and early detection of disease; make it possible to deliver quality medical care to more people; help us find ways to provide better access to clean water and healthy food; contribute to the development of personalized education; help billions of people out of poverty and help solve many other challenges we face as a society.

We believe AI can and should be collaborative, augmentative, and enhancing to human productivity and the quality of our work and life.

But AI could also exacerbate existing problems, such as income inequality and systemic bias. In the past couple of years, the tech industry has struggled through a dark time. Multiple companies violated the trust and privacy of their customers, communities and employees. Others released products into the world that were not properly safety tested. Some applications of AI turned out to be biased against women and people of color. Still more led to other harmful unintended consequences. Some hoped the technology would replace human workers, not seeing the opportunity to augment them.

That day began a conversation that continued over many months. We discovered that we both had been on a similar quest throughout our careers: to discover how the mind works — Fei-Fei from the perspective of cognitive science and AI, and John from the perspective of philosophy.

Meanwhile, Fei-Fei took off for a sabbatical at Google, where she became Chief Scientist of AI at Google Cloud. During her time there, she saw the massive investments the technology industry was making in AI and worked with customers across nearly every industry who were in great need of digital and AI transformation. She became even more committed to the idea of creating a human-centered AI institute at Stanford.

Our Mission is to advance AI research, education, policy and practice to improve the human condition.

In 2017, Fei-Fei began discussing the future of AI with Marc Tessier-Lavigne, the university’s new president and a neuroscientist. She brought in Stanford Computer Science Professors James Landay, who specializes in human/computer interaction, and Chris Manning, who specializes in machine learning and linguistics, to further develop the idea. When John stepped down as Provost in 2017, Fei-Fei asked him to co-direct the undertaking. Together they brought in Russ Altman, a Stanford Professor of Bioengineering and Data Science; Susan Athey, an Economics of Technology Professor at Stanford Graduate School of Business; Surya Ganguli, a Stanford Professor of Applied Physics and Neurobiology; and Rob Reich, a Stanford Professor of Political Science and Philosophy. Encouraged by Stanford’s school deans, especially Jon Levin, Jennifer Widom and Debra Satz (Business, Engineering and Humanities and Sciences), the new team evangelized the idea with colleagues and friends. Soon dozens of accomplished faculty members were contributing their perspectives.

Nearly three years and many deep conversations later, we are humbled and proud to announce the official launch of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

At HAI our work is guided by three principles: that we must study and forecast AI’s Human impact, and guide its development in light of that impact; that AI applications should Augment human capabilities, not replace humans; and that we must develop Intelligence as subtle and nuanced as human intelligence.

Our aim is for Stanford HAI to become an interdisciplinary, global hub for AI learners, researchers, developers, builders and users from academia, government and industry, as well as leaders and policymakers who want to understand and influence AI’s impact and potential.

These principles extend the discipline of AI far beyond the confines of engineering. Understanding its impact requires expertise from the humanities and social sciences; mitigating that impact demands insights from economics and education; and guiding it requires scholars of law, policy and ethics. Likewise, designing applications to augment human capacities calls for collaborations that reach from engineering to medicine to the arts and design. And creating intelligence with the flexibility, nuance and depth of our own will require inspiration from neuroscience, psychology and cognitive science.

Stanford HAI leverages the university’s remarkable strengths across virtually every discipline, starting from computer science and AI, but reaching to business, education and law; to economics, sociology and history; to medicine, neuroscience and the biosciences; to philosophy, literature and the arts. Our work will touch every school and every institute across the entire university — and will reach far beyond the borders of our campus.

We intend to achieve this by pursuing four related goals. The first is to catalyze breakthrough multidisciplinary research guided and inspired by the human-centered principles articulated above. The second is to foster a robust global ecosystem around the human-centered perspective, through partnerships with like-minded institutions, regular international symposia, and active outreach. Third, we are launching an ambitious educational program with offerings tailored to executives, policymakers, attorneys, journalists and other professionals, as well as programs that aim to diversify the AI workforce of tomorrow. Finally, and most importantly, we aim to promote real-world action by hosting policy summits and forums focused on the most pressing issues in AI, bringing together key participants from industry, government, academia and civil society.

Although Stanford HAI’s formal launch is March 18, 2019, we have already made significant progress on each of these goals. Our faculty have initiated roughly 50 HAI-funded research projects on a wide variety of topics. Examples include:

  • Research bridging AI and neuroscience focusing on sensory feedback, biological efficiency and memory

  • Studies of the legal and regulatory implications of a world permeated by AI  

  • Studies of the economic implications of AI and the future of work

  • Research on how AI can help usher in an age of personalized learning

  • Understanding how the diffusion of AI into society has begun to disrupt trust

  • Research on how to detect and correct gender and ethnic bias in AI algorithms

  • Prototyping AI-assisted systems to improve the delivery of healthcare in intensive care units

  • Studying the downstream effects of AI on how clinicians make decisions

  • Improving refugee integration through data-driven algorithmic assignment

  • Developing new machine learning methods that are more data-efficient, generalizable, robust and interpretable

  • Studying the impact of autonomous vehicles on society

Stanford HAI has sponsored multiple symposia bringing together experts on topics including the Future of Work and on AI, Humanities and the Arts. This summer we will launch our first Executive Education program in partnership with the Graduate School of Business and our first Congressional Bootcamp in partnership with the Freeman-Spogli Institute for International Studies. We are also sponsoring a summer AI research internship program for “graduates” of the AI4All diversity education program, to enable these young people to maintain their interests and hone their skills.

We are now selecting finalists for three fellowship programs that we have launched in partnership with other Stanford units: HAI Ethics Fellows (with the McCoy Family Center for Ethics in Society), HAI Journalism Fellows (with the John S. Knight Journalism Fellows Program), and HAI International Security Fellows (with the Center for International Security and Cooperation). The winners of these fellowships will join the HAI community in the Fall. We will also soon be announcing the appointments of our first HAI Journalist-in-Residence and our first HAI Artist-in-Residence, both of whom will join us this Fall, as well as our first class of HAI Engineering Fellows.

 

***

It has been quite a journey since that first meeting in Fei-Fei’s driveway three years ago. That meeting happened because John was buying a house adjoining Fei-Fei’s back yard. We became neighbors, and soon after John moved in, he replaced the old fence between our houses. In one spot, connecting Fei-Fei’s back yard to John’s side yard, he built a gate. Little did we know how well-trodden the path between our houses would become thanks to HAI! We hope Stanford HAI will provide the same opportunity for partnership, trust and collaboration to many, many others. Please join us in our quest to improve the human condition through Human-centered AI.  

***

Image: John Etchemendy and Fei-Fei Li
