Introducing Stanford's Human-Centered AI Initiative

A common goal for the brightest minds from Stanford and beyond: putting humanity at the center of AI.

Humanity: The Next Frontier in AI

We have arrived at a truly historic turning point: Society is being reshaped by technology faster and more profoundly than ever before. Many are calling it the fourth industrial revolution, driven by technologies ranging from 5G wireless to 3D printing to the Internet of Things. But increasingly, the most disruptive changes can be traced to the emergence of Artificial Intelligence.

Many of these changes are inspiring. Machine translation is making it easier for ideas to cross language barriers; computer vision is making medical diagnoses more accurate; and driver-assist features have made cars safer. Other changes are more worrisome: Millions face job insecurity as automation rapidly evolves; AI-generated content makes it increasingly difficult to tell fact from fiction; and recent examples of bias in machine learning have shown us how easily our technology can amplify prejudice and inequality.

Like any powerful tool, AI promises risk and reward in equal measure. But unlike most “dual-use” technologies, such as nuclear energy and biotech, the development and use of AI is a decentralized, global phenomenon with a relatively low barrier to entry. We can’t control something so diffuse, but there is much we can do to guide it responsibly. This is why the next frontier in AI cannot simply be technological—it must be humanistic as well.

The Stanford Human-Centered AI Initiative (HAI)

Many causes warrant our concern, from climate change to poverty, but there is something especially salient about AI: Although the full scope of its impact is a matter of uncertainty, it remains well within our collective power to shape it. That’s why Stanford University is announcing a major new initiative to create an institute dedicated to guiding the future of AI. It will support the necessary breadth of research across disciplines; foster a global dialogue among academia, industry, government, and civil society; and encourage responsible leadership in all sectors. We call this perspective Human-Centered AI, and it flows from three simple but powerful ideas:

  1. For AI to better serve our needs, it must incorporate more of the versatility, nuance, and depth of the human intellect.
  2. The development of AI should be paired with an ongoing study of its impact on human society, and guided accordingly.
  3. The ultimate purpose of AI should be to enhance our humanity, not diminish or replace it.

Realizing these goals will be among the greatest challenges of our time. Each poses difficult technical problems and will provoke dialogue among engineers, social scientists, and humanists. But this raises important questions: Which problems are most pressing, who will solve them, and where will these dialogues take place?

Human-Centered AI requires a broad effort that taps expertise from an extraordinary range of disciplines, from neuroscience to ethics. Meeting this challenge means taking chances and exploring uncertain new terrain with no promise of a commercial product. It is far more than an engineering task.

The Essential Role of Academia

This is the domain of pure research. It’s the scientific freedom that allowed hundreds of universities to collaborate internationally to build the Large Hadron Collider—not to make our phones cheaper or our Wi-Fi faster, but to catch the first glimpse of the Higgs boson. It’s how we built the Hubble Telescope and mapped the human genome. Best of all, it’s inclusive; rather than compete for market share, it invites us to work together for the benefit of deeper understanding and knowledge that can be shared.


Even more important, academia is charged with educating the leaders and practitioners of tomorrow across a range of disciplines. The evolution of AI will be a multigenerational journey, and now is the time to instill human-centered values in the technologists, engineers, entrepreneurs, and policy makers who will chart its course in the years to come.

Why Stanford?

Realizing the goals of Human-Centered AI will require cooperation among academia, industry, and governments around the world. No single university will provide all the answers; no single company will define the standards; no single nation will control the technology.

Still, there is a need for a focal point, a center specifically devoted to the principles of Human-Centered AI, capable of rapidly advancing the research frontier and acting as a global clearinghouse for ideas from other universities, industries, and governments. We believe that Stanford is uniquely suited to play this role.

Stanford has been at the forefront of AI since John McCarthy founded the Stanford AI Lab (SAIL) in 1963. McCarthy coined the term “Artificial Intelligence” and set the agenda for much of the early work in the field. In the decades since, SAIL has served as the backdrop for many of AI’s greatest milestones, from pioneering work in expert systems to the first driverless car to navigate the 130-mile DARPA Grand Challenge. SAIL was the home of seminal work in computer vision and the birthplace of ImageNet, which demonstrated the transformative power of large-scale datasets on neural network algorithms. This tradition continues today, with active research by more than 100 doctoral students, as well as many master’s students and undergraduates. Research topics include computer vision, natural language processing, advanced robotics, and computational genomics.

But guiding the future of AI requires expertise far beyond engineering. In fact, the development of Human-Centered AI will draw on nearly every intellectual domain—and this is precisely what makes Stanford the ideal environment to enable it. The Stanford Law School, consistently regarded as one of the world’s most prestigious, brings top legal minds to the debate about the future of ethics and regulation in AI. Stanford’s social science and humanities departments, also among the strongest in the world, bring an understanding of the economic, sociological, political, and ethical implications of AI. Stanford’s Schools of Medicine, Education, and Business will help explore how intelligent machines can best serve the needs of patients, students, and industry. Stanford’s rich tradition of leadership across the disciplinary spectrum will allow us to chart the future of AI around human needs and interests.

Finally, Stanford’s location—both in the heart of Silicon Valley and on the Pacific Rim—places it in close proximity to many of the companies leading the commercial revolution in AI. With deeper roots in Silicon Valley than any other institution, Stanford can both learn from and share insights with the companies most capable of influencing that revolution.

With the Human-Centered AI Initiative, Stanford aspires to become home to a vibrant coalition of thinkers working together to make a greater impact than would be possible on their own. This effort will be organized around five interrelated goals:

  • Catalyze breakthrough, multidisciplinary research.
  • Foster a robust, global ecosystem.
  • Educate and train AI leaders in academia, industry, government, and civil society.
  • Promote real-world actions and policies.
  • And, perhaps most important, stimulate a global dialogue on Human-Centered AI.

In Closing

For decades AI was an academic niche. Then, over just a few years, it emerged as a powerful tool capable of reshaping entire industries. Now the time has come to transform it into something even greater: a force for good. With the right guidance, intelligent machines can bring life-saving diagnostics to the developing world, provide new educational opportunities in underserved communities, and even help us keep a more vigilant eye on the health of the environment. The Stanford Human-Centered AI Initiative is a large-scale effort to make these visions, and many more, a reality. We hope you’ll join us.