
Stanford HAI at Five: Pioneering the Future of Human-Centered AI

In its fifth year, HAI catalyzed a multidisciplinary community of researchers, industry, policymakers, and civil society to ensure artificial intelligence prioritizes humans.

Stanford HAI co-founders John Etchemendy, Chris Manning, Fei-Fei Li, and James Landay

During her sabbatical at Google as Cloud AI chief scientist, Fei-Fei Li witnessed the rapid integration of artificial intelligence into industry: From Japanese cucumber farmers to insurance firms and energy conglomerates, AI was reshaping traditional practices.

Recognizing the profound impact of AI, Li envisioned a future where technology serves humanity with fairness and dignity. With Stanford's rich history in AI research and its leadership in both technology and humanities, she knew she could spearhead a new effort here.

Now entering its fifth year, the Stanford Institute for Human-Centered Artificial Intelligence (HAI) has made significant strides toward Li’s ambitious vision. By fostering a diverse interdisciplinary research community, channeling substantial funding into AI research, establishing specialized centers, and collaborating with policymakers and industry leaders, HAI has positioned itself as a trailblazer in shaping the ethical and inclusive development of AI.

“HAI was the first institution in the public sector that was set up devoted not only to the innovation of this cutting-edge technology, but also to engaging policy, industry, and civil society to ensure this technology is developed with humans at the center,” Li said.

“We’ve taken major strides in policy and thought leadership,” said Co-director John Etchemendy, former provost and one of HAI’s original founders, “and we have big steps ahead.”

Novel Research

Stanford HAI has funneled more than $40 million into human-centered AI research, supporting over 300 Stanford scholars across disciplines. 

Those researchers built AI teaching assistants, partnered with governments to prevent human trafficking, improved refugee resettlement through machine learning, and developed more sustainable ways for mining companies to find minerals.

“Our research is already having real-world impact,” said HAI Research Programs Director Vanessa Parli. “From robotics to healthcare, technical algorithms, cognitive science, applications in social media, education, and more, we’re pushing the edges of what AI can do while keeping humans at the forefront.”

Interdisciplinary Education

In addition to financial support, Stanford HAI has prioritized building a multidisciplinary education community. Initiatives such as a human-centered AI track in the Symbolic Systems major, student affinity groups, a graduate fellows program, and tech and policy fellowships aim to nurture the next generation of AI leaders. Workshops and conferences draw multidisciplinary faculty and build diverse new research teams.

“We saw that AI would be a technology that would affect all areas,” said Associate Director Chris Manning, who helped develop the early concept of Stanford HAI. “It would affect medicine, law, business, economics. There are lots of ethical and other philosophical issues. We needed a ‘whole of the university’ approach. Stanford HAI has really enabled a lot of those broader collaborations across the university.”

Forefront of Major AI Shifts

A timeline of key milestones at HAI from 2019 to 2023

Recognizing emerging trends in AI, Stanford HAI swiftly launched two new centers: the Stanford Digital Economy Lab (DEL) and the Center for Research on Foundation Models (CRFM).

DEL explores AI’s impact on the economy and nature of work. As AI’s capabilities exponentially grow, the human side — skills, organizations, institutions — hasn’t kept pace, said DEL Director Erik Brynjolfsson: “In that growing gap is where I think a large fraction of our society's challenges and opportunities lie. DEL is focused on addressing that gap.”

Launched in 2020, DEL studies AI’s productivity implications and what they mean for growth, jobs, incomes, and inequality. The lab also studies companies employing AI (see findings of one call center project).

CRFM launched in 2021 to understand and shape the development of foundation models. “We thought that this technological revolution was not just something that should be done by technologists or people in companies, but that Stanford had a major role to play in shaping the way that things went,” said CRFM Director Percy Liang.

The center focuses on both technical advances and societal impact. CRFM launched with essential reading on foundation models’ opportunities and challenges; developed an evaluation framework called HELM that has analyzed 30+ foundation models; graded AI model companies on transparency; built valuable new datasets in robotics and law; and led technical advances that changed how major technology companies build AI.

Educating Lawmakers

Stanford HAI's policy efforts center on promoting informed governance and regulation of AI. Through scholarly research, policy discussions, and engagements with governmental bodies, the institute’s faculty have played a pivotal role in shaping AI policy at the state, national, and global levels. Notable engagements include testimony before various Senate and House committees, meetings with agencies including the U.S. Department of Commerce and the Federal Trade Commission, and a meeting of Stanford HAI Co-director Li and Senior Fellow Rob Reich with President Biden.

Stanford HAI also launched a policy boot camp — an intensive three-day educational event to help congressional aides better understand the technology and its wide impacts across industries. 

“This field is moving so quickly that it’s easy for policy analysts and others to get lost in the details,” said Russell Wald, Stanford HAI deputy director. “We wanted policy experts to learn from the nation’s top AI experts, understand the technology, and really appreciate its risks and rewards.”

A key policy effort that Stanford HAI has supported from day one is a national AI research resource. Etchemendy and Li saw an imbalance of power in AI, where progress was limited to the companies that own massive datasets and can afford expensive compute. To provide a counterweight, they called for a national AI research resource for academia and nonprofits. Stanford HAI leaders organized universities and tech companies to support this endeavor in 2020, wrote a blueprint for the resource in 2021, and served on key task forces.

Now these efforts are bearing fruit. In 2023, Senators Martin Heinrich, Todd Young, Cory Booker, and Mike Rounds introduced the CREATE AI Act, and this year the National Science Foundation launched a pilot of the program.

“These moves will rebalance the AI ecosystem and ensure AI is created not just for profit but for the public good,” noted Wald, one of the earliest champions of the National AI Research Resource (NAIRR).

Building Connections with Industry

Recognizing the need for collaboration between academia and industry, Stanford HAI established its industry affiliate program. 

“This is a chance for us to learn from the actual problems that these corporate partners face in their businesses so that the kind of research we're doing has a real-world impact,” said Stanford HAI Co-director James Landay, an institute founder. “It's also a chance for us to influence them on what human-centered AI should be so that they start to practice something closer to that model in their own work over time.”

Stanford HAI works with companies from every sector, from traditional technology to retail, banking, consulting, and more. Affiliates’ employees work directly with faculty, attend workshops, participate in executive education, and even come to campus as visiting scholars. The program has created more than 50 research collaborations and delivered $10 million in research grants and $9 million in cloud computing credits to Stanford scholars. 

“Our goal with the program is to become a catalyst for both companies and researchers globally as they develop and deploy AI in a way that benefits the world, with human-centered values at the technology’s core,” said Panos Madamopoulos-Moraris, Stanford HAI’s Managing Director for Industry Programs and Partnerships. 

More to Come

In just five years, Stanford HAI has surpassed its founders’ expectations, making significant impacts in policy, research, and education. However, the future of AI scholarship may transcend the confines of Stanford University. 

"The science has outgrown the university model of research," Etchemendy remarked. As AI models become larger and more complex, scholars need to seek resources beyond traditional university settings to remain at the forefront of the field.

He drew a parallel with high-energy physics. Just as universities partnered with governmental organizations to build large-scale research facilities such as linear accelerators for particle physics, he argued, AI research now warrants a similar approach.

"These large foundation models have outgrown what universities can do," Etchemendy said. While industry has intervened, its focus on commercial applications often fails to prioritize broader knowledge dissemination.

In response, Stanford HAI leaders advocate for a novel approach—a collaborative lab environment funded by philanthropy and government support. Such a setting would not only provide cutting-edge training opportunities for students but also enable scholars to delve deeper into research on these massive AI models.

"This is our next step," Etchemendy said. While current efforts are essential, he said, we need to grow. “By embracing this new element, we can continue our mission to advance AI research for the betterment of society.”


Three Standout Projects

AI algorithms often are trained on adult data, which can skew results when evaluating children. A new perspective piece lays out an approach for pediatric populations.

RAISE-Health

In response to rapid advances in AI and the urgent need to define its responsible use in health and medicine, Stanford Medicine and Stanford HAI launched RAISE-Health (Responsible AI for Safe and Equitable Health) in June 2023. This initiative seeks to address critical ethical and safety issues surrounding AI innovation and to guide stakeholders through this complex and evolving field. Co-led by Stanford School of Medicine Dean Lloyd Minor and Fei-Fei Li, RAISE-Health sets out to elevate clinical care outcomes through AI integration, expedite research to tackle healthcare's most pressing issues, and educate patients, caregivers, and researchers on navigating AI advancements. This spring the initiative will host its first conference.


AI Index

The AI Index tracks the progress of this technology through comprehensive data and original analysis. It gauges rapid strides made in research and development, technical performance, ethics considerations, economic impact, education implications, policy and regulation, diversity, public sentiment, and beyond. Armed with this information, policymakers, researchers, journalists, executives, and the general public can better understand the complex world of AI, make informed decisions, and prioritize advances in human-centered AI. 

A volunteer looks at her tablet among other volunteers sorting through boxes.

Social Sector AI

Nonprofits and philanthropic organizations wield significant influence in shaping the trajectory of AI, ensuring it evolves as a force for societal benefit. HAI aims to be a nexus for this sector and technical experts. To this end, HAI initiated a national AI for Social Impact survey and will roll out an education and convening program tailored for nonprofit leaders.