Human-Centered Artificial Intelligence (HAI) is an approach to AI that prioritizes human needs, values, and well-being throughout the development and deployment of AI systems. It emphasizes collaboration between humans and machines, ensuring that technology is built with empathy, ethics, and user experience in mind, and it integrates insights from fields such as computer science, psychology, ethics, and design to produce systems that are trustworthy, inclusive, and aligned with societal goals.
Explore Similar Terms:
Responsible AI | Ethical AI | Human-Computer Interaction (HCI)
In his talk, "'AI For Good' Isn't Good Enough: A Call for Human-Centered AI," Professor James Landay elaborates on his argument for an authentic human-centered AI.
In its fifth year, HAI catalyzed a multidisciplinary community spanning research, industry, policy, and civil society to ensure that artificial intelligence prioritizes humans.
Watch Tanner Lecturers Fei-Fei Li and Eric Horvitz discuss AI and human values.
“Leaky abstractions” are transforming software design and programming and leading to better, human-focused technology.
Stanford HAI’s upcoming conference challenges attendees to rethink AI systems with a “human in the loop” and consider a future where people remain at the center of decision making.
At HAI’s fall conference, speakers define what human-centered design looks like, challenge our current metrics of success, and call for “productive discomfort.”
In this podcast, HAI Executive Director Russell Wald explores how universities, policymakers, and industry must collaborate to keep AI human-centered. Wald shares takeaways from the AI Index, explains how China is narrowing the performance gap, and outlines why academic institutions are vital to ethical AI leadership.