
Leadership

Denning Co-Directors

Vice-Director and Faculty Director of Research

Associate Directors

Michele Elam

William Robertson Coe Professor of Humanities, Department of English, Center for Comparative Studies in Race & Ethnicity

"The arts and humanities are key to realizing HAI’s goal of human-centered AI, in which prioritizing issues of equity, ethics, and social impact at the very outset of a technology’s design and throughout its use is essential."

Surya Ganguli

Associate Professor of Applied Physics, and by courtesy, of Neurobiology, of Electrical Engineering, and of Computer Science, Stanford University

"We want to solve the mysteries of biological intelligence and create next-generation artificial intelligence that empowers humanity."

Daniel E. Ho

William Benjamin Scott and Luna M. Scott Professor of Law; Professor of Political Science; Senior Fellow, SIEPR; Associate Director, HAI; Faculty Fellow, CASBS; Faculty Director, Stanford RegLab

"To make AI more human-centered, we need to develop partnerships between technologists and subject matter experts to identify the most compelling problems for which AI tools can be designed, piloted, and evaluated."

Curtis Langlotz

Professor of Radiology, Medicine, and Biomedical Data Science, Director of the Center for Artificial Intelligence in Medicine & Imaging, and Associate Director of Stanford HAI

"The latest AI science presents a tremendous opportunity to improve human health. Stanford HAI engages experts across the University to assure biomedical AI systems are developed safely, fairly, and ethically."

Christopher Manning

Thomas M. Siebel Professor in Machine Learning, Professor of Linguistics and of Computer Science, Stanford University

"As AI systems start to be used in and affect the world, one of Stanford HAI’s biggest campus roles is broadening the pursuit of the technology and considering the economic context, values, biases, and dangers that might be possible in these systems."