News | Announcement

HAI Welcomes New Associate Directors

Date: December 01, 2019
Topics: Arts, Humanities; Privacy, Safety, Security; Human Reasoning

Additional faculty director appointments demonstrate the Institute's commitment to focusing on the human impact of artificial intelligence.

The Stanford Institute for Human-Centered Artificial Intelligence has added two new faculty Associate Directors to its leadership team: Professors Michele Elam and Daniel Ho.

Elam is the William Robertson Coe Professor of Humanities in the English Department and the Olivier Nomellini Family University Fellow in Undergraduate Education. Affiliated with the Center for Comparative Studies in Race & Ethnicity, she is also on the Advisory Boards of Stanford's Program in African & African American Studies and the Program in Feminist, Gender and Sexuality Studies, and serves on the Director's Council for the Hasso Plattner Institute of Design (the d.school).

Ho is the William Benjamin Scott and Luna M. Scott Professor of Law, Professor of Political Science, a Senior Fellow at the Stanford Institute for Economic Policy Research, and a Faculty Fellow at the Center for Advanced Study in the Behavioral Sciences (CASBS).

"We are thrilled that Michele and Dan have agreed to take leadership roles within HAI," said HAI Co-Director John Etchemendy. "Their backgrounds in the humanities, social science, and law, as well as their history of interdisciplinary collaboration, make them ideal choices to support HAI's mission of advancing AI research, education, policy, and practice to improve the human condition."

"Even as we maintain our emphasis on foundational research into the next generation of AI technologies and applications that augment human capabilities, the profound impact AI will have on people and society calls for broader and deeper collaboration across disciplines than we have seen in any previous technological revolution," said Co-Director Fei-Fei Li. "Michele and Dan will bring important new perspectives and leadership into this discussion."

Elam's research in interdisciplinary humanities connects literature with the social sciences to examine changing cultural interpretations of gender and race. Her work is informed by the understanding that racial perception in particular impacts outcomes for health, wealth, and social justice. Elam's scholarly background will help expand HAI's engagement with the humanities and arts, especially literature, film, theater, and visual and graphic arts, particularly around issues of equity.

"Cultural narratives shape the public imagination about emerging technologies, and storytelling impacts, implicitly or explicitly, everything from product design to public policy," Elam said. "I am excited to bring the study of the arts, both those engaging with and those generated by AI technologies, to advance our understanding of the 'human' in human-centered AI. I am so honored and excited to join this truly interdisciplinary team."

Ho's scholarship centers on quantitative empirical legal studies, with a substantive focus on administrative law and regulatory policy, antidiscrimination law, and courts. He is also the Director of the Regulation, Evaluation, and Governance Lab (RegLab) at Stanford, which partners with government agencies to design and evaluate programs, policies, and technologies that modernize governance. With his background in policy, regulation, and government use of AI, Ho will have a particular focus on promoting HAI's engagement with local, national, and global policymakers and governments.

"AI is one of the defining challenges of the next generation of law and governance," Ho said. "I'm thrilled to be joining the HAI team to work on these critical questions at the intersection of law and technology."

As Associate Directors, Michele and Dan will continue HAI's work to engage the broadest possible community of faculty and students across Stanford, contribute to the development and execution of HAI's research and programs, and lead engagement with external stakeholders, including civil society, government, and industry, related to their areas of expertise.

Contributor(s): HAI Staff

Related News

Smart Enough to Do Math, Dumb Enough to Fail: The Hunt for a Better AI Test
Andrew Myers | News | Feb 02, 2026
Topics: Foundation Models; Generative AI; Privacy, Safety, Security

A Stanford HAI workshop brought together experts to develop new evaluation methods that assess AI's hidden capabilities, not just its test-taking performance.


Musk's Grok AI Faces More Scrutiny After Generating Sexual Deepfake Images
PBS NewsHour | Media Mention | Jan 16, 2026
Topics: Privacy, Safety, Security; Regulation, Policy, Governance; Ethics, Equity, Inclusion

Elon Musk was forced to put restrictions on X and its AI chatbot, Grok, after its image generator sparked outrage around the world. Grok created non-consensual sexualized images, prompting some countries to ban the bot. Liz Landers discussed Grok's troubles with Riana Pfefferkorn of the Stanford Institute for Human-Centered Artificial Intelligence.


How AI Shook The World In 2025 And What Comes Next
CNN Business | Media Mention | Dec 30, 2025
Topics: Industry, Innovation; Human Reasoning; Energy, Environment; Design, Human-Computer Interaction; Generative AI; Workforce, Labor; Economy, Markets

HAI Co-Director James Landay and HAI Senior Fellow Erik Brynjolfsson discuss the impacts of AI in 2025 and the future of AI in 2026.
