Embedding the Human in AI Research | Stanford HAI

Workshop

Embedding the Human in AI Research

Status: Past
Date: Friday, October 18, 2019, 12:00 AM - 3:00 PM PDT
Topics: Ethics, Equity, Inclusion

Faculty Leaders: Jeff Hancock, Angèle Christin, Gaby Harari, and Londa Schiebinger

How can we integrate the human into work on artificial intelligence? How can we best define “human-centered”? Can HAI develop a mechanism that facilitates collaboration across disciplines to promote human-centered AI? These were some of the central questions that brought together 15 Stanford faculty members and researchers from the social sciences, humanities, and computer science for the “Embedding the Human in AI Research” workshop.

As ethical AI guidelines spring up, central questions of human-centeredness and effective collaboration remain open. Between 2011 and 2018, 84 ethical statements appeared globally, with 88% released after 2016 (Jobin, Ienca, & Vayena, “Artificial Intelligence: the global landscape of ethics guidelines,” Nature Machine Intelligence, 2019). Jobin et al. found that the top topics of interest included transparency, justice and fairness, non-maleficence, responsibility, and privacy. Not well represented was sustainability, defined as deploying AI to help protect the environment, improve the planet’s ecosystem, and promote peace. How do we put such ethical aspirations into action in HAI research? Can we develop a mechanism at HAI by which social scientists, humanists, and technical researchers collaborate from the very beginning, when setting research priorities and formulating research questions?

Overall, the discussion was excellent. A number of participants were new faculty at Stanford; they expressed concerns about the time demands of interdisciplinary work, but were intrigued and pleased to be invited. Participants raised questions about how cultural and structural approaches can be better integrated into AI research. While there is growing attention to ethics within technology, ethics remains very individualized, despite the fact that inequalities and biases can be systematic.

Related Events

Juan Sebastián Gómez-Cañón | Challenges And Opportunities For Human-Centered Music Emotion Recognition
Seminar | Jun 03, 2026, 12:00 PM - 1:15 PM

Music is intertwined with human emotion, memory, and identity, making it a powerful medium for affective experience and regulation.


AI+Science: Accelerating Discovery
Conference | May 05, 2026, 8:30 AM - 6:45 PM

AI+Science: Accelerating Discovery is an interdisciplinary conference bringing together researchers across physics, mathematics, chemistry, biology, neuroscience, and more to examine how AI is reshaping scientific discovery.


Wolfgang Lehrach | Code World Models for General Game Playing
Seminar | May 13, 2026, 12:00 PM - 1:15 PM

While Large Language Models (LLMs) show promise in many domains, relying on them for direct policy generation in games often results in illegal moves and poor strategic play.



Jeffrey Hancock
Harry and Norman Chandler Professor of Communication
Angèle Christin
Associate Professor of Communication, and, by courtesy, of Sociology, Stanford University | Senior Fellow, Stanford HAI
Gabriella Harari
Assistant Professor of Communication
Londa Schiebinger
John L. Hinds Professor of the History of Science, Stanford University