
Embedding the Human in AI Research

October 18, 2019
How can we integrate the human into work on artificial intelligence? How can we best define “human-centered”? Can HAI develop a mechanism that facilitates collaboration across disciplines to promote human-centered AI? These were some of the central questions that brought together 15 Stanford faculty members and researchers from the social sciences, humanities, and computer science for the “Embedding the Human in AI Research” workshop.
As ethical AI guidelines are springing up, central questions of human-centeredness and effective collaboration remain open.
Between 2011 and 2018, 84 ethical statements appeared globally, with 88% released after 2016 (Jobin, Ienca, & Vayena, “Artificial Intelligence: The Global Landscape of Ethics Guidelines,” Nature Machine Intelligence, 2019). Jobin et al. found that the most frequently addressed topics included transparency, justice and fairness, non-maleficence, responsibility, and privacy. Underrepresented was sustainability, defined as deploying AI to help protect the environment, improve the planet’s ecosystem, and promote peace. How do we put such ethical aspirations into action in HAI research? Can we develop a mechanism for HAI by which social scientists, humanists, and technical researchers collaborate from the very beginning when setting research priorities and formulating research questions?
Overall: The discussion was excellent. A number of participants were new faculty at Stanford. They expressed concerns about the time demands of interdisciplinary work, but were intrigued and pleased to be invited. Participants asked how cultural and structural approaches could be better integrated into AI research. While attention to ethics within technology is growing, that attention remains highly individualized, even though inequalities and biases can be systemic.