Stanford’s New Institute Will Unite Humanities and Computer Science to Study, Guide and Develop Human-Centric Artificial Intelligence

March 24, 2019

Sixty-four years after John McCarthy coined the term “artificial intelligence,” Stanford University has launched an initiative to bring a focus on humanity’s most pressing problems to the study and practice of AI.

“The creators of AI need to represent humanity,” said Fei-Fei Li, co-director of the newly formed Stanford Institute for Human-Centered Artificial Intelligence. “This requires a true diversity of thought across gender, age, and ethnicity and cultural background, as well as a diverse representation from different disciplines.”

Li, a computer scientist, AI pioneer, and former Google vice president, spoke Monday to an audience of more than 900 as the university introduced HAI, as the institute will be called. HAI, she said, will tap the expertise of nearly every department in the university, including engineering, robotics, statistics, philosophy, economics, anthropology, and law, and aims to influence policymakers as it develops new technologies and applications.

HAI, said John Etchemendy, the institute’s other co-director, will be guided by three core principles:
  • A bet that the future of AI will be inspired by our understanding of human intelligence.
  • The technology must be guided by our understanding of how it is impacting human society.
  • AI applications should be designed to enhance and augment what humans can do.

Stanford President Marc Tessier-Lavigne touched on the dual nature of artificial intelligence: “AI has the potential to change society for the better in so many ways, from promising medical applications to vastly safer cars. But the advance of AI carries risk, from job insecurity to the influence of AI-generated content on social media to the potential for bias in machine learning. And now is the moment to ensure that we are embarking along a path to develop technology that will serve, augment, and complement humanity, not replace or divide it.”

The institute plans to hire at least 20 new faculty members from fields spanning the humanities, engineering, medicine, the arts, and the basic sciences, with a particular interest in those whose work contributes to HAI’s mission. Approximately 200 faculty members have already signed on to devote at least part of their time to HAI.

Microsoft’s Bill Gates spoke at the launch event, along with Reid Hoffman, co-founder of LinkedIn; Demis Hassabis, co-founder of DeepMind; and Eric Horvitz of Microsoft Research. HAI’s advisory council includes Hoffman, who will serve as chair; Jim Breyer of Breyer Capital and Accel Partners; former Yahoo CEO Marissa Mayer; Yahoo co-founder Jerry Yang; former IBM CEO Sam Palmisano; and Google’s Eric Schmidt.

The institute will be open to researchers from other universities, policymakers, journalists, and leaders of corporations. Nonprofits can work with HAI’s faculty to develop new solutions using AI, such as helping emergency room doctors make better decisions about patient care under stressful conditions.

Battling algorithmic bias

Although HAI’s official launch was March 18, the institute has already provided support to roughly 50 interdisciplinary research teams, including a project to assist the resettlement of refugees, a system to improve healthcare delivery in hospital intensive care units, and a study of the impact of autonomous vehicles on social governance and infrastructure. HAI faculty members are now conducting research on topics related to the impact of AI and related technologies on society.

Robert Reich, the former Secretary of Labor in the Clinton administration, said “the rapid advance of AI and the quest for artificial general intelligence raise profound ethical, political, and social questions.” Reich said he is working to integrate the research and teaching efforts of engineers, social scientists, and humanists.

Susan Athey, who studies the economics of technology, said that it can be difficult to foresee the risks of a particular algorithm, such as the use of AI to score loan applications. Indeed, bias inadvertently programmed into algorithms is already a serious issue: there are facial recognition programs that can’t distinguish one Black man from another, voice recognition applications that can’t understand anything but standard American English, and programs searching for candidates to fill particular jobs that sometimes default to men.

Kate Crawford, an NYU and Microsoft researcher who spoke at the launch event, co-authored a recent study which found that the algorithms increasingly used in police work are badly flawed. “They are built on data produced within the context of flawed, racially fraught and sometimes unlawful practices,” she wrote. The results of the study were shocking, Crawford said. “We have no review processes in AI similar to what we have in social science.”

The lack of oversight in AI research raises troubling questions, Crawford said. “Who is the ‘we’ that is responsible? Which communities will be represented?” It becomes a question of power, she said, and it is not at all clear that the private sector can regulate itself as AI becomes more pervasive.

As powerful as it is, AI still lacks the learning ability of children. An AI system can analyze vast amounts of data but has difficulty generalizing from small amounts, something that children are actually quite good at, said Alison Gopnik, a UC Berkeley researcher, during a panel discussion at the launch. Nor can AI systems go out on their own to gather the data needed to understand a problem, she said. Social learning, the ability to draw conclusions based on interactions with others, is a key to human learning, but machines can’t do it. Understanding how children learn will offer researchers a path to making artificial intelligence systems more intelligent, said Gopnik, who studies children’s cognitive behavior.

After a closing keynote by California Gov. Gavin Newsom, Li ended the event, saying, “I hope what we are creating here is a global hub and forum for this kind of ongoing conversation.”

Contributor(s)
Bill Snyder

Related News

What Your Phone Knows Could Help Scientists Understand Your Health
Katharine Miller
Mar 04, 2026
Stanford scientists have released an open-source platform that lets health researchers study the “screenome” – the digital traces of our daily lives – while protecting participants’ privacy.

How a HAI Seed Grant Helped Launch a Disease-Fighting AI Platform
Dylan Walsh
Mar 03, 2026
Stanford scientists in Senegal hunting for schistosomiasis—a parasitic disease infecting 200+ million people worldwide—used AI to transform local field work into satellite-powered disease mapping.

From Privacy to ‘Glass Box’ AI, Stanford Students Are Targeting Real-World Problems
Nikki Goth Itoi
Feb 27, 2026
An Amazon-backed fellowship will support 10 Stanford PhD students whose work explores everything from how we communicate to understanding disease and protecting our data.