News | Announcement

On Independence and Influence

Date
October 10, 2019
Topics
Privacy, Safety, Security
Human Reasoning

In the six months since we launched the Stanford Institute for Human-Centered Artificial Intelligence (HAI), the environment around us has continued to evolve. Public trust in many technology companies, governments, and even academic institutions has been diminished, and in some cases, deservedly so. In recent weeks, we have seen growing attention to the topic of governance and influence at all institutions. At Stanford HAI, we welcome this increased scrutiny, even when it identifies our own shortcomings. As an academic organization whose purpose is to study artificial intelligence and its impact on society, as well as to educate and serve a global community, my colleagues and I believe it is important that we be transparent about our values, methods, and motivations. In that spirit, we have published a values statement on our website.

Our stated mission at HAI is to advance AI research, education, policy, and practice to improve the human condition. We take that mission seriously. For example, we do not fund applications of AI unless they are aimed, directly or indirectly, at enhancing human wellbeing. More generally, we promote basic and applied AI research that addresses fundamental questions about the technology, or that enables it to interact more flexibly with human users and to accommodate their needs.

In the policy sphere, we make no pretense of having answers to the many difficult questions raised by AI and its increasing impact on society. We are, however, committed to understanding that impact, supporting evidence-based policy research, and convening stakeholders from industry, government, academia, and civil society to discuss and debate how the technology might be steered in the most productive direction.

Finally, we are committed to an ambitious educational mission that extends well beyond the walls of Stanford.
We fundamentally believe that policymakers, journalists, corporate leaders, and other professionals need to understand the technology in order to separate hype from reality, and to distinguish genuine risks from the fears of science fiction. At the same time, AI engineers need exposure to ethical frameworks that might help them assess their responsibilities to the broader communities affected by the products they develop and deploy.

HAI is an ambitious, even daunting endeavor, and will require significant financial support to make progress on its mission. Accordingly, we are seeking a broad base of funding from five sources: federal research funding, foundation support, individual philanthropy, corporate philanthropy, and corporate research grants. Seeking financial support from the broadest possible base helps ensure that the Institute is not dependent on any single source of funding.

While we value and appreciate the support of donors and partners, HAI maintains complete intellectual and operational independence. No outside entities set our research agenda or determine its outcomes or their dissemination. While we welcome advice from donors and non-donors alike, we determine our own operational activities, including decisions related to employment, policy recommendations, and events. We have published a more detailed document describing our fundraising policy.

Our research aims not only to explore new possibilities for AI, but also to study and understand its impact on society, including critical issues of bias, privacy, ethics, safety, transparency, and the changing nature of work and automation. Stanford HAI can only fulfill these ambitions by engaging deeply with all stakeholders who are determining the future of the technology. We look to a wide variety of organizations for ideas, resources, and perspectives, particularly as they formulate and drive AI innovation, policy, and practice.
If we are to play a useful role in guiding such a powerful technology, we must commit to representing all stakeholders, as well as promoting dialogue and engagement among these stakeholder groups. In particular, since the human impact of AI will most directly be driven by technology companies, it is essential that their perspective be fairly represented alongside those who are critical of the industry.

In my last speech as provost of Stanford, in February 2017, I said, “Universities must remain open forums for contentious debate…. [We] all need worthy opponents to challenge us in our search for truth. It is absolutely essential to the quality of our enterprise.” This remains true today and applies equally to our work at HAI.

We view our work as an open dialogue and welcome your comments and perspectives. Please feel free to reach out to us with questions or suggestions at HAI-Institute@stanford.edu.

Contributor(s)
HAI Co-Director John Etchemendy

Related News

Be Careful What You Tell Your AI Chatbot
Nikki Goth Itoi
Oct 15, 2025
News

A Stanford study reveals that leading AI companies are pulling user conversations for training, highlighting privacy risks and a need for clearer policies.

How Congress Could Stifle The Onslaught Of AI-Generated Child Sexual Abuse Material
Tech Policy Press
Sep 25, 2025
Media Mention

HAI Policy Fellow Riana Pfefferkorn advises on ways in which the United States Congress could move the needle on model safety regarding AI-generated CSAM.


How To Keep Your Private Messages Truly Private
CNN
Sep 09, 2025
Media Mention

HAI Policy Fellow Riana Pfefferkorn discusses scenarios when third parties might be able to access personal messaging data and how to keep those forms of digital communication private.
