In the six months since we launched the Stanford Institute for Human-Centered Artificial Intelligence (HAI), the environment around us has continued to evolve. Public trust in many technology companies, governments, and even academic institutions has been diminished, and in some cases, deservedly so. In recent weeks, we have seen growing attention to the topic of governance and influence at all institutions. At Stanford HAI, we welcome this increased scrutiny, even when it identifies our own shortcomings.

As an academic organization whose purpose is to study artificial intelligence and its impact on society, as well as to educate and serve a global community, my colleagues and I believe it is important that we be transparent about our values, methods, and motivations. In that spirit, we have published a values statement on our website.

Our stated mission at HAI is to advance AI research, education, policy, and practice to improve the human condition. We take that mission seriously. For example, we do not fund applications of AI unless they are aimed, directly or indirectly, at enhancing human wellbeing. More generally, we promote basic and applied AI research that addresses fundamental questions about the technology, or that enables it to interact more flexibly with human users and to accommodate their needs.  

In the policy sphere, we make no pretense of having answers to the many difficult questions raised by AI and its increasing impact on society. We are, however, committed to understanding that impact, supporting evidence-based policy research, and convening stakeholders from industry, government, academia, and civil society to discuss and debate how the technology might be steered in the most productive direction. 

Finally, we are committed to an ambitious educational mission that extends well beyond the walls of Stanford. We fundamentally believe that policy makers, journalists, corporate leaders, and other professionals need to understand the technology in order to separate hype from reality, and to distinguish genuine risks from the fears of science fiction. At the same time, AI engineers need exposure to ethical frameworks that might help them assess their responsibilities to the broader communities affected by the products they develop and deploy.

HAI is an ambitious, even daunting endeavor, and will require significant financial support to make progress on its mission. Accordingly, we are seeking a broad base of funding from five sources: federal research funding, foundation support, individual philanthropy, corporate philanthropy, and corporate research grants. Drawing on the broadest possible base helps ensure that the Institute is not dependent on any single source of funding.

While we value and appreciate the support of donors and partners, HAI maintains complete intellectual and operational independence. No outside entity sets our research agenda or determines our research outcomes or how they are disseminated. While we welcome advice from donors and non-donors alike, we determine our own operational activities, including decisions related to employment, policy recommendations, and events. We have published a more detailed document describing our fundraising policy.

Our research aims not only to explore new possibilities for AI, but also to study and understand its impact on society, including critical issues of bias, privacy, ethics, safety, transparency, and the changing nature of work and automation. Stanford HAI can only fulfill these ambitions by engaging deeply with all stakeholders who are determining the future of the technology. We look to a wide variety of organizations for ideas, resources, and perspectives, particularly as they formulate and drive AI innovation, policy, and practice.

If we are to play a useful role in guiding such a powerful technology, we must commit to representing all stakeholders, as well as promoting dialogue and engagement among these stakeholder groups. In particular, since the human impact of AI will most directly be driven by technology companies, it is essential that their perspective be fairly represented alongside the perspectives of those who are critical of the industry.

In my last speech as provost of Stanford, in February 2017, I said, “Universities must remain open forums for contentious debate….  [We] all need worthy opponents to challenge us in our search for truth. It is absolutely essential to the quality of our enterprise.” This remains true today and applies equally to our work at HAI.  


We view our work as an open dialogue and welcome your comments and perspectives. Please feel free to reach out to us with questions or suggestions at HAI-Institute@stanford.edu.
