The Stanford Open Virtual Assistant Lab, with sponsorship from the Alfred P. Sloan Foundation and the Stanford Institute for Human-Centered Artificial Intelligence (HAI), is organizing an invitation-only workshop focused on the concept of a public AI Assistant to World Wide Knowledge (WWK) and its implications for the future of the Free Web.
In this workshop we will ask: How might we design information systems for authenticity? We will bring together technologists, journalists, legal experts, and archivists for an interdisciplinary conversation about declining trust in digital content and how we might bolster trust in our information ecosystems.
This workshop will highlight the significant impact of AI applications in Department of Energy (DOE) science by showcasing SLAC's research program, which includes national-scale science facilities such as particle accelerators, x-ray lasers, and the Rubin Observatory.
Congressional staff play a key role in shaping and developing policy on critical technology areas such as artificial intelligence (AI).
Barwise Room, Cordura Hall
Experts from diverse fields convene to discuss what digital technologies can do to enhance civic debate and de-escalate political conflict.
In the last year, open foundation models have proliferated widely. Given the rapid adoption of these models, cultivating a responsible open-source AI ecosystem is crucial and urgent. Our workshop presents an opportunity to learn from experts in different fields who have worked on responsible release strategies, risk mitigation, and policy interventions that can help address these challenges.
The First Meeting of the IEEE Planet Positive 2030 Community: Advancing Technology for a Sustainable Planet
The workshop convened leading academics, computer vision experts, and representatives from civil society, government, and industry to discuss critical questions and develop a whitepaper that makes recommendations related to assessing the performance of facial recognition technology.
Workshop Leader: Emilie Silva
A one-day interdisciplinary workshop involving Stanford faculty and researchers, a select number of outside academics from other institutions, and a small number of private sector and governmental analysts to focus on the intersection of AI and various aspects of international security. The goal was to identify concrete research agendas and synergies, identify gaps in our understanding, and build a network of scholars and experts to address these challenges.
The past several years have seen startling advances in artificial intelligence and machine learning, driven in part by advances in deep neural networks. AI-enabled machines can now meet or exceed human abilities in a wide range of tasks, including chess, Jeopardy!, Go, poker, object recognition, and driving in some settings. AI systems are being applied to solve a range of problems in transportation, finance, stock trading, health care, intelligence analysis, and cybersecurity. Despite calls from prominent scientists to avoid militarizing AI, nation-states are certain to use AI and machine learning tools for national security purposes.
A technology that has the potential for such sweeping changes across human society should be
evaluated for its potential effects on international stability. Many national security applications of
AI could be beneficial, such as advanced cyber defenses that can identify new malware, automated
computer security tools to find and patch vulnerabilities, or machine learning systems to uncover
suspicious behavior by terrorists. Current AI systems have substantial limitations and
vulnerabilities, however, and a headlong rush into national security applications of artificial
intelligence could pose risks to international stability. Some security-related applications of AI could be destabilizing, and competitive dynamics between nations could lead to harmful consequences such as a "race to the bottom" on AI safety. Other security-related applications of AI could improve international stability.
CNAS is undertaking a two-year, in-depth, interdisciplinary project to examine how artificial
intelligence will influence international security and stability. It is critical for global stability to
begin a discussion about ways to mitigate the risks while taking advantage of the benefits of
autonomous systems and artificial intelligence. This project will build a community from three sectors that often do not intersect: AI researchers in academia and business; academic experts in international security; and policy practitioners in government, both civilian and military. Through a series of workshops, commissioned papers, and
reports, this project will foster a community of practice and begin laying the foundations for a field
of study on AI and international security. The project will conclude with recommendations to
policymakers for ways to capitalize on the potential stabilizing benefits of artificial intelligence,
while avoiding uses that could undermine stability.