In this workshop, we will explore a suite of interactive tools, including the ChucK music programming language, ChAI, the Pandora audiovisual live coding environment, and Wekinator.
The Stanford Open Virtual Assistant Lab, with sponsorship from the Alfred P. Sloan Foundation and the Stanford Institute for Human-Centered Artificial Intelligence (HAI), is organizing an invitation-only workshop focused on the concept of a public AI Assistant to World Wide Knowledge (WWK) and its implications for the future of the Free Web.
In this workshop we will ask: How might we design information systems for authenticity? We will bring together technologists, journalists, legal experts, and archivists for an interdisciplinary conversation about declining trust in digital content and how we might bolster trust in our information ecosystems.
This workshop will highlight the significant impact of AI applications in Department of Energy (DOE) science by showcasing SLAC's research program, which includes national-scale science facilities such as particle accelerators, x-ray lasers, and the Rubin Observatory.
Congressional staff play a key role in shaping and developing policy on critical technology areas such as artificial intelligence (AI).
Barwise Room, Cordura Hall
In the last year, open foundation models have proliferated widely. Given the rapid adoption of these models, cultivating a responsible open-source AI ecosystem is crucial and urgent. Our workshop presents an opportunity to learn from experts in different fields who have worked on responsible release strategies, risk mitigation, and policy interventions that can help build that ecosystem.
HAI Faculty Associate Director Susan Athey and incoming HAI Senior Fellow Erik Brynjolfsson invited researchers working on AI and labor markets across the Stanford community to come together in a virtual event on May 18 to present and discuss ongoing research and build ties for future collaborations.
How is AI impacting the labor market? How is it changing labor demand and supply, occupations, hiring, labor mobility, firm organization, and behavior? Groups from disciplines across campus, including Economics, Business, Management Science and Engineering, Political Science, Sociology, and Computer Science, are working on these important questions from different angles, with different lenses and methodologies. The workshop aimed to bring this working group together to enable cross-disciplinary discussion and inspire future research collaborations.
Workshop Leader: Emilie Silva
A one-day interdisciplinary workshop involving Stanford faculty and researchers, a select number of academics from other institutions, and a small number of private-sector and government analysts, focused on the intersection of AI and various aspects of international security. The goal was to identify concrete research agendas and synergies, pinpoint gaps in our understanding, and build a network of scholars and experts to address these challenges.
The past several years have seen startling advances in artificial intelligence and machine learning, driven in part by advances in deep neural networks. AI-enabled machines can now meet or exceed human abilities in a wide range of tasks, including chess, Jeopardy, Go, poker, object recognition, and driving in some settings. AI systems are being applied to solve a range of problems in transportation, finance, stock trading, health care, intelligence analysis, and cybersecurity. Despite calls from prominent scientists to avoid militarizing AI, nation-states are certain to use AI and machine learning tools for national security purposes.
A technology that has the potential for such sweeping changes across human society should be evaluated for its potential effects on international stability. Many national security applications of AI could be beneficial, such as advanced cyber defenses that can identify new malware, automated computer security tools that find and patch vulnerabilities, or machine learning systems that uncover suspicious behavior by terrorists. Current AI systems have substantial limitations and vulnerabilities, however, and a headlong rush into national security applications of artificial intelligence could pose risks to international stability. Some security-related applications of AI could be destabilizing, and competitive dynamics between nations could lead to harmful consequences such as a "race to the bottom" on AI safety. Other security-related applications of AI could improve international stability.
CNAS is undertaking a two-year, in-depth, interdisciplinary project to examine how artificial intelligence will influence international security and stability. It is critical for global stability to begin a discussion about ways to mitigate the risks of autonomous systems and artificial intelligence while taking advantage of their benefits. This project will build a community from three sectors that often do not intersect – AI researchers in academia and business, international security scholars, and policy practitioners in government, both civilian and military. Through a series of workshops, commissioned papers, and reports, this project will foster a community of practice and begin laying the foundations for a field of study on AI and international security. The project will conclude with recommendations to policymakers for ways to capitalize on the potential stabilizing benefits of artificial intelligence while avoiding uses that could undermine stability.
This workshop, focused on "Uncertainty in AI Situations," asks researchers to consider what an AI can do when faced with uncertainty. Machine learning algorithms whose classifications rely on posterior probabilities of class membership often produce ambiguous results: due to insufficient training data or genuinely ambiguous cases, the likelihood of each outcome can be approximately even. In such situations, human programmers must decide how the machine handles ambiguity. Whether it makes a "best-fit" classification or reports the potential error, there is always a potential conflict between the mathematical rigor of the model and the ambiguity of real-world use cases.
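As a concrete illustration of the "best-fit versus report error" choice described above, here is a minimal sketch of a classifier with a reject option: it returns the most probable class only when the posterior probabilities are sufficiently separated, and otherwise reports ambiguity. The `margin` threshold and the toy data are illustrative assumptions, not part of the workshop materials.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy two-class data with a deliberately overlapping, ambiguous region.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, size=(100, 2)),
               rng.normal(+1.0, 1.0, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

clf = LogisticRegression().fit(X, y)

def classify_with_reject(model, x, margin=0.1):
    """Return the best-fit class, or None when the top posterior
    probabilities are approximately even (within `margin`)."""
    probs = model.predict_proba(x.reshape(1, -1))[0]
    top_two = np.sort(probs)[-2:]
    if top_two[1] - top_two[0] < margin:
        return None  # report ambiguity instead of forcing a best-fit label
    return int(np.argmax(probs))

# A point near the decision boundary is rejected; a clear point is not.
print(classify_with_reject(clf, np.array([0.0, 0.0])))  # expect None
print(classify_with_reject(clf, np.array([3.0, 3.0])))  # expect 1
```

This "reject option" is one standard way to surface, rather than hide, the near-even posteriors the workshop highlights; where to set the margin, and what a human does with the rejected cases, remain exactly the design questions the workshop raises.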
The workshop posed questions that begin the process of advancing AI toward a new intellectual understanding of the hardest problems in machine learning:
• How do researchers create training sets that engage with uncertainty, particularly when deciding between reflecting real-world data and curating data sets to avoid bias?
• How can we frame ontologies, typologies, and epistemologies that can account for, and help solve, ambiguity in data and indecision in AI?
Conversations about ethics and AI are commonplace today, but they are often pitched at a high level of generality or abstraction. In this workshop, we gathered together leading young scholars, chiefly philosophers, to discuss a more detailed research agenda with a particular focus on moral and political philosophy and their intersections with AI. Topics included AI and explainability, AI and value alignment, governance of AI, and more.