Upcoming Events | Stanford HAI


Previous Events at HAI

Uncertainty in AI
Workshop | Dec 10, 2019 | 3:00 PM - 4:00 PM

Faculty Leaders: Elaine Treharne and Mark Algee-Hewitt

 

This workshop, focused on “Uncertainty in AI Situations,” asks researchers to consider what an AI can do when faced with uncertainty. Machine learning algorithms whose classifications rely on posterior probabilities of membership often present ambiguous results where, due to unavailable training data or ambiguous cases, the likelihood of any outcome is approximately even. In such situations, the human programmers must decide how the machine handles ambiguity: whether it makes a “best-fit” classification or reports potential error, there is always a potential conflict between the mathematical rigor of the model and the ambiguity of real-world use cases.

Some questions that begin the process of advancing AI toward a new intellectual understanding of the trickiest problems in the machine-learning environment:

• How do researchers create training sets that engage with uncertainty, particularly when deciding between reflecting real-world data and curating data sets to avoid bias?
• How can we frame ontologies, typologies, and epistemologies that can account for, and help solve, ambiguity in data and indecision in AI?
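The design decision the workshop describes (commit to a “best-fit” label, or report potential error when the posterior probabilities are approximately even) can be sketched as a small abstaining classifier. The function name and the threshold value below are illustrative assumptions, not anything prescribed by the workshop:

```python
def classify_or_abstain(posteriors, threshold=0.6):
    """Return the index of the best-fit class, or None to report
    uncertainty when no class clearly dominates.

    posteriors -- class-membership probabilities summing to 1
    threshold  -- illustrative cutoff for committing to a label
                  (an assumption, not a value from the workshop)
    """
    best = max(range(len(posteriors)), key=lambda i: posteriors[i])
    if posteriors[best] < threshold:
        return None  # outcomes are approximately even: abstain
    return best

# A confident posterior commits to a label; a near-even one abstains.
print(classify_or_abstain([0.85, 0.10, 0.05]))  # → 0
print(classify_or_abstain([0.34, 0.33, 0.33]))  # → None
```

Either branch pushes the ambiguity somewhere: committing hides it from downstream users, while abstaining defers it to a human, which is exactly the trade-off between model rigor and real-world use that the workshop raises.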

Machine Learning
AI and Ethics
Workshop | Oct 30, 2019 | 9:00 AM - 3:00 PM

Faculty Leaders: Rob Reich and Seth Lazar

Conversations about ethics and AI are commonplace today, but they are often pitched at a high level of generality or abstraction. In this workshop, we brought together leading young scholars, chiefly philosophers, to discuss a more detailed research agenda with a particular focus on moral and political philosophy and their intersections with AI. Topics included AI and explainability, AI and value alignment, governance of AI, and more.

Embedding the Human in AI Research
Workshop | Oct 18, 2019 | 12:00 AM - 3:00 PM

Faculty Leaders: Jeff Hancock, Angèle Christin, Gaby Harari, and Londa Schiebinger

How can we integrate the human into work on artificial intelligence? How can we best define “human-centered”? Can HAI develop a mechanism that facilitates collaboration across disciplines to promote human-centered AI? These were some of the central questions that brought together 15 Stanford faculty members and researchers from the social sciences, humanities, and computer science for the “Embedding the Human in AI Research” workshop.

As ethical AI guidelines spring up, central questions of human-centeredness and effective collaboration remain open. Between 2011 and 2018, 84 ethical statements appeared globally, with 88% released after 2016 (Jobin, Ienca, & Vayena, “Artificial Intelligence: the global landscape of ethics guidelines,” Nature Machine Intelligence, 2019). Jobin et al. found that the top topics of interest included transparency, justice and fairness, non-maleficence, responsibility, and privacy. Not well represented was sustainability, defined as deploying AI to help protect the environment, improve the planet’s ecosystem, and promote peace. How do we put such ethical aspirations into action in HAI research? Can we develop a mechanism for HAI by which social scientists, humanists, and technical people collaborate from the very beginning when setting research priorities and formulating research questions?

Overall, there was excellent discussion. A number of participants were new faculty at Stanford; they expressed concerns about the time spent on interdisciplinary work but were intrigued and pleased to be invited. Participants raised questions about how cultural and structural approaches can be better integrated into AI research. While there is growing attention to ethics within technology, ethics is very individualized, despite the fact that inequalities and biases can be systematic.

Ethics, Equity, Inclusion
Environmental Intelligence: Applications of AI to climate change, sustainability and environmental health
Workshop | Jul 12, 2019 | 12:00 AM - 4:00 PM

Faculty Leaders: Kate Maher and Carissa Carter

In mid-July, a working group focused on AI for the environment convened to outline future directions that would leverage AI to address pressing environmental challenges, ranging from biodiversity and conservation biology to water availability and sustainable communities. The group focused on the concept of building a thrivable planet for all species, not just one that is merely habitable. With the backdrop of the Stanford Educational Farm, we used a human-centered design process to focus on how we might harness AI to uniquely address a range of stakeholder needs. Our objective was to develop an array of prototype projects that lead to insights about future directions for AI in the environmental and sustainability realms. Project prototypes included halting slavery in the seafood industry, intelligent tools for ensuring water and food security, and intelligent approaches for managing species migration. Based on these projects, we identified the following overarching themes that would be exciting to pursue through collaborative research: (1) predicting, detecting, and mitigating or incentivizing environmental transitions; (2) quantifying well-being and compatibility with one’s environment; (3) environmental justice and human rights; and (4) opening new data streams and achieving interoperability of existing data streams.

Energy, Environment
Future of Work Workshop
Workshop | Feb 01, 2019 | 12:00 AM - 3:00 PM

Workforce, Labor