HAI Weekly Seminar with Vael Gates | Stanford HAI
Seminar

Status
Past
Date
Wednesday, June 1, 2022, 10:00 AM - 11:00 AM PDT
Location
Virtual
Topics
Ethics, Equity, Inclusion
Overview

Researcher Perceptions of Current and Future AI

Watch Event Recording
Event Contact
Kaci Peel
kpeel@stanford.edu

Artificial intelligence (AI) has enormous potential for both positive and negative impact, especially as we move from current-day systems toward more capable systems in the future. However, as a society we lack an understanding of how the developers of this technology, AI researchers, perceive the benefits and risks of their work, both in today's systems and in their future impacts. In this talk, Gates will present results from over 70 interviews with AI researchers, asking questions ranging from "What do you think are the largest benefits and risks of AI?" to "If you could change your colleagues' perception of AI, what attitudes/beliefs would you want them to have?"

READINGS:

  • “The case for taking AI seriously as a threat to humanity” by Kelsey Piper (Vox)

  • Human-Compatible, by Stuart Russell

  • The Alignment Problem, by Brian Christian

  • The Precipice: Existential Risk and the Future of Humanity, by Toby Ord

  • The Most Important Century, specifically "Forecasting Transformative AI", by Holden Karnofsky

TECHNICAL READINGS:

  • Empirical work by DeepMind's Safety team on alignment

  • Empirical work by Anthropic on alignment 

  • Talk (and transcript) by Paul Christiano describing the AI alignment landscape in 2020

  • Podcast (and transcript) by Rohin Shah, describing the state of AI value alignment in 2021

  • Alignment Newsletter and ML Safety Newsletter

  • Unsolved Problems in ML Safety by Hendrycks et al. (2022)

  • Alignment Research Center

  • Interpretability work aimed at alignment: Elhage et al. (2021) and Olah et al. (2020)

  • AI Safety Resources by Victoria Krakovna (DeepMind) and Technical Alignment Curriculum

FUNDING:

  • Open Philanthropy Graduate Student Fellowship

  • Open Philanthropy Faculty Fellowship (faculty and others can reach out to OpenPhil directly as well)

  • FTX Future Fund

  • Long-Term Future Fund

STANFORD RESOURCES:

  • Stanford Center for AI Safety

  • Stanford Existential Risk Initiative

Contact Vael Gates at vlgates@stanford.edu for further questions or collaboration inquiries.

Speaker

Vael Gates

HAI Network Affiliate

Vael received their Ph.D. in Neuroscience (Computational Cognitive Science) from UC Berkeley in 2021, where they worked on formalizing and testing computational cognitive models of social collaboration.
