HAI Weekly Seminar with Vael Gates | Stanford HAI

Seminar

HAI Weekly Seminar with Vael Gates

Status
Past
Date
Wednesday, June 01, 2022, 10:00 AM - 11:00 AM PDT
Location
Virtual
Topics
Ethics, Equity, Inclusion
Overview
Watch Event Recording

Researcher Perceptions of Current and Future AI

Event Contact
Kaci Peel
kpeel@stanford.edu

Related Events

Caroline Meinhardt, Thomas Mullaney, Juan N. Pava, and Diyi Yang | How Can AI Support Language Digitization and Digital Inclusion?
Seminar | Apr 15, 2026, 12:00 PM - 1:15 PM

What does digital inclusion look like in the age of AI? Over 6,000 of the world’s 7,000-plus living languages remain digitally disadvantaged.

Juan Sebastián Gómez-Cañón | Challenges And Opportunities For Human-Centered Music Emotion Recognition
Seminar | Jun 03, 2026, 12:00 PM - 1:15 PM

Music is intertwined with human emotion, memory, and identity, making it a powerful medium for affective experience and regulation.

Artificial intelligence (AI) has enormous potential for both positive and negative impact, especially as we move from current-day systems toward more capable future systems. However, as a society we lack an understanding of how the developers of this technology, AI researchers, perceive the benefits and risks of their work, both in today's systems and in those to come. In this talk, Gates will present results from over 70 interviews with AI researchers, asking questions ranging from "What do you think are the largest benefits and risks of AI?" to "If you could change your colleagues' perception of AI, what attitudes/beliefs would you want them to have?"

READINGS:

  • “The case for taking AI seriously as a threat to humanity” by Kelsey Piper (Vox)

  • Human-Compatible, by Stuart Russell

  • The Alignment Problem, by Brian Christian

  • The Precipice: Existential Risk and the Future of Humanity, by Toby Ord

  • The Most Important Century, specifically "Forecasting Transformative AI", by Holden Karnofsky

TECHNICAL READINGS:

  • Empirical work by DeepMind's Safety team on alignment

  • Empirical work by Anthropic on alignment 

  • Talk (and transcript) by Paul Christiano describing the AI alignment landscape in 2020

  • Podcast (and transcript) by Rohin Shah, describing the state of AI value alignment in 2021

  • Alignment Newsletter and ML Safety Newsletter

  • Unsolved Problems in ML Safety by Hendrycks et al. (2022)

  • Alignment Research Center

  • Interpretability work aimed at alignment: Elhage et al. (2021) and Olah et al. (2020)

  • AI Safety Resources by Victoria Krakovna (DeepMind) and Technical Alignment Curriculum

FUNDING:

  • Open Philanthropy Graduate Student Fellowship

  • Open Philanthropy Faculty Fellowship (faculty and others can reach out to OpenPhil directly as well)

  • FTX Future Fund

  • Long-Term Future Fund

STANFORD RESOURCES:

  • Stanford Center for AI Safety

  • Stanford Existential Risk Initiative

Contact Vael Gates at vlgates@stanford.edu for further questions or collaboration inquiries.

Speaker

Vael Gates

HAI Network Affiliate

Vael received their Ph.D. in Neuroscience (Computational Cognitive Science) from UC Berkeley in 2021, where they worked on formalizing and testing computational cognitive models of social collaboration.
