Taming Silicon Valley: Peter Norvig in Conversation with Gary Marcus | Stanford HAI

Seminar


Status: Past
Date: Tuesday, September 24, 2024, 10:00 AM – 11:00 AM PDT
Location: Gates Computer Science Building, Room 119 (353 Serra Mall, Stanford, CA 94305)
Topics: Sciences (Social, Health, Biological, Physical); Democracy; Ethics, Equity, Inclusion; Healthcare; Government, Public Administration; Industry, Innovation; Law Enforcement and Justice

Abstract:

AI could make society or break it. It could revolutionize science, medicine, and technology, and deliver us a world of abundance and better health. Or it could lead to the downfall of democracy, an explosion in cybercrime, or possibly even worse. It’s also been wildly oversold. 

In this seminar, Gary Marcus, in conversation with moderator Peter Norvig, explains why current AI is both morally and technically inadequate, and what we need to do as a society – and as individual citizens – to get to AI that works for all of us.

Speakers
  • Gary Marcus, Emeritus Professor of Psychology and Neural Science, New York University (NYU); Founder and CEO, Geometric.AI
  • Peter Norvig, Distinguished Education Fellow, Stanford HAI
Event Contact
HAI Events Team
stanford-hai@stanford.edu