HAI Weekly Seminar with Carlos Ernesto Guestrin | Stanford HAI
Status: Past
Date: Wednesday, March 9, 2022, 10:00 AM - 11:00 AM PST
Location: Virtual
Event Contact: Kaci Peel, kpeel@stanford.edu

Related Events

Matt Beane | Precision Proactivity: Measuring Cognitive Load in Real-World AI-Assisted Work
Event | Apr 20, 2026, 12:00 PM - 1:00 PM

Systems like ChatGPT and Claude assist billions through proactive dialogue—offering unsolicited, task-relevant information. Drawing on Cognitive Load Theory, we study how cognitive load shapes performance in AI-assisted knowledge work.

AI+Science: Accelerating Discovery
Conference | May 5, 2026, 8:30 AM - 5:00 PM

AI+Science: Accelerating Discovery is an interdisciplinary conference bringing together researchers across physics, mathematics, chemistry, biology, neuroscience, and more to examine how AI is reshaping scientific discovery.

Wolfgang Lehrach | Code World Models for General Game Playing
Seminar | May 13, 2026, 12:00 PM - 1:15 PM

While Large Language Models (LLMs) show promise in many domains, relying on them for direct policy generation in games often results in illegal moves and poor strategic play.

How Can You Trust Machine Learning?

Machine learning (ML) and AI systems are becoming integral to every aspect of our lives. As these technologies make more decisions for us, and the underlying ML systems become increasingly complex, it is natural to ask: How can I trust machine learning? In this talk, Carlos Ernesto Guestrin will present a framework anchored on three pillars—clarity, competence and alignment—for driving increased trust in ML. For clarity, Guestrin will cover methods to make the predictions of machine learning more explainable. For competence, he will focus on means for evaluating and testing ML models with the same rigor we apply to software products. For alignment, Guestrin will describe the challenges of aligning the behaviors of an AI with the values we want to reflect in the world, along with methods that can yield more aligned outcomes. The discussion will touch on both algorithmic and human processes that can help lead to AIs that are more effective, impactful and trustworthy.
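As a concrete illustration of the "clarity" pillar, one widely used family of methods explains an individual black-box prediction by perturbing the input and observing how the output shifts: features whose perturbation moves the prediction most are locally most important. The sketch below is a minimal, hypothetical example of that idea — the loan-scoring model, feature names, and noise settings are invented for illustration, not taken from the talk:

```python
import random

# Hypothetical black-box model: a toy "loan approval" scorer.
# Higher output (in [0, 1]) means a stronger approval signal.
def black_box_model(features):
    income, debt, years_employed = features
    score = 0.5 + 0.3 * income - 0.4 * debt + 0.1 * years_employed
    return max(0.0, min(1.0, score))

def perturbation_importance(model, instance, n_samples=500, noise=0.1, seed=0):
    """Estimate each feature's local importance by adding small Gaussian
    noise to that feature alone and averaging the absolute change in the
    model's output. The model is treated purely as a black box."""
    rng = random.Random(seed)
    base = model(instance)
    importances = []
    for i in range(len(instance)):
        total = 0.0
        for _ in range(n_samples):
            perturbed = list(instance)
            perturbed[i] += rng.gauss(0, noise)
            total += abs(model(perturbed) - base)
        importances.append(total / n_samples)
    return importances

# Explain one prediction for a single (invented) applicant.
instance = [0.6, 0.4, 0.3]  # income, debt, years_employed (normalized)
imps = perturbation_importance(black_box_model, instance)
```

Because the toy model weights debt most heavily (|-0.4|), then income (0.3), then tenure (0.1), the estimated importances recover that ordering — the kind of local explanation that lets a user see *why* this particular prediction came out as it did, without access to the model's internals.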

Carlos Ernesto Guestrin

Professor of Computer Science, Stanford University
