Ram Shankar Siva Kumar | A Few Useful Lessons about AI Red Teaming | Stanford HAI

Seminar

Status: Past
Date: Wednesday, October 18, 2023, 10:00 AM - 11:00 AM PDT
Location: Hybrid

AI red teaming is exploding in popularity: at DEF CON this year, more than 2,500 hackers descended on the conference to red-team AI systems. Every organization investing in AI, from Microsoft to Google to Meta to NVIDIA, has AI red teams actively securing its AI systems.

But what does it even mean to red-team AI systems? Grounded in case studies from Microsoft, this talk contextualizes how red-teaming AI systems differs from red-teaming traditional software, discusses how it intersects with earlier lines of inquiry such as adversarial examples, and distills eight lessons from a practitioner's perspective.

Speaker
Ram Shankar Siva Kumar
Data Cowboy, Microsoft; Author, "Not With a Bug"

Watch Event Recording

Event Contact
Madeleine Wright
mwright7@stanford.edu
More from HAI and SDS Seminars
  • Hari Subramonyam | Learning by Creating: A Human-Centered Vision for AI in Education
    Seminar, Mar 11, 2026, 12:00 PM - 1:15 PM
Related Events

  • Zoë Hitzig | How People Use ChatGPT
    Mar 09, 2026, 12:00 PM - 1:00 PM
    Despite the rapid adoption of LLM chatbots, little is known about how they are used. We approach this question theoretically and empirically, modeling a user who chooses whether to complete a task herself, ask the chatbot for information that reduces decision noise, or delegate execution to the chatbot...

  • Joel Becker | Reconciling Impressive AI Benchmark Performance with Limited Developer Productivity Impacts
    Mar 16, 2026, 12:00 PM - 1:00 PM
    AI coding agents now complete multi-hour coding benchmarks with roughly 50% reliability, yet a randomized trial found experienced open-source developers took about 19% longer when allowed frontier AI tools than when tools were disallowed...