Workshop on Interactive AI Systems for Live Audiovisual Performance | Stanford HAI

Status: Past
Date: Wednesday, March 5, 2025, 10:00 AM - 5:00 PM PST
Location: Gates Computer Science Building, Room 119
Topics: Arts, Humanities

Watch Event Recording

Overview

In this workshop, we will explore a suite of interactive tools, including the ChucK music programming language, ChAI, the Pandora audiovisual live coding environment, and Wekinator.

The proliferation of generative AI tools has raised important questions about how we (especially artists) create. Using AI tools in the creative process can blur the lines between creativity and curation and change how we navigate our relationship with labor and craft. It can feel as if we do less to do more, synthesizing previous or external inputs into new material with the help of generative AI systems. What does this do to our relationship with our creative labor, with other human beings, and with our aesthetic leanings? In this workshop, we explore the possibilities of human-centered, humanistic approaches to instrument building, tool design, and audiovisual performance, prioritizing human interaction and sensible curation in the creative process with AI. Our approach seeks to foreground the value of the human artistic process over any lens that views tools, products, or technical novelty as ends in themselves.

In a broader sense, one might ask whether audiovisual performance is a "problem" that needs to be "solved." In this workshop, we resist the notion that AI tools will "solve" anything about the creative process; rather, we suggest that they may provide new possibilities for artists to synthesize their work via radical new combinations of (multi)media.

Participants are also encouraged to incorporate any other external tools that may enhance the workshop experience and align with its objectives.

Besides acquiring skills with a variety of software tools, learning about creative applications of machine learning, and developing a multimodal understanding of data and its communication, the most important educational outcome of this workshop is exposure to a creative mindset that centers process and affords a mode of creative questioning in the use of generative tools.

The first hour and 15 minutes of this event are lecture-style and open to both in-person and Zoom participants. After 11:15 AM, the event will be open to in-person attendees only.

Attendees, please bring your own computers and headphones. AirPods will not work for the workshop.

Event Contact
Annie Benisch
abenisch@stanford.edu
Related
  • Ge Wang
    Associate Professor of Music and Associate Professor, by courtesy, of Computer Science, Stanford | Associate Director and Senior Fellow, Stanford HAI

Related Events

Seminar
Inside the 2026 AI Index Report
May 20, 2026, 12:00 PM - 1:15 PM

The AI Index, currently in its ninth year, tracks, collates, distills, and visualizes data relating to artificial intelligence.


Seminar
Juan Sebastián Gómez-Cañón | Challenges and Opportunities for Human-Centered Music Emotion Recognition
June 3, 2026, 12:00 PM - 1:15 PM

Music is intertwined with human emotion, memory, and identity, making it a powerful medium for affective experience and regulation.


Event
Ashesh Rambachan | From Next-Token Prediction to Automatic Induction of Automata
April 13, 2026, 12:00 PM - 1:00 PM

Sequence data is ubiquitous in economics — job histories in labor economics, diagnosis and treatment sequences in health economics, strategic interactions in game theory. Generative sequence models can learn to predict these sequences well, but their complexity makes it hard to extract interpretable economic insights from their predictions.
