Lindsey Felt: Art, AI, and Disability Futures | Stanford HAI

Seminar

Lindsey Felt: Art, AI, and Disability Futures

Status
Past
Date
Wednesday, November 30, 2022 10:00 AM - 11:00 AM PST
Location
Hybrid 
Event Contact
Madeleine Wright
mwright7@stanford.edu


HAI Weekly Seminar

Art, AI, and Disability Futures

In this talk, Lindsey D. Felt introduces a framework that locates disability innovation, artistry, and crip politics as central to the development of AI and technology. From M Eifler’s Prosthetic Memory to Paola Prestini’s Sensorium Ex, these examples of AI art highlight the erasures of disability from training data and refuse AI’s optimization against disability. Historically, technologies have been designed to diagnose, rehabilitate, normalize, and even cure disabilities. Though this approach has arguably improved the quality of life for many disabled people, it codes disability as an “undesirable” and “outlier” trait, operating on the false premise of a “norm” that is not reflective of the human condition’s heterogeneity. Researchers have demonstrated how machine learning tools are mirroring this trajectory, from autonomous vehicles that don’t recognize wheelchair users, to Natural Language Processing models that classify texts mentioning disability as more “toxic.” These biases are equally important to consider alongside racial and gender inequities for their wide-ranging social implications.

In conversation with artist-technologist M Eifler, Felt discusses approaches to human-centered AI art that are designed for self-care, mutual aid, and social justice-informed world-building. Felt and Eifler consider Prosthetic Memory, a digital memory bank created by Eifler that uses machine learning to retrieve self-recorded videos, helping the artist navigate their memory dysregulation. Sensorium Ex, an experimental AI opera that introduces a new composite voice from an algorithm trained on non-normative speech patterns, similarly models the possibilities for a non-ableist AI. These works reflect the yearning for what scholar Alison Kafer calls “crip futurity,” a future where disabled people’s experiences, practices, stories, and ways of knowing are valued.

Slides for this Presentation

Lindsey Felt

Leonardo CripTech Incubator Co-founder and Co-director; Lecturer in Program in Writing and Rhetoric, Stanford University
