Riana Pfefferkorn | Student Misuse of AI-Powered “Undress” Apps | Stanford HAI
Seminar


Status: Past
Date: Wednesday, December 3, 2025, 12:00 PM - 1:15 PM PST
Location: 353 Jane Stanford Way, Stanford, CA 94305 | Room 119
Topics: Regulation, Policy, Governance; Privacy, Safety, Security; Generative AI
Overview
Watch Event Recording

AI-generated child sexual abuse material (AI CSAM) carries unique harms. When generated from a photo of a clothed child, it can damage that child’s reputation and cause serious distress. AI CSAM has become easier to create thanks to the proliferation of generative AI programs, commonly called “nudify,” “undress,” or “face-swapping” apps, that are purpose-built to let unskilled users make pornographic images. Since 2023, multiple schools in the U.S. and elsewhere have experienced incidents in which male students victimized their female peers using these apps.

In our paper, “AI-Generated Child Sexual Abuse Material: Insights from Educators, Platforms, Law Enforcement, Legislators, and Victims,” we assess how educators, platforms, law enforcement, state legislators, and AI CSAM victims are thinking about and responding to AI CSAM. Drawing on 52 interviews conducted between June 2024 and May 2025, along with a review of documents from four public school districts and of state legislation, we find that the prevalence of AI CSAM in schools remains unclear but does not appear overwhelmingly high at present. Schools thus have a chance to proactively prepare their AI CSAM prevention and response strategies.

Speaker
Riana Pfefferkorn
Policy Fellow, Stanford HAI
Event Contact
Stanford HAI
stanford-hai@stanford.edu
More from HAI and SDS seminars
  • Inside the 2026 AI Index Report | Stanford HAI
    Seminar | May 20, 2026, 12:00 PM - 1:15 PM

    The AI Index, currently in its ninth year, tracks, collates, distills, and visualizes data relating to artificial intelligence.

Related
  • How Do We Protect Children in the Age of AI?
    News | Nikki Goth Itoi | Sep 08

    Tools that enable teens to create deepfake nude images of each other are compromising child safety, and parents must get involved.


Related Events

Inside the 2026 AI Index Report | Stanford HAI
Seminar | May 20, 2026, 12:00 PM - 1:15 PM

The AI Index, currently in its ninth year, tracks, collates, distills, and visualizes data relating to artificial intelligence.

Eyck Freymann | AI and Strategic Stability: A Framework for U.S.–China Technology Competition
Seminar | May 27, 2026, 12:00 PM - 1:15 PM

Strategic stability exists when neither side thinks it can improve its strategic outcome by striking first.

Ashesh Rambachan | From Next-Token Prediction to Automatic Induction of Automata
Event | Apr 13, 2026, 12:00 PM - 1:00 PM

Sequence data is ubiquitous in economics — job histories in labor economics, diagnosis and treatment sequences in health economics, strategic interactions in game theory. Generative sequence models can learn to predict these sequences well, but their complexity makes it hard to extract interpretable economic insights from their predictions.