
Event | Seminar

Riana Pfefferkorn | Student Misuse of AI-Powered “Undress” Apps

Status: Past
Date: Wednesday, December 3, 2025, 12:00 PM - 1:15 PM PST
Location: 353 Jane Stanford Way, Stanford, CA 94305 | Room 119
Topics: Regulation, Policy, Governance; Privacy, Safety, Security; Generative AI
Overview
Watch Event Recording


AI-generated child sexual abuse material (AI CSAM) carries unique harms. When generated from a photo of a clothed child, it can damage that child’s reputation and cause serious distress. AI CSAM has become easier to create thanks to the proliferation of generative AI software programs that are commonly called “nudify,” “undress,” or “face-swapping” apps, which are purpose-built to let unskilled users make pornographic images. Since 2023, multiple schools in the U.S. and elsewhere have experienced incidents where male students have victimized their female peers using these apps.

In our paper, “AI-Generated Child Sexual Abuse Material: Insights from Educators, Platforms, Law Enforcement, Legislators, and Victims,” we assess how educators, platforms, law enforcement, state legislators, and AI CSAM victims are thinking about and responding to AI CSAM. Through 52 interviews conducted between June 2024 and May 2025, along with a review of documents from four public school districts and of state legislation, we find that the prevalence of AI CSAM in schools remains unclear but does not appear overwhelmingly high at present. Schools thus have a chance to proactively prepare their AI CSAM prevention and response strategies.

Speaker
Riana Pfefferkorn
Policy Fellow, Stanford HAI
Event Contact
Stanford HAI
stanford-hai@stanford.edu
Related
  • How Do We Protect Children in the Age of AI?
    Nikki Goth Itoi | Sep 08 | News

    Tools that enable teens to create deepfake nude images of each other are compromising child safety, and parents must get involved.


Related Events

Eyck Freymann | AI and Strategic Stability: A Framework for U.S.–China Technology Competition
Seminar | May 27, 2026, 12:00 PM - 1:15 PM

Strategic stability exists when neither side thinks it can improve its strategic outcome by striking first.

Caroline Meinhardt, Thomas Mullaney, Juan N. Pava, and Diyi Yang | How Can AI Support Language Digitization and Digital Inclusion?
Seminar | Apr 15, 2026, 12:00 PM - 1:15 PM

What does digital inclusion look like in the age of AI? Over 6,000 of the world’s 7,000-plus living languages remain digitally disadvantaged.

AI+Science: Accelerating Discovery
Conference | May 5, 2026, 8:30 AM - 5:00 PM

AI+Science: Accelerating Discovery is an interdisciplinary conference bringing together researchers across physics, mathematics, chemistry, biology, neuroscience, and more to examine how AI is reshaping scientific discovery. Experts will separate hype from reality, spotlighting where AI is already enabling genuine breakthroughs and where its limits and risks remain.