Riana Pfefferkorn | Student Misuse of AI-Powered “Undress” Apps | Stanford HAI
Event | Seminar


Status
Past
Date
Wednesday, December 3, 2025, 12:00 PM - 1:15 PM PST
Location
353 Jane Stanford Way, Stanford, CA, 94305 | Room 119
Topics
Regulation, Policy, Governance
Privacy, Safety, Security
Generative AI
Overview
Watch Event Recording


AI-generated child sexual abuse material (AI CSAM) carries unique harms. When generated from a photo of a clothed child, it can damage that child’s reputation and cause serious distress. AI CSAM has become easier to create thanks to the proliferation of generative AI software programs that are commonly called “nudify,” “undress,” or “face-swapping” apps, which are purpose-built to let unskilled users make pornographic images. Since 2023, multiple schools in the U.S. and elsewhere have experienced incidents where male students have victimized their female peers using these apps.

In our paper, “AI-Generated Child Sexual Abuse Material: Insights from Educators, Platforms, Law Enforcement, Legislators, and Victims,” we assess how educators, platforms, law enforcement, state legislators, and AI CSAM victims are thinking about and responding to AI CSAM. Through 52 interviews conducted between June 2024 and May 2025, along with a review of documents from four public school districts and of state legislation, we find that the prevalence of AI CSAM in schools remains unclear but does not appear overwhelmingly high at present. Schools thus have a chance to proactively prepare their AI CSAM prevention and response strategies.

Speaker
Riana Pfefferkorn
Policy Fellow, Stanford HAI
Event Contact
Stanford HAI
stanford-hai@stanford.edu
More from HAI and SDS seminars
  • Hari Subramonyam | Learning by Creating: A Human-Centered Vision for AI in Education
    Seminar | Mar 11, 2026, 12:00 PM - 1:15 PM
Related
  • How Do We Protect Children in the Age of AI?
    Nikki Goth Itoi | News | Sep 08

    Tools that enable teens to create deepfake nude images of each other are compromising child safety, and parents must get involved.


Related Events

Hari Subramonyam | Learning by Creating: A Human-Centered Vision for AI in Education
Seminar | Mar 11, 2026, 12:00 PM - 1:15 PM
Zoë Hitzig | How People Use ChatGPT
Mar 09, 2026, 12:00 PM - 1:00 PM

Despite the rapid adoption of LLM chatbots, little is known about how they are used. We approach this question theoretically and empirically, modeling a user who chooses whether to complete a task herself, ask the chatbot for information that reduces decision noise, or delegate execution to the chatbot...
