Policy Brief

Addressing AI-Generated Child Sexual Abuse Material: Opportunities for Educational Policy

Date
July 21, 2025
Topics
Privacy, Safety, Security
Education, Skills
Abstract

This brief explores student misuse of AI-powered “nudify” apps to create child sexual abuse material and highlights gaps in school response and policy.

Key Takeaways

  • Most schools are not talking to students about the risks of AI-generated child sexual abuse material (CSAM) created via “nudify” apps, nor are they training educators on how to respond to incidents of students making and circulating so-called “deepfake nudes” of other students.

  • While many states have recently criminalized AI CSAM, most fail to address how schools should establish appropriate frameworks for handling child offenders who create or share deepfake nudes.

  • To ensure schools respond proactively and appropriately, states should update mandated reporting and school discipline policies to clarify whether educators must report deepfake nude incidents, and consider explicitly defining such behavior as cyberbullying.

  • Criminalization is not a one-size-fits-all solution for minors; state responses to student-on-student AI CSAM incidents should prioritize behavioral interventions over punitive measures, grounded in child development principles, trauma-informed practices, and educational equity.

Executive Summary

Starting in 2023, researchers found that generative AI models were being misused to create sexually explicit images of children. AI-generated child sexual abuse material (CSAM) has become easier to create thanks to the proliferation of generative AI software programs that are commonly called “nudify,” “undress,” or “face-swapping” apps, which are purpose-built to let unskilled users make pornographic images. Some of those users are children themselves.

In our paper, “AI-Generated Child Sexual Abuse Material: Insights from Educators, Platforms, Law Enforcement, Legislators, and Victims,” we assess how several stakeholder groups are thinking about and responding to AI CSAM. Through 52 interviews conducted between mid-2024 and early 2025 and a review of documents from four public school districts, we find that the prevalence of AI CSAM in schools remains unclear but does not appear overwhelmingly high at present. Schools therefore have a chance to proactively prepare their AI CSAM prevention and response strategies.

The AI CSAM phenomenon is testing the existing legal regimes that govern various affected sectors of society, illuminating some gaps and ambiguities. While legislators in Congress and around the United States have taken action in recent years to address some aspects of the AI CSAM problem, opportunities for further regulation or clarification remain. In particular, there is a need for policymakers at the state level to decide what to do about children who create and disseminate AI CSAM of other children, and, relatedly, to elucidate schools’ obligations with respect to such incidents.

The AI CSAM Problem

AI image generation models are abused to create CSAM in several ways. Some AI-generated imagery depicts children who do not exist in real life, though the AI models used to create such material are commonly trained on actual abuse imagery. Another type of AI-generated CSAM involves real, identifiable children, such as known abuse victims from existing CSAM series, famous children (e.g., actors or influencers), or a child known to the person who generated the image. AI tools are used to modify an innocuous image of the child to appear as though the child is engaged in sexually explicit conduct. This type of CSAM is commonly referred to as “morphed” imagery.

The difficulty of making AI CSAM varies. Many mainstream generative AI platforms have committed to combating the abuse of their services for CSAM purposes. Creating bespoke AI-generated imagery depicting a specific child sex abuse scenario thus still entails some amount of technical know-how, such as prompt engineering or fine-tuning open-source models. By contrast, nudify apps, which are trained on datasets of pornographic imagery, take an uploaded photo of a clothed person (either snapped by the perpetrator, or sourced from a social media account, school website, etc.) and quickly return a realistic-looking but fake nude image.

Nudify apps enable those with no particular skills in AI or computer graphics to create so-called “deepfake nudes” or “deepfake porn”—rapidly and potentially of numerous individuals at scale, and typically without the depicted person’s consent. What’s more, nudify apps do not consistently prohibit the upload of images of underage individuals either in their terms of service or in practice. Their ease of use and lack of restrictions have made them an avenue for the expeditious creation of AI CSAM—including by users who are themselves underage.

Beginning in mid-2023, male students at several U.S. middle and high schools (both public and private) reportedly used AI to make deepfake nudes of their female classmates. A high-profile case in New Jersey was followed by widely reported incidents in Texas, Washington, Florida, and Pennsylvania, as well as multiple incidents in Southern California. More recently, media have reported on additional incidents in Pennsylvania, Florida, and Iowa. There have also been several reported occurrences internationally since 2023.

It is not clear what these cases imply about the prevalence of student-on-student incidents involving deepfake nudes. That they are so spread out geographically could be read to indicate a widespread problem in schools. On the other hand, the number of reported incidents nationwide remains minuscule for a country with over 54 million schoolchildren.

Authors
  • Riana Pfefferkorn

Related Publications

Jen King's Testimony Before the U.S. House Committee on Energy and Commerce Oversight and Investigations Subcommittee
Jennifer King
Nov 18, 2025
Testimony

In this testimony presented to the U.S. House Committee on Energy and Commerce’s Subcommittee on Oversight and Investigations hearing titled “Innovation with Integrity: Examining the Risks and Benefits of AI Chatbots,” Jen King shares insights on data privacy concerns connected with the use of chatbots. She highlights opportunities for congressional action to protect chatbot users from related harms.

Validating Claims About AI: A Policymaker’s Guide
Olawale Salaudeen, Anka Reuel, Angelina Wang, Sanmi Koyejo
Sep 24, 2025
Policy Brief

This brief proposes a practical validation framework to help policymakers separate legitimate claims about AI systems from unsupported claims.

Response to the Department of Education’s Request for Information on AI in Education
Victor R. Lee, Vanessa Parli, Isabelle Hau, Patrick Hynes, Daniel Zhang
Aug 20, 2025
Response to Request

Stanford scholars respond to a federal RFI on advancing AI in education, urging policymakers to anchor their approach in proven research.

Adverse Event Reporting for AI: Developing the Information Infrastructure Government Needs to Learn and Act
Lindsey A. Gailmard, Drew Spence, Daniel E. Ho
Jun 30, 2025
Issue Brief

This brief assesses the benefits of adverse event reporting systems for AI, which document failures and harms post-deployment, and provides policy recommendations for establishing them.
