Addressing AI-Generated Child Sexual Abuse Material: Opportunities for Educational Policy

This brief explores student misuse of AI-powered “nudify” apps to create child sexual abuse material and highlights gaps in school response and policy.
Key Takeaways
Most schools are not talking to students about the risks of AI-generated child sexual abuse material (CSAM), specifically via “nudify” apps; nor are they training educators on how to respond to incidents in which students make and circulate so-called “deepfake nudes” of other students.
While many states have recently criminalized AI CSAM, most of these laws do not address how schools should handle child offenders who create or share deepfake nudes.
To ensure schools respond proactively and appropriately, states should update mandated reporting and school discipline policies to clarify whether educators must report deepfake nude incidents, and consider explicitly defining such behavior as cyberbullying.
Criminalization is not a one-size-fits-all solution for minors; state responses to student-on-student AI CSAM incidents should prioritize behavioral interventions over punitive measures, grounded in child development, trauma-informed practices, and educational equity.
Executive Summary
Starting in 2023, researchers found that generative AI models were being misused to create sexually explicit images of children. AI-generated child sexual abuse material (CSAM) has become easier to create thanks to the proliferation of generative AI software programs that are commonly called “nudify,” “undress,” or “face-swapping” apps, which are purpose-built to let unskilled users make pornographic images. Some of those users are children themselves.
In our paper, “AI-Generated Child Sexual Abuse Material: Insights from Educators, Platforms, Law Enforcement, Legislators, and Victims,” we assess how several stakeholder groups are thinking about and responding to AI CSAM. Through 52 interviews conducted between mid-2024 and early 2025 and a review of documents from four public school districts, we find that the prevalence of AI CSAM in schools remains unclear but does not appear to be overwhelmingly high at present. Schools therefore have a chance to proactively prepare their AI CSAM prevention and response strategies.
The AI CSAM phenomenon is testing the existing legal regimes that govern various affected sectors of society, illuminating some gaps and ambiguities. While legislators in Congress and around the United States have taken action in recent years to address some aspects of the AI CSAM problem, opportunities for further regulation or clarification remain. In particular, there is a need for policymakers at the state level to decide what to do about children who create and disseminate AI CSAM of other children, and, relatedly, to elucidate schools’ obligations with respect to such incidents.
The AI CSAM Problem
AI image generation models are abused to create CSAM in several ways. Some AI-generated imagery depicts children who do not exist in real life, though the AI models used to create such material are commonly trained on actual abuse imagery. Another type of AI-generated CSAM involves real, identifiable children, such as known abuse victims from existing CSAM series, famous children (e.g., actors or influencers), or a child known to the person who generated the image. AI tools are used to modify an innocuous image of the child to appear as though the child is engaged in sexually explicit conduct. This type of CSAM is commonly referred to as “morphed” imagery.
The difficulty of making AI CSAM varies. Many mainstream generative AI platforms have committed to combating the abuse of their services for CSAM purposes. Creating bespoke AI-generated imagery depicting a specific child sex abuse scenario thus still entails some amount of technical know-how, such as prompt engineering or fine-tuning open-source models. By contrast, nudify apps, which are trained on datasets of pornographic imagery, take an uploaded photo of a clothed person (either snapped by the perpetrator, or sourced from a social media account, school website, etc.) and quickly return a realistic-looking but fake nude image.
Nudify apps enable those with no particular skills in AI or computer graphics to create so-called “deepfake nudes” or “deepfake porn”—rapidly and potentially of numerous individuals at scale, and typically without the depicted person’s consent. What’s more, nudify apps do not consistently prohibit the upload of images of underage individuals either in their terms of service or in practice. Their ease of use and lack of restrictions have made them an avenue for the expeditious creation of AI CSAM—including by users who are themselves underage.
Beginning in mid-2023, male students at several U.S. middle and high schools (both public and private) reportedly used AI to make deepfake nudes of their female classmates. A high-profile case in New Jersey was followed by widely reported incidents in Texas, Washington, Florida, Pennsylvania, and multiple incidents in Southern California. More recently, media have reported on additional incidents in Pennsylvania, Florida, and Iowa. There have also been several reported occurrences internationally since 2023.
It is not clear what these cases imply about the prevalence of student-on-student incidents involving deepfake nudes. Their geographic spread could be read as evidence of a widespread problem in schools. On the other hand, the number of reported incidents nationwide remains minuscule for a country with over 54 million schoolchildren.