How Do We Protect Children in the Age of AI?

Tools that enable teens to create deepfake nude images of each other are compromising child safety, and parents must get involved.
As students return to classrooms this fall, many teachers are concerned about emerging AI tools getting in the way of learning. But a more worrisome AI trend is developing: Older kids are beginning to use “undress” apps to create deepfake nudes of their peers. Beyond a few news stories of incidents in places like California and New Jersey, the prevalence of this phenomenon is unclear, but it does not appear to have overwhelmed schools just yet. That means now is the time for parents and schools to plan proactively to prevent and respond to this degrading and illegal use of AI.
HAI Policy Fellow Riana Pfefferkorn studies the proliferation and impact of AI-generated child sexual abuse material. In a May 2025 report, she and co-authors Shelby Grossman and Sunny Liu gathered insights from educators, platforms, law enforcement, legislators, and victims to assess the extent of the problem and how schools are handling the emerging risk.
“Although it’s early days and we don’t have an accurate view of how widespread the problem may be, most schools are not yet addressing the risks of AI-generated child sexual abuse materials with their students. When schools do experience an incident, their responses often make it worse for the victims,” Pfefferkorn says.
Easy Access, Devastating Consequences
Prior research has established that the proliferation of child sexual abuse material is a growing problem in the age of AI. A 2023 study by Stanford scholars raised awareness by examining the implications of highly realistic explicit content produced by generative machine learning models. That same year, a follow-up report documented the presence of known child sexual abuse images in a popular dataset used for training AI models. Building on this work, Pfefferkorn and her colleagues wanted to understand how schools, platforms, and law enforcement are handling the latest threat to child safety.
Unlike past general-purpose technologies that could be misused for illegal ends, so-called “nudify” or “undress” apps are purpose-built to let unskilled users make pornographic images from nothing more than a photo of a clothed person. You don’t need to know Photoshop or be a whiz at training an open-source AI model to create believable images that can emotionally traumatize the person depicted and damage their reputation. Kids can stumble across these tools through app stores, search engines, and ads on social media, and while they may not think of their conduct as cyberbullying or illegal child pornography, it has devastating consequences nonetheless.
“These apps do away with all the work previously required to create child sexual abuse material, so it’s shockingly easy for students to discover and use these tools against each other,” Pfefferkorn explains.
Mitigating the Damage
Communities have a few ways to mitigate the harm of deepfake nudes. Federal law requires technology platforms to report and remove child sexual abuse material when they find it on their services, whether it’s real or AI-generated, and companies appear to be complying, according to the Stanford report. Plus, a new federal law will soon require platforms to remove, upon the victim’s request, nudes (whether real or deepfake) that have been nonconsensually posted online. Victims also can take legal action against the person who created or shared deepfake nude images; however, the criminal justice system is unprepared for child offenders, and the Stanford report questions whether criminal consequences for children are appropriate.
Schools have recourse, too, as outlined in a recent HAI Policy Brief, Addressing AI-Generated Child Sexual Abuse Material: Opportunities for Educational Policy. They can suspend or expel perpetrators or refer them to a restorative justice program. But to date, few academic institutions appear to have established policies for managing this new type of risk. Pfefferkorn’s report finds that schools caught unprepared for a deepfake nude incident may misunderstand their legal obligations, and that a botched school response may exacerbate the victim’s suffering and undermine community trust.
Against this backdrop, Pfefferkorn concludes that the best way to stop deepfake nudes is prevention, not reaction. This means parents must get involved. She recommends they follow these safety tips:
Teach children about consent and that respecting bodily autonomy includes images of people, even synthetically created images of one’s likeness. Kids need to understand that “nudifying” someone’s picture isn’t funny; it’s harmful and can get them in big trouble.
Encourage students to speak up if they see this behavior happening to or around them. If a child doesn’t feel comfortable speaking directly to an adult or fears they will be blamed, many schools already have anonymous tip lines for students to alert authorities of cyberbullying behavior.
Think twice about sharing images of your kids on social media, where bad actors can find and manipulate them with undress tools. Use photo editing tools to cover your child’s face with an emoji, for example.
Ask school administrators what steps they are taking to raise awareness and mitigate the harmful effects of AI-generated child sexual abuse material. Schools will take action if enough parents voice their concerns.
Though we can’t erase nudify apps from the internet entirely, a combination of preventive parenting, school messaging, and regulation can reduce the likelihood of a young person discovering and using these tools. To all parties with a role to play in protecting child safety, Pfefferkorn says, “Let’s not normalize this behavior.”
Learn more about HAI Policy Fellow Riana Pfefferkorn's work on the proliferation and impact of AI-generated child sexual abuse material at this upcoming HAI seminar.

