How Do We Protect Children in the Age of AI?

Date: September 08, 2025
Topics: Ethics, Equity, Inclusion; Privacy, Safety, Security

Tools that enable teens to create deepfake nude images of each other are compromising child safety, and parents must get involved.

As students return to classrooms this fall, many teachers are concerned about emerging AI tools getting in the way of learning. But a more worrisome AI trend is developing: Older kids are beginning to use “undress” apps to create deepfake nudes of their peers. Beyond a few news stories of incidents in places like California and New Jersey, the prevalence of this phenomenon is unclear, but it does not appear to have overwhelmed schools just yet. That means now is the time for parents and schools to plan proactively to prevent and respond to this degrading and illegal use of AI.

HAI Policy Fellow Riana Pfefferkorn studies the proliferation and impact of AI-generated child sexual abuse material. In a May 2025 report, she and co-authors Shelby Grossman and Sunny Liu gathered insights from educators, platforms, law enforcement, legislators, and victims to assess the extent of the problem and how schools are handling the emerging risk. 

“Although it’s early days and we don’t have an accurate view of how widespread the problem may be, most schools are not yet addressing the risks of AI-generated child sexual abuse materials with their students. When schools do experience an incident, their responses often make it worse for the victims,” Pfefferkorn says.

Easy Access, Devastating Consequences

Prior research has established that the proliferation of child sexual abuse material is a growing problem in the age of AI. A 2023 study by Stanford scholars raised awareness by examining the implications of highly realistic explicit content produced by generative machine learning models. That same year, a follow-up report analyzed the presence of known child sexual abuse images in a popular dataset used for training AI models. Building on this work, Pfefferkorn and her colleagues wanted to understand how schools, platforms, and law enforcement are handling the latest threat to child safety.

Unlike past technologies that could be used for illegal purposes, so-called “nudify” or “undress” apps are purpose-built to let unskilled users make pornographic images using only a photo of a clothed person. You don’t need to know Photoshop or be a whiz at training an open-source AI model to create believable images that can emotionally traumatize the person depicted and damage their reputation. Kids can stumble across these tools through app stores, search engines, and ads on social media, and while they may not conceive of their conduct as cyberbullying or illegal child pornography, it has devastating consequences nonetheless. 

“These apps do away with all the work previously required to create child sexual abuse material, so it’s shockingly easy for students to discover and use these tools against each other,” Pfefferkorn explains. 

Mitigating the Damage

Communities have a few ways to mitigate the harm of deepfake nudes. Federal law requires technology platforms to report and remove child sexual abuse material when they find it on their services, whether it’s real or AI-generated, and companies appear to be complying, according to the Stanford report. Plus, a new federal law will soon require platforms to remove, upon the victim’s request, nudes (whether real or deepfake) that have been nonconsensually posted online. Victims also can take legal action against the person who created or shared deepfake nude images; however, the criminal justice system is unprepared for child offenders, and the Stanford report questions whether criminal consequences for children are appropriate. 

Schools have recourse, too, as outlined in a recent HAI Policy Brief, Addressing AI-Generated Child Sexual Abuse Material: Opportunities for Educational Policy. They can suspend or expel perpetrators or refer them to a restorative justice program. But to date, few academic institutions appear to have established policies for managing this new type of risk. Pfefferkorn’s report finds that schools caught unprepared for a deepfake nude incident may misunderstand their legal obligations, and that a botched school response may exacerbate the victim’s suffering and undermine community trust.

Against this backdrop, Pfefferkorn concludes that the best way to stop deepfake nudes is prevention, not reaction. This means parents must get involved. She recommends they follow these safety tips:

  • Teach children about consent, and explain that respecting bodily autonomy extends to images of people, including synthetic images of someone’s likeness. Kids need to understand that “nudifying” someone’s picture isn’t funny; it’s harmful and can get them in big trouble.

  • Encourage students to speak up if they see this behavior happening to or around them. If a child doesn’t feel comfortable speaking directly to an adult or fears being blamed, many schools already have anonymous tip lines students can use to alert authorities to cyberbullying behavior.

  • Think twice about sharing images of your kids on social media, where bad actors can find and manipulate them with undress tools. If you do share, consider using photo editing tools to cover your child’s face with an emoji, for example.

  • Ask school administrators what steps they are taking to raise awareness and mitigate the harmful effects of AI-generated child sexual abuse material. Schools will take action if enough parents voice their concerns.

Though we can’t erase nudify apps from the internet entirely, a combination of preventive parenting, school messaging, and regulation can reduce the likelihood of a young person discovering and using these tools. To all parties with a role to play in protecting child safety, Pfefferkorn says, “Let’s not normalize this behavior.”

Learn more about HAI Policy Fellow Riana Pfefferkorn’s work on the proliferation and impact of AI-generated child sexual abuse material at this upcoming HAI seminar.

Contributor(s): Nikki Goth Itoi
Related
  • Riana Pfefferkorn | Student Misuse of AI-Powered “Undress” Apps
    Seminar | Dec 03, 2025, 12:00 PM - 1:15 PM

    AI-generated child sexual abuse material (AI CSAM) carries unique harms. Schools have a chance to proactively prepare their AI CSAM prevention and response strategies.


  • Riana Pfefferkorn: At the Intersection of Technology and Civil Liberties
    News | Shana Lynch | Sep 24

    Stanford HAI’s new Policy Fellow will study AI’s implications for privacy and safety, and explore how we can build rights-respecting artificial intelligence.
