Riana Pfefferkorn | Stanford HAI

Riana Pfefferkorn

Policy Fellow, Stanford HAI

Riana Pfefferkorn is a policy fellow at Stanford HAI. A lawyer by training, Riana researches the law and policy implications of emerging technologies including AI. Her research spans topics in privacy and civil liberties, encryption policy, digital surveillance, cybersecurity, and online trust and safety. Her past work includes analyzing the legal implications and real-world impact of AI-generated child abuse material, predicting the impact of “deepfakes” on evidentiary proceedings in court, studying the system for reporting child exploitation online, and surveying online platforms’ use of “content-oblivious” trust and safety techniques, among other topics. Along with her HAI colleague Dr. Jennifer King, Riana is a 2026-2027 Non-Resident Fellow at the Center for Democracy and Technology.

Riana has trained congressional staffers and state-court judges on AI-related issues, testified to a committee of the Australian Parliament, and spoken at various legal and cybersecurity conferences, including Black Hat and DEF CON’s Crypto & Privacy Village. She appears frequently in the press, including the Washington Post, CNN, and NPR, and has written for publications including the New York Times, Scientific American, the Boston Review, Brookings, Lawfare, Tech Policy Press, and Just Security. A list of additional publications and other work product is available here.

Before joining HAI in the summer of 2024, Riana was a research scholar at the Stanford Internet Observatory, and before that, the Associate Director of Surveillance and Cybersecurity at the Stanford Center for Internet and Society, where she remains an affiliate (and occasionally blogs). Prior to joining Stanford in 2015, Riana was an associate in the Internet Strategy & Litigation group at the law firm of Wilson Sonsini Goodrich & Rosati, and a law clerk to the Honorable Bruce J. McGiverin of the U.S. District Court for the District of Puerto Rico. During law school, she interned for the Honorable Stephen Reinhardt of the U.S. Court of Appeals for the Ninth Circuit. Riana is a graduate of the University of Washington School of Law and Whitman College, and a member of the California and Washington State bars.


Latest Related to Riana Pfefferkorn

media mention

Musk's Grok AI Faces More Scrutiny After Generating Sexual Deepfake Images

PBS NewsHour
Privacy, Safety, Security
Regulation, Policy, Governance
Ethics, Equity, Inclusion
Jan 16

Elon Musk was forced to put restrictions on X and its AI chatbot, Grok, after its image generator sparked outrage around the world. Grok created non-consensual sexualized images, prompting some countries to ban the bot. Liz Landers discussed Grok's troubles with Riana Pfefferkorn of the Stanford Institute for Human-Centered Artificial Intelligence.

media mention

There’s One Easy Solution To The A.I. Porn Problem

The New York Times
Regulation, Policy, Governance
Generative AI
Jan 12

Riana Pfefferkorn, Policy Fellow at HAI, urges immediate Congressional hearings to scope a legal safe harbor for AI-generated child sexual abuse materials following a recent scandal with Grok's newest generative image features.

media mention

The Policy Implications Of Grok's 'Mass Digital Undressing Spree'

Tech Policy Press
Regulation, Policy, Governance
Generative AI
Jan 08

HAI Policy Fellow Riana Pfefferkorn discusses the policy implications of the “mass digital undressing spree,” in which the chatbot Grok responded to user prompts to remove the clothing from images of women and pose them in bikinis, and to create “sexualized images of children” and post them on X.

All Related

AI-Faked Cases Become Core Issue Irritating Overworked Judges
Bloomberg Law
Dec 29, 2025
media mention

As AI-hallucinated case citations flood the courts, judges have increased fines for attorneys who have cited fake cases. HAI Policy Fellow Riana Pfefferkorn hopes this will "make the firm sit up and pay better attention."

Generative AI
Law Enforcement and Justice
Riana Pfefferkorn | Student Misuse of AI-Powered “Undress” Apps
seminar
Dec 03, 2025, 12:00 PM - 1:15 PM

AI-generated child sexual abuse material (AI CSAM) carries unique harms. Schools have a chance to proactively prepare their AI CSAM prevention and response strategies.

Regulation, Policy, Governance
Privacy, Safety, Security
Generative AI
Our Racist, Terrifying Deepfake Future Is Here
Nature
Nov 03, 2025
media mention

“It connects back to my fear that the people with the fewest resources will be most affected by the downsides of AI,” says HAI Policy Fellow Riana Pfefferkorn in response to a viral AI-generated deepfake video.

Generative AI
Regulation, Policy, Governance
Law Enforcement and Justice
How Congress Could Stifle The Onslaught Of AI-Generated Child Sexual Abuse Material
Tech Policy Press
Sep 25, 2025
media mention

HAI Policy Fellow Riana Pfefferkorn advises on ways in which the United States Congress could move the needle on model safety regarding AI-generated CSAM.

Ethics, Equity, Inclusion
Privacy, Safety, Security
Regulation, Policy, Governance
The Trump FTC’s War On Porn Just Ensured That Accused CSAM Offenders Will Walk Free
Techdirt
Sep 15, 2025
media mention

Stanford HAI Policy Fellow Riana Pfefferkorn discusses the complexities of the FTC's settlement with Aylo regarding prosecuting CSAM offenders.

Regulation, Policy, Governance
The FTC’s Settlement With Aylo: This Isn’t Really About Fighting CSAM And Revenge Porn
Techdirt
Sep 15, 2025
media mention

Stanford HAI Policy Fellow Riana Pfefferkorn explores the ramifications of the FTC's settlement with Aylo regarding CSAM and revenge porn.

Regulation, Policy, Governance
How To Keep Your Private Messages Truly Private
CNN
Sep 09, 2025
media mention

HAI Policy Fellow Riana Pfefferkorn discusses scenarios when third parties might be able to access personal messaging data and how to keep those forms of digital communication private.

Privacy, Safety, Security
The Age-Checked Internet Has Arrived
Wired
Jul 25, 2025
media mention

Stanford HAI Policy Fellow Riana Pfefferkorn speaks about the implications of laws related to age-checked access to the internet.

Privacy, Safety, Security
Addressing AI-Generated Child Sexual Abuse Material: Opportunities for Educational Policy
Riana Pfefferkorn
Quick Read, Jul 21, 2025
policy brief

This brief explores student misuse of AI-powered “nudify” apps to create child sexual abuse material and highlights gaps in school response and policy.

Privacy, Safety, Security
Education, Skills
Signal Isn’t Infallible, Despite Being One Of The Most Secure Encrypted Chat Apps
NBC News
Mar 25, 2025
media mention

HAI Policy Fellow Riana Pfefferkorn explains the different types of risk protection the private messaging app Signal can and cannot offer its users.

Privacy, Safety, Security
What Will Drive State AI Legislation In 2025?
Tech Policy Press
Jan 23, 2025
media mention

HAI Policy Fellow Riana Pfefferkorn gives insight into state legislative decisions on AI going into 2025.

Government, Public Administration
Why AI Use In Child Sexual Abuse Material Might Be More Prevalent Than You Think
ABC10
Jan 16, 2025
media mention

HAI Policy Fellow Riana Pfefferkorn speaks about a new law, now in effect, that criminalizes the creation, distribution, and possession of AI-generated child sexual abuse material.

Regulation, Policy, Governance
Government, Public Administration
Generative AI