
Riana Pfefferkorn: At the Intersection of Technology and Civil Liberties

Date
September 24, 2024
Topics
Law Enforcement and Justice
Privacy, Safety, Security
Regulation, Policy, Governance

Stanford HAI’s new Policy Fellow will study AI’s implications for privacy and safety, and explore how we can build rights-respecting artificial intelligence.

How can AI governance help protect people’s rights while mitigating AI’s harmful uses? Riana Pfefferkorn, a new policy fellow at the Stanford Institute for Human-Centered AI, studies this critical question. Since coming to Stanford in 2015, she has researched a range of topics, including governments’ approaches to encryption and digital surveillance, generative AI and online safety, and court evidence and trust.

Pfefferkorn, who joins Stanford HAI after stints at the Stanford Internet Observatory and Stanford Law’s Center for Internet and Society, brings a blend of legal expertise and commitment to the public interest: In prior roles, she advised startups and represented major tech companies as an associate at the law firm Wilson Sonsini Goodrich & Rosati, and she clerked for a federal judge.

Here she describes her most cited work, her plans at Stanford HAI, and what every policymaker should ask themselves before writing a new bill.

What will your role at Stanford HAI entail?

My role will continue to involve bringing law and policy analysis to social issues raised by emerging technologies. When I first started out at Stanford Law School, my focus was on encryption policy, which remains an issue, as well as cybersecurity and digital surveillance. Those topics are just as salient when AI is added into the mix. For example, I plan to explore the privacy implications of moving AI processing on-device, particularly with respect to communications encryption.

One of my key interests is understanding how AI might be leveraged for greater surveillance and how we can fend off more privacy-intrusive applications of AI. Additionally, my work at the Internet Observatory focused on the abusive uses of AI, particularly in the context of court evidence and child sexual abuse material (CSAM). I want to explore how we can regulate AI to respect civil liberties while mitigating its negative uses.

Tell us about your background.

I don’t have a technical or computer science background, which I think helps me explain complex concepts to the general public. I trained as a lawyer with a focus on technology and civil liberties, and spent several years at Wilson Sonsini working on internet law, consumer privacy cases, and Section 230 issues. This experience has given me insight into both counseling and litigation, which is invaluable for understanding the implications of new technologies.

What are some of your notable achievements?

I was one of the first commentators to write about the coming impact of deepfakes on evidentiary proceedings in the courts. My 2020 law journal article has been widely cited and has helped judges and litigators prepare to handle deepfakes. Briefly, the courts have rules for authenticating evidence, the product of hundreds of years of people attempting to bring forgeries as evidence. We had the same issue with Photoshop in the ‘90s. It's just a new flavor of a very old problem. So my argument is that we already have a framework for how to deal with this phenomenon and don’t need to change the authentication rules in light of new technologies for making realistic deepfakes. My 2020 article also predicted that there would be a big temptation (known as “the liar’s dividend”) to claim that real but very damning evidence is fake. That has already happened in a wrongful-death case against Tesla and in one of the January 6 cases.

Another significant and timely work was my paper on AI-generated CSAM, published in February. It has reached a broad audience, including folks at the Department of Justice, the Federal Trade Commission, and the White House Office of Science and Technology Policy. In that paper, I predicted that prosecutors would use federal obscenity law to prosecute people who create this material. That has already come to pass in the federal indictment of a Wisconsin man in May.

At a high level, how should policymakers be thinking about regulating these technologies?

This depends on the context of the work. When it comes to CSAM issues, it’s useful to give policymakers an analysis of the constitutional constraints, given America’s robust free speech protections: what is feasible to regulate in this space, and what laws are already on the books. The first question should always be, Is there an existing law that can be used in this field? Do you really need a new law, or can you use the law as it exists today? I believe in “future-proofing” statutes and regulations by writing them in general (yet clear!) language, so that they can be applied flexibly to new technologies that haven’t even been invented yet.


Authors
  • Shana Lynch

Related News

Smart Enough to Do Math, Dumb Enough to Fail: The Hunt for a Better AI Test
Andrew Myers
Feb 02, 2026
News

A Stanford HAI workshop brought together experts to develop new evaluation methods that assess AI's hidden capabilities, not just its test-taking performance.

Musk's Grok AI Faces More Scrutiny After Generating Sexual Deepfake Images
PBS NewsHour
Jan 16, 2026
Media Mention

Elon Musk was forced to put restrictions on X and its AI chatbot, Grok, after its image generator sparked outrage around the world. Grok created non-consensual sexualized images, prompting some countries to ban the bot. Liz Landers discussed Grok's troubles with Riana Pfefferkorn of the Stanford Institute for Human-Centered Artificial Intelligence.

Translating Centralized AI Principles Into Localized Practice
Dylan Walsh
Jan 13, 2026
News

Scholars develop a framework in collaboration with luxury goods multinational LVMH that lays out how large companies can flexibly deploy principles on the responsible use of AI across business units worldwide.