Riana Pfefferkorn: At the Intersection of Technology and Civil Liberties

Stanford HAI’s new Policy Fellow will study AI’s implications for privacy and safety, and explore how we can build rights-respecting artificial intelligence.



Riana Pfefferkorn joins Stanford HAI as a policy fellow focused on generative AI and online safety, as well as court evidence and trust. 

How can AI governance help protect people’s rights while mitigating AI’s harmful uses? Riana Pfefferkorn, a new policy fellow at the Stanford Institute for Human-Centered AI, studies this critical question. Since she came to Stanford in 2015, her research has covered a range of topics, including governments’ approaches to encryption and digital surveillance, generative AI and online safety, and court evidence and trust.

Pfefferkorn, who joins Stanford HAI after stints at the Stanford Internet Observatory and Stanford Law School’s Center for Internet and Society, brings a blend of legal expertise and commitment to the public interest: In prior roles she advised startups and represented major tech companies as an associate at the law firm Wilson Sonsini Goodrich & Rosati, and she clerked for a federal judge.

Here she describes her most cited work, her plans at Stanford HAI, and what every policymaker should ask themselves before writing a new bill.

What will your role at Stanford HAI entail?

My role will continue to involve bringing law and policy analysis to the social issues raised by emerging technologies. When I first started out at Stanford Law School, my focus was on encryption policy, which remains a live issue, as well as cybersecurity and digital surveillance. Those topics are just as salient when AI is added to the mix. For example, I plan to explore the privacy implications of moving AI processing on-device, particularly with respect to communications encryption.

One of my key interests is understanding how AI might be leveraged for greater surveillance and how we can fend off more privacy-intrusive applications of AI. Additionally, my work at the Internet Observatory focused on abusive uses of AI, particularly in the context of court evidence and child sexual abuse material (CSAM). I want to explore how we can regulate AI in ways that respect civil liberties while mitigating its harmful uses.

Tell us about your background.

I don’t have a technical or computer science background, which I think helps me explain complex concepts to the general public. I trained as a lawyer with a focus on technology and civil liberties, and spent several years at Wilson Sonsini working on internet law, consumer privacy cases, and Section 230 issues. This experience has given me insight into both counseling and litigation, which is invaluable for understanding the implications of new technologies.

What are some of your notable achievements?

I was one of the first commentators to write about the coming impact of deepfakes on evidentiary proceedings in the courts. My 2020 law journal article has been widely cited and has helped judges and litigators prepare to handle deepfakes. Briefly, the courts have rules for authenticating evidence, rules that are the product of hundreds of years of people attempting to pass off forgeries as evidence. We had the same issue with Photoshop in the ’90s. Deepfakes are just a new flavor of a very old problem. So my argument is that we already have a framework for dealing with this phenomenon and don’t need to change the authentication rules in light of new technologies for making realistic deepfakes.

My 2020 article also predicted a strong temptation to claim that real but damning evidence is fake, a phenomenon known as “the liar’s dividend.” That has already happened in a wrongful-death case against Tesla and in one of the January 6 cases.

Another significant and timely work was my paper on AI-generated CSAM, published in February. It has reached a broad audience, including folks at the Department of Justice, the Federal Trade Commission, and the White House Office of Science and Technology Policy. In that paper, I predicted that prosecutors would use federal obscenity law to prosecute people who create this material. That has already come to pass in the federal indictment of a Wisconsin man in May.

At a high level, how should policymakers be thinking about regulating these technologies?

It depends on the context of the work. When it comes to CSAM issues, it’s useful to give policymakers an analysis of the constitutional constraints, given America’s robust free speech protections: what is feasible to regulate in this space, and what laws are already on the books. The first question should always be: Is there an existing law that can be applied in this field? Do you really need a new law, or can you use the law as it exists today? I believe in “future-proofing” statutes and regulations by writing them in general (yet clear!) language, so that they can be applied flexibly to new technologies that haven’t even been invented yet.
