Policy Brief

Preparing for the Age of Deepfakes and Disinformation

Date: November 01, 2020
Topics: Communications, Media; Privacy, Safety, Security
Read Paper
Abstract

This brief warns of the dangers of generative adversarial networks that can make realistic deepfakes, calling for comprehensive norms, regulations, and laws to counter AI-driven disinformation.

Key Takeaways

  • Generative Adversarial Networks (GANs) produce synthetic content by training algorithms against each other (a minimal sketch of this training loop follows these takeaways). They have beneficial applications in sectors ranging from fashion and entertainment to healthcare and transportation, but they can also produce media capable of fooling the best digital forensic tools.

  • We argue that creators of fake content are likely to maintain the upper hand over those investigating it, so new policy interventions will be needed to distinguish real human behavior from malicious synthetic content.

  • Policymakers need to think comprehensively about the actors involved and establish robust norms, regulations, and laws to meet the challenge of deepfakes and AI-enhanced disinformation.
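
To make the first takeaway concrete, here is a minimal, hypothetical sketch of a GAN's adversarial training loop, written in PyTorch (an assumption; the brief itself contains no code). A generator learns to mimic a toy one-dimensional distribution while a discriminator learns to flag its output as fake; deepfake systems apply the same two-player loop to images, audio, and video at vastly greater scale.

# Minimal GAN sketch (PyTorch): two networks trained against each other.
# Hypothetical toy example -- the "real" data is a 1-D Gaussian, not media.
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 0.5 + 2.0   # "real" samples ~ N(2, 0.5)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Train the discriminator to tell real samples from generated ones.
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()          # detach: don't update G here
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))       # G wants D to answer "real"
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())       # should drift toward 2.0

The adversarial structure is also why detection is hard, as the second takeaway notes: any fixed forensic test can, in principle, be folded into the discriminator and trained against.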

Executive Summary

Popular culture has envisioned societies of intelligent machines for generations, with Alan Turing notably foreseeing the need for a test to distinguish machines from humans in 1950. Now, advances in artificial intelligence promise to make creating convincing fake multimedia content, such as video, images, or audio, relatively easy for many. Unfortunately, this will include sophisticated bots with supercharged self-improvement abilities, capable of generating more dynamic fakes than anything seen before.

In our paper “How Relevant is the Turing Test in the Age of Sophisbots,” we argue that society is on the brink of AI-driven technology that can simulate many of the most important hallmarks of human behavior. As the variety and scale of these so-called “deepfakes” expand, they will likely simulate human behavior so effectively, and operate in so dynamic a manner, that they will increasingly pass Turing’s test.

The issue for policymakers is how to identify the right tools to reveal the use of such generative technology and how to develop the right regulatory framework to mitigate its negative impact. Regulators should be conversant in the latest technical developments, but they must also take steps to address the threat of malicious actors: fitting the technologies in question into broader regulatory structures, adopting legislative incentives for platforms to responsibly develop these powerful algorithms, and holding malicious actors accountable for harmful behavior.

Authors
  • Dan Boneh
  • Andrew J. Grotto
  • Patrick McDaniel
  • Nicolas Papernot

Related Publications

Testimony
Jen King's Testimony Before the U.S. House Committee on Energy and Commerce Oversight and Investigations Subcommittee
Jennifer King
Privacy, Safety, Security | Nov 18, 2025

In this testimony, presented to the U.S. House Committee on Energy and Commerce’s Subcommittee on Oversight and Investigations hearing titled “Innovation with Integrity: Examining the Risks and Benefits of AI Chatbots,” Jen King shares insights on data privacy concerns connected with the use of chatbots. She highlights opportunities for congressional action to protect chatbot users from related harms.

Policy Brief
Validating Claims About AI: A Policymaker’s Guide
Olawale Salaudeen, Anka Reuel, Angelina Wang, Sanmi Koyejo
Foundation Models; Privacy, Safety, Security | Sep 24, 2025

This brief proposes a practical validation framework to help policymakers separate legitimate claims about AI systems from unsupported claims.

Policy Brief
Addressing AI-Generated Child Sexual Abuse Material: Opportunities for Educational Policy
Riana Pfefferkorn
Privacy, Safety, Security; Education, Skills | Jul 21, 2025

This brief explores student misuse of AI-powered “nudify” apps to create child sexual abuse material and highlights gaps in school response and policy.

Issue Brief
Adverse Event Reporting for AI: Developing the Information Infrastructure Government Needs to Learn and Act
Lindsey A. Gailmard, Drew Spence, Daniel E. Ho
Regulation, Policy, Governance; Privacy, Safety, Security | Jun 30, 2025

This brief assesses the benefits of, and provides policy recommendations for, adverse event reporting systems for AI that report failures and harms post-deployment.