HAI Policy Briefs
November 2020
Preparing for the Age of Deepfakes and Disinformation
Popular culture has envisioned societies of intelligent machines for generations, with Alan Turing notably foreseeing the need for a test to distinguish machines from humans in 1950. Now, advances in artificial intelligence promise to make creating convincing fake multimedia content, such as video, images, or audio, relatively easy for many. Unfortunately, this will include sophisticated bots with supercharged self-improvement abilities, capable of generating more dynamic fakes than anything seen before.
Key Takeaways
➜ Generative Adversarial Networks (GANs) produce synthetic content by training two algorithms against each other: a generator that creates fakes and a discriminator that tries to detect them. They have beneficial applications in sectors ranging from fashion and entertainment to healthcare and transportation, but they can also produce media capable of fooling the best digital forensic tools (a minimal sketch of this adversarial training loop follows this list).
➜ We argue that creators of fake content are likely to maintain the upper hand over those investigating it, so new policy interventions will be needed to distinguish real human behavior from malicious synthetic content.
➜ Policymakers need to think comprehensively about the actors involved and establish robust norms, regulations, and laws to meet the challenge of deepfakes and AI-enhanced disinformation.
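The following is a minimal sketch of the adversarial training dynamic described above, written in PyTorch on toy one-dimensional data. The architecture, hyperparameters, and data distribution are illustrative assumptions for exposition only; real deepfake generators operate on images, video, or audio at far larger scale.

```python
# Minimal GAN sketch: a generator and a discriminator trained against each other.
# Toy assumption: "real" data are samples from a 1-D Gaussian centered at 3.
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM = 8, 1

# Generator: maps random noise to synthetic samples.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 32), nn.ReLU(),
    nn.Linear(32, DATA_DIM),
)

# Discriminator: estimates the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for real training data (samples from N(3, 1)).
    return torch.randn(n, DATA_DIM) + 3.0

for step in range(2000):
    # Train the discriminator to tell real samples from generated fakes.
    real = real_batch()
    fake = generator(torch.randn(real.size(0), NOISE_DIM)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(real.size(0), 1)) + \
             loss_fn(discriminator(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to produce fakes the discriminator labels as real.
    fake = generator(torch.randn(64, NOISE_DIM))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

As the two networks improve together, the generator's output becomes progressively harder to distinguish from real data, which is the same dynamic that makes high-fidelity synthetic media difficult for forensic tools to flag.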
Authors
Dan Boneh - Stanford University
Andrew J. Grotto - Stanford University
Patrick McDaniel - Penn State University
Nicolas Papernot - University of Toronto