Preparing for the Age of Deepfakes and Disinformation

This brief warns of the dangers of generative adversarial networks that can make realistic deepfakes, calling for comprehensive norms, regulations, and laws to counter AI-driven disinformation.
Key Takeaways
Generative Adversarial Networks (GANs) produce synthetic content by training algorithms against each other. They have beneficial applications in sectors ranging from fashion and entertainment to healthcare and transportation, but they can also produce media capable of fooling the best digital forensic tools (a minimal sketch of this adversarial training loop follows these takeaways).
We argue that creators of fake content are likely to maintain the upper hand over those investigating it, so new policy interventions will be needed to distinguish real human behavior from malicious synthetic content.
Policymakers need to think comprehensively about the actors involved and establish robust norms, regulations, and laws to meet the challenge of deepfakes and AI-enhanced disinformation.
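
To make the adversarial dynamic concrete, the sketch below trains a toy generator and discriminator against each other using PyTorch. The one-dimensional Gaussian "real" data, the tiny network sizes, and the hyperparameters are illustrative assumptions, not drawn from any production deepfake system; real deepfake generators use deep convolutional or transformer networks over images, audio, or video.

```python
# Minimal GAN sketch: a generator and a discriminator trained against each
# other on toy 1-D data. Illustrative assumptions only -- real deepfake
# systems operate on high-dimensional media, not scalars.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to a synthetic sample.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how likely a sample is to be real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "real" data: samples near 3.0
    fake = G(torch.randn(64, 8))           # synthetic data from noise

    # Discriminator step: learn to label real as 1, fake as 0.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real), torch.ones(64, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to make the discriminator label fakes as real.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# After training, generated samples should cluster near the real data.
print(G(torch.randn(5, 8)).detach().flatten())
```

The same feedback loop that makes GAN output convincing is what keeps investigators at a disadvantage: any detector a defender fields can, in principle, be folded back into training as a new discriminator for the generator to defeat.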
Executive Summary
Popular culture has envisioned societies of intelligent machines for generations, with Alan Turing notably foreseeing the need for a test to distinguish machines from humans in 1950. Now, advances in artificial intelligence promise to make creating convincing fake multimedia content, such as video, images, or audio, relatively easy for many. Unfortunately, this will include sophisticated, self-improving bots capable of generating more dynamic fakes than anything seen before.
In our paper “How Relevant is the Turing Test in the Age of Sophisbots,” we argue that society is on the brink of AI-driven technologies that can simulate many of the most important hallmarks of human behavior. As the variety and scale of these so-called “deepfakes” expand, they will likely simulate human behavior so effectively, and operate so dynamically, that they will increasingly pass Turing’s test.
The issue for policymakers is how to identify the right tools to reveal the use of such generative technologies and how to develop the right regulatory framework to mitigate their negative impact. Regulators should be conversant in the latest technical developments, but they must also address the threat of malicious actors by fitting the technologies in question into broader regulatory structures, adopting legislative incentives for platforms to develop these powerful algorithms responsibly, and holding malicious actors accountable for harmful behavior.