Amy Zegart has spent much of her career studying the interplay between new technology and intelligence gathering.
A senior fellow at Stanford’s Freeman Spogli Institute for International Studies and Hoover Institution, as well as chair of the HAI Steering Committee on International Security, she has written extensively about how artificial intelligence and torrents of new open-source information are upending traditional spycraft.
In the new “AI era,” she says, sophisticated intelligence can come from almost anywhere — armchair researchers, private technology companies, commercial satellites and ordinary citizens who livestream on Facebook.
Zegart served on a task force of top experts who warned that the U.S. intelligence community — a collection of 18 spy agencies across the government — “enters 2021 flatly behind the technology curve.”
Published earlier this year by the Center for Strategic and International Studies, the report warned that U.S. intelligence agencies have become “risk-averse” and too wedded to traditional espionage. It urged the agencies to make far more use of publicly accessible open-source data and to use AI in making sense out of it.
“The primary obstacle to intelligence innovation is not technology, it is culture,” the task force wrote. “Many of these problems stem from an [intelligence community] culture that is resistant to change, reliant on traditional tradecraft and — ironically, given the popular perception — averse to risk-taking, particularly to acquiring and adopting new technologies.”
We recently spoke with Zegart about those ideas and about the melding of artificial intelligence with espionage. The interview below has been edited for brevity.
How are artificial intelligence and the fire hose of incoming data upending the traditional business of intelligence gathering and analysis?
New technologies are driving what I call the “Five Mores” — five things that are changing the intelligence business in dramatic ways. The first is more threats, more types of nefarious actors who can threaten across vast geographic distances in cyberspace. From the dawn of history until the invention of the internet in the 1960s, two things provided security: power and geography. That’s no longer true. In cyberspace, anyone can threaten across borders without firing a shot because good and bad neighborhoods are all connected online. There are no oceans or mountain ranges protecting us. At the same time, power isn’t what it used to be. The U.S. is the most powerful actor in cyberspace and also the most vulnerable actor in cyberspace because we are so digitally connected. The result is that American intelligence officials have to understand and anticipate a wide array of threats from weak countries and non-state actors, not just powerful countries like Russia and China.
The second “more” is data. Thanks to new technologies, the amount of data on Earth is doubling every 24 months. It’s an astounding amount, and much of it is from open sources that are publicly available. It used to be that intelligence agencies had to hunt for secrets, but now they’re drowning in data.
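A back-of-the-envelope sketch shows how quickly that doubling rate compounds. The starting figure of 64 zettabytes below is purely hypothetical; only the two-year doubling period comes from the interview:

```python
# Illustrative arithmetic only: project global data volume assuming it
# doubles every 24 months, the rate Zegart cites. The 64 ZB starting
# point is a hypothetical placeholder, not a sourced figure.
def projected_data(initial_zettabytes: float, years: int,
                   doubling_years: float = 2.0) -> float:
    """Return the projected volume after `years`, given a fixed doubling period."""
    return initial_zettabytes * 2 ** (years / doubling_years)

# A decade of doubling every two years is five doublings: a 32-fold increase.
print(projected_data(64, 10))  # 64 * 2**5 = 2048.0
```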
Traditionally, intelligence reports started with clandestine material and then sprinkled open-source information on top. Some people, and I count myself among them, argue it should now be the opposite: Start with the open-source intelligence and then see how it fits with what comes from clandestine sources. And the key to using open-source intelligence is AI.
The third “more” is more speed: Information is traveling at greater speeds, decision-making is at greater speeds, and we need intelligence insights much faster. During the Cuban missile crisis of 1962, President Kennedy had 13 days to deliberate in secret about what he would do after U-2 spy planes discovered Soviet missiles in Cuba. On 9/11, President George W. Bush had just 13 hours to weigh intelligence about who was responsible for that horrific attack and how the U.S. would respond. Today, decision time could be 13 minutes or less.
The fourth “more” is the expanding number of decision makers who need intelligence. Who counts as a decision maker today? It’s not just people with security clearances. It’s tech company leaders. It’s Twitter and Facebook and other companies that exercise more global influence than most governments. It’s voters getting public service announcements about foreign election interference. So the intelligence community needs to think about how it produces analysis for all these other decision makers outside of the U.S. government.
The fifth “more” is more competition, more competitors in the collection and analysis of intelligence. Intelligence is anybody’s business now. One example I like to use is the raid on Osama Bin Laden. The Pakistani military didn’t see U.S. forces coming, but a local man heard the helicopters and live-tweeted the raid as it happened. Anybody can be an intelligence collector or analyst today, whether they realize it or not. One challenge for intelligence agencies is figuring out how to harness the insights from this open-source world.
Is traditional human intelligence gathering — using spies and clandestine operations to pry loose secrets — becoming irrelevant?
Human intelligence will always be important, but machine learning can free up humans for tasks that they’re better at. Satellites and AI algorithms are good at counting the number of trucks on a bridge, but they can’t tell you what those trucks mean. You need humans to figure out the wishes, intentions, and desires of others. The less time that human analysts spend counting trucks on a bridge, the more time they will have to figure out what those trucks are doing and why.
There is a vast amount of open-source data, but you need artificial intelligence to sift through it. So imagine a new intelligence cycle where you begin by using open-source information to surface key issues, and then get human sources to dig deeper into them.
A lot of human analytical work right now is on mundane tasks that could be automated by artificial intelligence. Think about how much time it takes human analysts to locate surface-to-air missiles across China’s huge territory. An algorithm for analyzing satellite images can reduce the number of suspect sites, which frees up bandwidth for humans to do higher-level analytical thinking.
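As a rough sketch of that triage step — not any agency’s actual pipeline — imagine a model that scores image tiles for likely missile sites, with only tiles above a review threshold going to analysts. The tile IDs, scores, and threshold here are all invented for illustration:

```python
# Hypothetical sketch of AI-assisted triage: a detector scores each
# satellite image tile, and only high-scoring tiles reach human analysts.
# `tile_scores` stands in for a real model's output; nothing here is an
# actual intelligence-community system.
def triage(tile_scores: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Return tile IDs whose detector score meets the review threshold."""
    return sorted(t for t, score in tile_scores.items() if score >= threshold)

scores = {"tile_001": 0.12, "tile_002": 0.91, "tile_003": 0.85, "tile_004": 0.40}
print(triage(scores))  # ['tile_002', 'tile_003'] — analysts review 2 tiles, not 4
```

The point is the ratio: the machine does the counting, and human attention is spent only where the model flags something worth interpreting.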
One of the most intriguing ideas that the task force came up with is to have AI “red cells” — teams that use open-source information and AI and compete against human analysts. The idea is that red cells would help scrub human assumptions and sharpen thinking by surfacing alternative pieces of information or hypotheses. I might have my analysis, based on clandestine sources, saying what I think the Russians are doing. But now my analysis is put to a competing team that uses only open-source intelligence and AI, which troubleshoots my analysis or comes up with a different view. You get a richer competition of ideas, which should lead to better products.
Your report says the intelligence community is surprisingly risk-averse and resistant to change. It says the biggest obstacle to innovation isn’t technology but culture. Can you explain?
“Culture” is a big category. It includes institutional history, all the unwritten rules of “how we’ve done things around here.” Culture is shaped by capabilities that may have provided many benefits in the past, like those exquisite satellites that cost billions — it’s hard to get away from those. As one former intelligence officer put it, the sense is if a piece of information costs a trillion dollars to get, it must be worth a trillion dollars.
Culture also includes incentives. What are we rewarded for doing? You’re promoted if you do certain things in certain ways. You’re hired based on talents that may be less relevant today. A classic example is information sharing. Intelligence officials are notorious for keeping information “close-hold” rather than sharing it. There are good reasons why. But that “need to know” culture can also be debilitating for doing things like tracking suspected al Qaeda terrorists before 9/11. It’s not that intelligence officials don’t want to do the right thing, but there are powerful forces that cause agencies to resist change even when they know it’s needed.
Are you suggesting we don’t need those billion-dollar surveillance satellites anymore?
No, but I am saying there is a skyrocketing number of commercial satellites that already provide valuable intelligence insights. Earlier this year, SpaceX launched 143 small satellites on a single rocket, setting a world record. Increasingly, commercial satellites are offering better resolution — they can detect manhole covers and distinguish different models of cars from space.
And because there are constellations of these small satellites, some of which are the size of a shoebox, the “revisit” rate is much faster. They can fly over the same place on Earth many times a day, which means you can see changes happening on the ground. You can detect changes in traffic patterns, or what’s going in and out of a port.
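As a simplified illustration of why constellations matter: real revisit rates depend on orbital geometry, swath width, and latitude, but to a first approximation the interval between passes shrinks in proportion to the number of satellites covering a point. The two-passes-per-day figure below is an assumption, not a sourced number:

```python
# Simplified model: if one satellite passes over a site N times per day,
# k similar satellites in staggered orbits revisit roughly k * N times
# per day. Real coverage depends on orbital mechanics; this only shows
# the scaling intuition.
def revisits_per_day(satellites: int, passes_per_satellite: float = 2.0) -> float:
    return satellites * passes_per_satellite

def hours_between_revisits(satellites: int, passes_per_satellite: float = 2.0) -> float:
    return 24.0 / revisits_per_day(satellites, passes_per_satellite)

print(hours_between_revisits(1))   # 12.0 hours with a single satellite
print(hours_between_revisits(12))  # 1.0 hour with a dozen-satellite constellation
```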
In fact, there is a whole ecosystem of non-government nuclear threat analysts who do incredible work and use only commercially available imagery and machine learning tools.
Here’s a great example. On July 2 last year, a weather satellite picked up a fire in Iran at what looked like a construction shed. That image got onto Twitter. Within a few hours, two non-government nuclear analysts, one in Washington, D.C., and one in California, concluded the fire had actually been an explosion at Iran’s heavily guarded centrifuge facility in Natanz, which enriches uranium. It was worldwide news by that afternoon. By that night, Israeli Prime Minister Netanyahu was being asked whether Israel had sabotaged the plant. All of this came from open-source non-governmental intelligence — and it all happened in one day!
Your report mentions the challenges of disinformation, that foreign adversaries will increasingly be able to flood U.S. intelligence with, if you will, fake intelligence. It could become very hard for intelligence analysts to tell the difference between legitimate information and misdirection.
Wherever there’s information, there’s going to be deception. The more powerful the data, the more it’s going to come under attack. That’s the world we live in today. So data is a new battleground. I think that’s increasingly going to require the intelligence community to be a verifier of last resort. How do we know what’s true and what isn’t? We’re going to need experts in the intelligence community who can tell — including experts who understand data poisoning, where adversaries corrupt the data that algorithms learn from. It’s a spy-versus-spy battle.
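One very simple idea behind verification — purely illustrative, and far easier than real deception detection — is cross-source corroboration: accept a claim only when enough independent feeds agree on it. The feed values and agreement threshold below are invented for the example:

```python
from collections import Counter

# Illustrative only: corroborate a claim across independent feeds and
# flag it when agreement falls below a threshold. Real disinformation
# analysis is far harder; this just shows the cross-checking idea.
def corroborate(reports: list[str], min_agreement: float = 0.7) -> tuple[str, bool]:
    """Return the majority claim and whether it clears the agreement bar."""
    claim, count = Counter(reports).most_common(1)[0]
    return claim, count / len(reports) >= min_agreement

feeds = ["explosion", "explosion", "fire", "explosion", "explosion"]
print(corroborate(feeds))  # ('explosion', True) — 4 of 5 independent feeds agree
```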