Expert Roundtable: Five Tech Issues Facing the Next Administration
How might artificial intelligence and machine learning impact nuclear stability among the big powers? Is there an appetite in the United States for regulating the digital world? How should reporters cover disinformation campaigns? What can we do to better coordinate healthcare data nationwide?
Stanford experts addressed these questions and others at a recent media roundtable sponsored by the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and the Stanford Program on Geopolitics, Technology, and Governance, and attended by journalists from major publications and national media outlets including The New York Times, Reuters, The Economist, the Financial Times, and Wired. Here are some of the takeaways.
AI and Nuclear Stability
The United States, China, and Russia are poised to integrate AI and machine learning into their military and national intelligence systems, says Colin Kahl, co-director of the Center for International Security and Cooperation at the Freeman Spogli Institute for International Studies at Stanford. “This has potentially profound implications for nuclear stability and the prospect for great power conflict,” he says.
The United States sees AI as a key part of its defense strategy, while China views it as a way to technologically leapfrog the U.S. and expand the military capabilities of the People’s Liberation Army. Meanwhile, Kahl says, Russia lags behind the U.S. and China in AI development, although Russian President Vladimir Putin has said that “whoever becomes the leader in this sphere [AI] will become the ruler of the world.”
The concept of mutually assured destruction, in which two sides in a nuclear war understand that they would annihilate one another if they were to deploy nuclear weapons, has effectively prevented nuclear war for decades, Kahl says. He is concerned that military applications of AI may ultimately undermine the superpowers’ sense that any nuclear attack would prove suicidal. For example, AI and machine learning could make it easier to fuse data from increasingly ubiquitous sensors to produce what Kahl calls a “machine-readable battlefield.” Such a deep knowledge base might make it possible to undermine an enemy’s ability to respond to a first strike. Military communication systems may also be vulnerable to AI-enabled cyberattacks, which could likewise prevent an enemy from responding to a first strike. And there are pressures in the U.S. to integrate AI into nuclear deployment systems themselves to ensure a prompt response to a nuclear attack, raising concerns about mistakes and accidents.
While these threats to nuclear stability are not imminent, Kahl says, “they are emerging, and we should think very seriously about them.”
Regulating the Digital World
Unlike the European Union, which has the General Data Protection Regulation (GDPR), the United States has shown a “lack of appetite to regulate for certain democratic principles when it comes to the digital world,” says Marietje Schaake, a Stanford HAI International Policy Fellow, the international policy director at Stanford University’s Cyber Policy Center, and a Dutch politician who served as a member of the European Parliament from 2009 to 2019. Should that change after the coming presidential election, she hopes the U.S. will base any new system of regulations on democratic ideals rather than addressing each innovation as it arrives. “A principle-based set of regulations will be more helpful and more sustainable, rather than running after the latest iteration of any kind of technology,” she says.
Such principles include nondiscrimination, fairness, privacy, transparency and access to information, the integrity of the democratic process, human rights, security, justice, accountability, and trust (including how to deal with disinformation). “These principles need regulations around them to be more resilient,” Schaake says.
Disinformation – and Responsibly Reporting It
Heading into the 2020 election, Andy Grotto, the William J. Perry International Security Fellow at Stanford’s Cyber Policy Center and a research fellow at the Hoover Institution, and Janine Zacharia, a visiting lecturer in the Department of Communication at Stanford University, prepared a report titled How to Responsibly Report on Hacks and Disinformation and a playbook itemizing 10 guidelines for newsrooms to follow. There is even a flow chart to help newsrooms respond appropriately when a disinformation campaign is spreading rapidly through social media.
“Once disinformation is spread so that it’s newsworthy, the question is how to report on it so that it doesn’t do the bad actor’s job for them in terms of spreading the false information,” Grotto says. “News organizations need a plan for dealing with this stuff. Waiting until the heat of the moment is too late.”
Their work sprang from initial conversations among a group of psychologists, political scientists, lawyers, communications experts, and others from across Stanford who came together to form the Information Warfare Working Group. From those discussions, it became clear that mainstream media organizations still have a significant impact on people’s beliefs. “It’s not just a social media thing,” Grotto says. “There’s still this giant part of our information ecosystem that has a huge role to play here.”
Grotto and Zacharia took their proposal on the road to The New York Times, ABC, NPR, and others. “We haven’t yet encountered a news organization that had a plan in place,” Zacharia says. She and Grotto advised organizations to assume they are being targeted by adversaries. “Think about whether you’re being tricked or used,” she says. “Amp up your skepticism. Build your organization’s muscle for determining the origin and nature of viral information.”
Coordinated Healthcare Data
“The onset of the COVID-19 pandemic has created a tension between medical record privacy and the need for an integrated public health surveillance system,” says Russ Altman, the Kenneth Fong Professor in the School of Engineering and a professor of bioengineering, genetics, medicine, and biomedical data science, and a Stanford HAI associate director. “For any rational leader who is trying to combat the pandemic, this has to be addressed.”
A new federal administration will be ethically obliged to figure out how to put the country in a better position for the next pandemic, be it COVID-21 or -23, Altman says. Integrating healthcare data nationwide may seem wise as a public health matter, he says, but it will have important repercussions for patient privacy. Gathering only the data needed for public health emergencies will be nearly impossible. “It’s very hard to isolate data for a single purpose,” Altman says. Moreover, the public health mission goes beyond COVID-19. Any new system would need to be designed to track other illnesses as well, such as influenza and chronic diseases. And where would it stop? “How far into the long tail of diseases should this public health infrastructure be applied?” Altman wonders. At what point are public health agencies going too far if they contact citizens with information about health problems that are not emergencies? “There is no coherent vision of how this should go,” he says.
Funding a National Research Cloud
The federal government plays a key role in funding research by universities, governmental agencies, and industry. But when the cost of research infrastructure is very high, the government occasionally steps in to provide that as well. For example, the federal government has funded large instruments for physics research, including the Stanford Linear Accelerator and similar tools at Fermilab. Now, HAI leaders are making the case for analogous federal funding for a National Research Cloud (NRC). It would offer computing resources and large datasets for use by universities all across the country. “AI research is getting to the point where the cost of computing and the need for very large datasets are putting certain types of research beyond the reach of individual universities,” says HAI Denning Family Co-Director John Etchemendy. “We think the government can provide that infrastructure in the form of the National Research Cloud.”
In addition to enabling certain avenues of AI research that require large-scale computing or large-scale datasets for training, the NRC would also democratize computing capability across the country, Etchemendy says. “We need to have students taught how to use advanced AI in Kansas, Nebraska, and Ohio,” he says. “The NRC would provide the computing support that is needed to make that possible.”
Stanford HAI’s mission is to advance AI research, education, policy, and practice to improve the human condition.