
Offering AI Expertise to Policymakers on Both Sides of the Atlantic

An intensive AI policy lab offered by HAI and the European University Institute will help policymakers envision a democratic model of tech governance.

[Image: The EU and US flags.]

The new program will help regulators and policymakers understand how technology can impact issues ranging from antitrust law to human rights and disinformation. | Tobias Schwarz/Reuters

Democratic governments have been slow to set standards and rules around the deployment of digital technologies, says Marietje Schaake, international policy director at the Stanford Cyber Policy Center and international policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). Indeed, she says, there’s a vast void between China’s autocratic model of technology governance and the West’s tendency toward complete deference to privatized corporate power. The result: Democracy suffers.

“Policymakers in democratic nations need to play catch-up,” Schaake says. And that will require taking decisive and ambitious action to set principled rules of the road for all things digital. 

To that end, Schaake, who served for 10 years as a member of the European Parliament, and HAI have teamed up with Alexander Stubb, the former prime minister of Finland and the director of the School of Transnational Governance at the European University Institute (EUI), to launch a technology course designed specifically for policymakers and other public leaders.

“There’s a need to articulate a much more coherent policy vision. To say, this is what a democratic model of tech governance looks like,” Schaake says. “To turn the tide, we really need to begin with leadership.”

Read related story: When Artificial Agents Lie, Defame and Defraud, Who is to Blame?

 

To foster a cooperative vision of tech governance among democratic governments, the planned course will bring together 10 policymakers and public leaders from each side of the Atlantic. Schaake expects applicants from a variety of public policy areas – and they don’t need to be tech-savvy. Class members might include antitrust regulators, FTC or FCC commissioners or their EU equivalents, or elected members of Congress or the EU Parliament who are interested in improving their understanding of AI and the impact of technology on democracy.

A Shared Trans-Atlantic Tech Agenda

To some extent, the EU has made greater strides toward addressing digital disruption than has the United States. The EU already has the General Data Protection Regulation (GDPR), which addresses consumer data privacy issues. (In the United States, only California has passed a similar law.) And the EU Parliament is currently weighing a tandem set of laws – the Digital Services Act, which mainly deals with platforms, liability, and content moderation; and the Digital Markets Act, which deals with questions of market scale and abuse of market power. Nothing similar is even in the legislative pipeline in the United States.

Despite these differences, Schaake says democracies globally share common interests when it comes to digital disruption: redefining antitrust, protecting human rights, addressing national security, and handling mis- and disinformation (see sidebar). These shared interests should lead to a common, ambitious agenda.

Schaake and Stubb designed the policy lab to include both EU and US policymakers because there’s value in aligning tech policy across the EU and US. “These countries are economically and politically powerful,” she says. “They have a lot of leverage because companies want market access to their consumers.”

If the US and EU together set conditions for what companies can and cannot do in the digital sphere, with the goal of protecting citizens’ privacy, national security, and human rights, that will then help make democracy more robust in the face of digital disruptions, Schaake says. 

Read related story: Democracy and the Digital Transformation of Our Lives

 

Furthermore, an integrated, unified approach can close the gaps between policy and practice. All too often, bad actors exploit differences among governments’ rules. For example, when Schaake was serving in the EU Parliament, various EU member states were making slightly different decisions about when to block foreign direct investment on national security grounds. As a result, China invested in small companies that were producing discrete elements of strategically significant technologies in countries where it could get away with it. “Only after seeing the total picture did it become clear that this was a coordinated strategy to buy up certain important technologies,” she says. Achieving alignment within and between governments reduces the risk of such abuse.

Schaake says the policy lab will also encourage a more whole-of-government approach to governance generally, including governance of technology. Too often, trade, foreign policy, and human rights policy aren’t well aligned. For example, hacking and surveillance technology is exported to the very regimes being condemned for spying on journalists with US- or EU-made technologies. The result: Little or no progress is made on the human rights front. “There are huge gaps between what one arm of the government says and does and what the other arm of the government says and does,” Schaake says. Greater policy alignment around tech – both between and within governments – is needed.

AI Intensive Policy Lab

The first iteration of the course, which is called “Artificial Intelligence: Intensive Online Policy Lab,” is designed for the pandemic era. It will be offered at no cost to participants and will take place online over three days (May 10-12). 

The program’s speakers will focus on emerging AI technologies and their risks as well as major policy arenas involving AI. For example, participants will discuss how regulators might want to address AI’s effect on cyberwarfare and cybersecurity, labor markets, and the concentration of power in a few large companies. Class members will also learn about best practices for dealing with emerging technologies in novel ways, including adaptive regulation. And they will discuss the various ways that AI threatens democracy and human rights.

Lecturers include Schaake; Stanford HAI Denning Co-Director Fei-Fei Li; Michael McFaul, director of the Freeman Spogli Institute for International Studies at Stanford and former United States ambassador to Russia; John Allen, the president of the Brookings Institution; members of the European Commission; Stubb; and others with expertise or relevant policy experience from the EUI School of Transnational Governance. 

Once the pandemic is behind us, Schaake says, the course will be taught over two weeks – one at Stanford and one in Florence, Italy, where the EUI is based. “The in-person experience should be even more beneficial because it will offer an informal setting where policymakers from across the Atlantic can discuss their shared concerns,” she says. 

The Will to Act

Schaake hopes that public leaders who participate in the policy lab will come away with a deeper understanding of how technological disruption impacts democracies and how tech governance can be built on democratic principles. In addition, they will gain some sense of how to fill in the pieces of the policy puzzle in the areas of war and peace, competition and antitrust, human rights, and the democratic process.

“If I could snap my fingers to address the challenges of digital disruption, I would give more ambition and drive back to democratic governments to make sure they were in the lead in setting standards and rules, applying oversight, and ensuring access to information in a way that they really don’t right now.”

The policy lab might not instantly yield that result, but it is a start.


Three Areas EU/US Lawmakers Should Align On

Western democracies’ shared tech agenda includes such hot topics as antitrust, human rights, and disinformation. Here’s a sampler of how the policy lab will touch on these issues.

Antitrust:

Historically, the harms from violations of antitrust law have been measured by whether the consumer paid too much. But when tech companies provide “free” email or social media accounts, consumers are actually paying with their data. Translating traditional antitrust regulation to this context is challenging. The issue carries over to larger antitrust questions at the market level. For example, when Facebook bought Instagram and WhatsApp, it paid prices that exceeded the companies’ market valuations, which should have raised flags about anti-competitive motivations. But such motives weren’t recognized at the time because, from a traditional antitrust point of view, it seemed as if a social media platform was buying a photo-sharing platform and a messaging app – companies with very different functions. Because Facebook puts all the data in one big pile, however, perhaps we should understand the market to be one of access to data, irrespective of where it comes from, Schaake says. “Understanding the value of data to companies becomes really important for evaluating how antitrust rules apply.”

Human Rights:

Policymakers must translate standards of human rights to situations where those rights are at stake in the digital arena. For example, if we ban the purchase of goods produced using child labor because that is considered a human rights abuse, might we then also ban the purchase of AI systems that have been trained using data from people who have not consented to its use? And what position should democracies take regarding the development of authoritarian technology – for example, facial recognition systems in China that are trained to recognize Uighurs in order to deny their freedoms? “Just as we recognize that forced labor or torture tools violate human rights, I would say there has to be a vision regarding whether certain technologies violate human rights and, if so, whether there should be consequences for those violations,” Schaake says. 

Misinformation:

When it comes to online mis- and disinformation, Schaake says we need clear policies about the responsibilities powerful tech companies owe to society. There’s a need to articulate when harmful content becomes a problem. For example, anti-vaxxers have a free speech right to post incorrect information about vaccine safety. But if a large number of people are convinced by anti-vax lies, that can create a significant public health risk. “I would say that it’s not up to an advertising company, which is essentially what a social media company is, to decide when public health is at stake. Instead, that should be a democratically overseen decision, which currently it’s not,” Schaake says.

