With so many cultural differences and competing motivations, how will we get the leaders of the world to agree on legal and ethical guidelines for artificial intelligence? Stanford HAI is hoping to spur those conversations.

Technology as powerful and diffuse as artificial intelligence will one day touch every part of the globe and all of its inhabitants. While its development is still at a relatively early stage, leaders gathered at Stanford University to share common ideals and competing views on why guiding the technology as it scales is as necessary as it is challenging.

The Stanford Institute for Human-Centered Artificial Intelligence’s (HAI) 2019 Fall Conference on Ethics, Policy and Governance revealed what differentiates the institute from many of its peers in higher education: a steadfast focus on human-centric ideals and an open, interdisciplinary approach that values all fields equally, from the sciences to the arts. These tenets cropped up everywhere — from the questions the sessions tackled to the often-opposing panelists who shared the stage at Stanford’s Hoover Institution in late October.

HAI launched in March 2019 with a clear mission: to advance AI research, education, policy, and practice to improve the human condition. It educates and serves a global community by hosting stakeholders across academia, civil society, government, and industry to discuss how this technology can be guided to serve the common good. The leading minds gathered at HAI’s fall conference may have had different takes on what the future should look like and how we should get there, but they all agreed that global guidelines that uphold human rights and dignity are imperative. 

The United States is currently crafting a national AI strategy, as some 30 other nations have already done. In a recent blog post, HAI co-directors Fei-Fei Li and John Etchemendy outlined why they believe the U.S. government should commit $120 billion over the next decade to research, data and computing resources, education, and startup capital in support of a bold, human-centered American AI strategy that can lead the world.

On October 31, the Defense Innovation Board, chaired by former Google CEO Eric Schmidt, approved a proposal guiding the ethical use of AI within the Department of Defense. That guidance is crucial to harnessing the power of AI. Within the next decade, Schmidt said, this nascent technology will help us build powerful new materials, understand the climate in new ways and generate energy far more efficiently — it could even help cure cancer.

“This is all good,” he said. “And I don't want us, in these complicated debates about what we're doing, to forget that the scientists here at Stanford and other places are making progress on problems which were thought to be unsolvable… because [without AI] they couldn't do the math at scale.” 

Schmidt shared the stage with Marietje Schaake, an HAI International Policy Fellow and former Dutch member of the European Parliament who worked to pass the European Union’s General Data Protection Regulation (GDPR). She argued that AI’s promise shouldn’t obscure its potential harms, which the law can help mitigate. Large technology companies have a lot of power, Schaake said. “And with great power should come great responsibility, or at least modesty. Some of the outcomes of pattern recognition or machine learning are reason for such serious concerns that pauses are justified. I don't think that everything that's possible should also be put in the wild or into society as part of this often quoted ‘race for dominance.’ We need to actually answer the question, collectively, ‘How much risk are we willing to take?’”

Several speakers at the conference likened the surge of AI technology to that of nuclear weapons, which 43 states reined in with a global treaty 50 years ago. The so-called fourth industrial revolution, fueled by AI, will forever change the world as we know it — and in ways that we can’t yet see.

Eileen Donahoe, executive director of Stanford’s Global Digital Policy Incubator, suggested that a framework for global governance of AI should begin by building on a foundation most democratic nations have already agreed upon: long-held international human rights guidelines. Rooted in the United Nations’ historic Universal Declaration of Human Rights of 1948, the International Covenant on Civil and Political Rights, and the UN Guiding Principles on Business and Human Rights, these guidelines help define the rights considered inherent in the human person and the private sector’s responsibility to respect them.

“There isn’t really any expectation that anytime soon there’ll be an international agreement on global [AI] governance,” Donahoe said. Though there is some overlap among national governments’ stated views on AI, no two countries share the same priorities. The multi-stakeholder realm appeared to be the most promising middle ground: it is where leaders from the tech community, academia, civil society and governments are already collaborating to craft normative principles for ethical AI.

While companies battle for dominance in AI technology, a geopolitical storm is brewing over the values and norms that will regulate AI and guide digitized, AI-driven societies. Those values, Donahoe said, will dictate whether we use AI to enhance human dignity and reinforce liberty, equality and security — or to undermine dignity, restrict freedom and move us toward digital authoritarianism.

Many speakers pointed to China’s use of facial recognition surveillance as a clear and dangerous infringement of human and civil rights. Even so, Schmidt argued that limiting access to Chinese scientists and technology isn’t the answer; open dialogue will help us all understand what’s possible and the challenges we face.

Bias is one of the biggest of those challenges. Without incorporating a diversity of perspectives at the design phase, AI models can’t help but absorb and amplify the systemic biases of their very human (often white, wealthy and male) developers.

The potential for high-stakes mistakes has led a handful of cities, including San Francisco, Oakland and Berkeley, to regulate AI by banning the use of facial recognition by law enforcement or, more broadly, by government. Europe’s GDPR protects people from being subject to decisions based solely on automated processing when those decisions undermine their rights, Donahoe said. And the UN has spent three years examining how AI can help achieve its Sustainable Development Goals.

Michael Kratsios, Chief Technology Officer of the United States, took another tack. In his view, markets will decide the fate of AI and, with it, the world. His position is that federal, state and local governments should loosen the bonds of regulation and enable American companies to ambitiously develop and scale AI-based products, because the technology that wins in the global marketplace will have the power to spread its ethics throughout the world. “If the U.S. can continue to be a leader in artificial intelligence, we can ensure that the values that we hold so dear are the ones that are going to be underpinning the development globally,” Kratsios said.

Working with close Western allies to exchange research and promote each other’s technology will help, he added. “If we do that, we're going to continue to win out and push back on some of these other folks who don't feel the same way we do.”
