Where Generative AI Meets Human Rights

Experts in technology, law, and human rights debate the unique implications of generative AI and how we might best direct its potential to benefit humanity.

In November 2022, OpenAI released ChatGPT. Less than 18 months later, the subject of generative AI dominates almost every sphere of life, public and private. Policymakers talk about it; economists talk about it; social scientists, parents, teachers, and investors talk about it.

Volker Türk—the United Nations High Commissioner for Human Rights—talks about it, too. “The unparalleled human rights impacts of advanced AI, including generative AI, are already being felt by vast numbers of people,” he said in a recent discussion on the subject. To ensure the benefits of AI flow to everyone, “people must be at the center of the technology.”

Türk offered these remarks as the keynote speaker at Human Rights Dimensions of Generative AI, a February 14 event hosted by the Center for Human Rights and International Justice and co-sponsored by the Stanford Institute for Human-Centered AI (HAI) and others. Following Türk’s comments, a panel of experts from the private sector, the public sector, and academia discussed the implications of generative AI for human rights, democratic function, and social cohesion. The group included:

  • Eileen Donahoe, Special Envoy & Coordinator for Digital Freedom in the U.S. Department of State’s Bureau of Cyberspace and Digital Policy;
  • Alex Walden, Global Head of Human Rights at Google;
  • Peggy Hicks, Director of the Thematic Engagement, Special Procedures and Right to Development Division of the UN Human Rights Office; 
  • Nate Persily, Co-Director of the Stanford Cyber Policy Center and James B. McClatchy Professor of Law at Stanford Law School; and
  • Raffi Krikorian, Chief Technology Officer at the Emerson Collective.

Below are a few highlights from the conversation.

Why AI Is Different

Though some of the concerns over AI are conventional—it can promote disinformation or invade privacy—there are several ways in which the challenges it presents are unfamiliar to the policy world.

For one, AI is what Persily calls a keystone technology. “It is becoming fundamental to all other technologies, which makes it different,” he said. “It is everywhere already and will continue to be everywhere. It is regulating the entire future of our economy and social relations.” It is also more opaque than other technologies, and its applications are far more difficult to foresee. As one example, Persily noted that when OpenAI released ChatGPT, the team didn’t imagine it would be used for coding. “More than half of uses are now for coding,” he said.

Google’s Walden pointed to the novel speed and scale at which AI functions, as well as its ability to learn and adapt. Though platforms like Google have long been in conversation with policymakers about how to regulate content, she said, “AI makes this all much more complicated.” 

Finally, many of the foundational algorithms in AI are open source. This openness is central to making the tools accessible beyond corporate IP, but it is also problematic. Within the first year of ChatGPT’s release, for instance, there was an explosion of AI-generated child pornography, precisely because these tools were freely available.

“In some ways, the most democratically friendly aspect of this technology also poses the greatest risk,” Persily said.

A Few Grave Concerns

All the panelists voiced concern about how AI will be—and already is being—used; three of these concerns echoed across the conversation.

First, both Krikorian and Donahoe noted how the rapid evolution of different AI tools makes it virtually impossible for the public or policymakers to keep up. “There is a big discrepancy between the development and the absorption of this technology,” Krikorian said. “In many ways this just means we’re pouring gasoline on every other problem.” Before we’ve managed to tackle the growing issue of online disinformation, for instance, AI is accelerating its dissemination.

Second, Hicks noted that, though the UN has called for a pause in the use of AI in areas where human rights violations are most likely to occur, these are precisely the realms where advances seem to be moving fastest. Legal carve-outs are being created for sectors like national security and law enforcement, where human rights practitioners have long focused their energies, often to little avail.

“We’ve been voicing these concerns since well before generative AI,” Hicks said. “And we’ve yet to make progress on that front.”

Finally, Persily suggested that the growing problem of disinformation could lead people not only to believe falsehoods but, of greater concern, to disbelieve what’s true. “The pervasiveness of artificial content gives credibility to all those who want to divide reality,” he said. “The more we distrust the evidence before our eyes, the greater the power of liars to say what is and isn’t true.”

Thoughts on Regulation

Discussion of how to effectively and fairly regulate generative AI circled around a few central points:

  • The UN, through its B-Tech initiative, has created a framework by which technology companies can fold considerations of human rights into the work that they do. This initiative has recently taken up the specific case of generative AI. “The one cross-cutting set of laws we have in place to address these colossal challenges,” Türk said, “is the international human rights framework.”
  • A regulatory solution must be founded on broad participation. “One of the big concerns we have is that these conversations seem to be taking place with too much focus on the global North and English-language environments,” Hicks said. The potential problems, Donahoe seconded, are inherently transnational and, as such, solutions must be crafted inclusively.
  • As much transparency as possible must be built into these tools. It may never be possible to fully understand the function of a model—why its outputs are what they are—but certain checks could outline the model’s capabilities beforehand.
  • Discussion around policies and regulation must embrace nuance. Hicks suggested that most discussion of AI today is highly polarized: A product is either good or bad; it will destroy the world or save it; the private sector is the problem or government is the problem. “We have to find ways to engage in both conversations at the same time,” she said.

As the discussion came to a close, Donahoe asked each of the four panelists whether they were, in general, more optimistic or pessimistic about a future with generative AI. All four were hopeful—but, one might say, reservedly so.

“I’m optimistic about the technology, but I would have to say I’m pessimistic about society right now” is how Persily put it, questioning our ability to reach consensus on governing the many threats on the horizon. “If only AI had had this moment 30 years ago, or if we could deal with it after first dealing with our social divisions.”

Read more about the event from the Center for Human Rights and International Justice, or watch the recording.

