The AI & Human Rights Symposium was a full-day event with thought-provoking and challenging discussions of artificial intelligence and its relationship to human rights. The weekend gathering, sponsored by the Stanford Artificial Intelligence and Law Society (SAILS) and the Stanford Institute for Human-Centered Artificial Intelligence (HAI), brought together 22 speakers. Participants included activists from prominent civil society organizations such as the ACLU, Human Rights Watch, the Electronic Frontier Foundation and the Partnership on AI, as well as active-duty military officers, representatives of Google and Facebook, lawyers, researchers and academics.

The panelists and an audience of about 150 sought insight into the profound social and legal issues raised by the rapid progress of artificial intelligence and machine learning. The major themes of the day included questions such as: Who will control “killer robots”? Does AI complement or detract from free speech and privacy? How can we prevent algorithms from making decisions that lead to discrimination?

The event sought to shed light on the human rights impacts of AI, and advocated for taking a human rights–based approach to governing AI. Keynote speaker Alexa Koenig, Executive Director of the Human Rights Center at the University of California, Berkeley, and winner of the 2015 MacArthur Award for Creative and Effective Institutions, highlighted the benefits of looking at how AI impacts people through the lens of international human rights when developing, using and deploying this technology.

Algorithms and diversity

A good deal has been written about algorithmic bias and the use of flawed or limited data to build decision-making algorithms, a concern echoed across multiple discussions at the symposium.

While at least part of the solution lies in more robust and inclusive data sets, the problem extends beyond the technical realm, said Mehran Sahami, a professor of computer science at Stanford. “AI casts a light on us. Perhaps we should pause and consider our values,” he said.

When healthcare-related algorithms are built with data that is not inclusive of minorities, “my fear is that we will create a system of healthcare apartheid,” said Sonoo Thadaney-Israni, co-founder and executive director of Presence, a center at Stanford Medicine.

“If we choose to measure that which is easy to measure — or easier to measure — and not think through the unintended consequences and struggle with measuring messier things, we can come up with very clean, beautiful algorithmic solutions, but we run the risk of exacerbating the equity issues,” Thadaney-Israni and a colleague wrote in an essay published last year in the JAMA Network.  

One solution to the problem of biased algorithms may lie in the use of multi-disciplinary development teams, said Jamila Smith-Loud, a user researcher on Google’s Trust and Safety team. A health insurance algorithm, for example, might not take into account the particular needs of African Americans, and thus deprive them of the standard of care enjoyed by the white community, Smith-Loud said. But a public health specialist with knowledge of minority communities could add information to make the algorithm more inclusive and effective, she argued.

Similarly, Koenig argued that engineering teams should include psychologists and historians. “Diversity will be a strength,” she said. And Dunstan Allison-Hope, managing director of Business for Social Responsibility, said: “Innovative methods of human rights due diligence are needed (to) uncover blind spots, imagine unintended consequences and anticipate a highly uncertain future.”

A major concern raised by panelists Prof. Mehran Sahami of Stanford, Nicole Ozer, Technology and Civil Liberties Director at the ACLU of California, and Peter Eckersley, Director of Research at the Partnership on AI, was the risk of discrimination when algorithm-driven risk assessment tools are used for pretrial detention. Speakers stressed that because of historic, systemic and institutional racism, the data used to train these algorithms is inherently biased. Risk assessment instruments built on that data can thus disproportionately affect minorities. Eckersley gave an overview of recent efforts led by activists and researchers to prevent or mitigate the risk of discrimination, including the Pretrial Risk Assessment Tools Factsheet Project at Stanford University.
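The mechanism the panelists described can be illustrated with a toy audit of the kind researchers run on such tools. The sketch below is purely hypothetical: the groups, thresholds and numbers are invented for illustration and do not reflect any real risk assessment instrument. It simulates a population in which two groups have identical underlying risk, but one group’s recorded outcomes are inflated by heavier historical policing; a model that simply reproduces those records flags that group more often and with a much higher false positive rate.

```python
# Hypothetical illustration only: invented data, not any real risk assessment tool.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Two demographic groups with identical underlying risk.
group = rng.choice(["A", "B"], size=n)
underlying_risk = rng.uniform(0.0, 1.0, size=n)

# Biased historical records: group B is over-represented in past arrest data,
# so its recorded outcomes are skewed upward relative to underlying risk.
recorded_rearrest = (underlying_risk + np.where(group == "B", 0.15, 0.0)) > 0.6

# A model trained to reproduce those records inherits the skew.
flagged_high_risk = recorded_rearrest

for g in ("A", "B"):
    in_group = group == g
    flag_rate = flagged_high_risk[in_group].mean()
    low_risk = underlying_risk[in_group] < 0.5   # genuinely low-risk people
    false_positive_rate = flagged_high_risk[in_group][low_risk].mean()
    print(f"Group {g}: flagged high-risk {flag_rate:.1%}, "
          f"false positive rate among low-risk {false_positive_rate:.1%}")
```

Under these made-up assumptions, both groups have the same underlying risk, yet group B is flagged roughly 55% of the time versus about 40% for group A, and its low-risk members bear a false positive rate that group A’s do not. That is the disparate impact the panel warned about.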

Hate speech, free speech, and AI

Four years ago, a blogger posted a video containing nothing but ten hours of white noise on his YouTube channel. That would seem harmless, if inane, but YouTube’s Content ID system flagged the static as a copyright violation. Five claims for compensation were eventually filed against the poster, recalled Jeremy Gillula, director of technology projects for the Electronic Frontier Foundation.

The incident showed that algorithms used to censor speech and identify dangerous, harmful or illegal content lack subtlety and may be ill-equipped for the job, he said. “People want to do something, anything at all,” to stop the flood of pernicious content on social media. But so far, many efforts to cope with the issue have been ill-considered, and in some cases destructive, Gillula said. “Instead of censoring content, we should give users much more control over what content they see,” he said.

Andy O’Connell, Facebook’s head of content distribution and algorithm policy, said the social media giant has already given users more control, is scrubbing enormous amounts of destructive content from its platform and is working to make its policies and decisions more transparent. As Katie Joseff of the Digital Intelligence Lab at the Institute for the Future said, the question is not whether to ban “bots” from platforms, but how “collective intelligence” can lead to “good decision-making and high-quality information sharing”.

Facebook is now issuing periodic transparency reports: one that discloses requests (and orders) from governments around the world to remove content, and a separate “Community Standards Enforcement Report” that shows how much “violating content” has been detected on the service.

Two million pieces of hate speech were removed in one recent quarter alone, O’Connell said. Facebook has added features to its interface that give users more control over what they see and explain why posts appear in their newsfeeds. Facebook expects to establish an “External Oversight Board,” in addition to its thousands of human content reviewers. The board would function as a court of appeals to review decisions about the appropriateness of content.

Content moderation policies on large platforms, the digital equivalent of a “town square,” raise significant issues around free speech under the First Amendment in the US and international human rights norms relating to free expression. As Angèle Christin from the Stanford Department of Communication noted, the question of where to draw the line for private parties moderating public speech is an incredibly tough problem for platforms to solve. As Katie Joseff said, even in the context of terrorist and extremist content, censorship can be problematic: the safety of the community needs to be considered, as does avoiding “an immediate response out of fear,” which may further marginalize communities and prevent them from returning to the societal fold.

Privacy and the power of AI

AI, said Gillula, has the power to do what an analyst used to do, only faster and better. But when AI is teamed with other powerful technologies, there is cause for concern.

Body cameras worn by police have been shown to decrease shootings by police officers. When combined with facial recognition and artificial intelligence, however, they pose a threat to privacy, said Ozer.

The ACLU, she said, has worried for years that technology would eventually enable widespread surveillance by government agencies, including law enforcement. “What we feared is now possible,” she said, citing the increased use of surveillance technology since the 9/11 terrorist attacks.

Roughly half of the states have added their vast stores of driver’s license photos to an FBI database. And while those pictures are not accessible to the public or for sale, Facebook has stored billions of user photos that are easily accessible to anyone, unless a user has marked them as private.

The ACLU has called for a moratorium on the use of facial recognition by law enforcement, Ozer said. In mid-May, San Francisco banned the use of facial recognition software by the police. The Massachusetts legislature is currently considering legislation that would order such a moratorium and would make evidence obtained via facial recognition inadmissible in court. And employees at Amazon, along with a group of investors, have petitioned to stop the sale of the technology to the government.

AI goes to war

The panel on the use of AI in warfare gathered two experts working for the U.S. military, an activist, a human rights lawyer and a former Google engineer who left the company in protest over its involvement in work for the U.S. government.

AI has the potential to revolutionize warfare in alarming ways if used in fully autonomous weapon systems that could select and engage a target without meaningful human control, according to Bonnie Docherty, a researcher at Human Rights Watch and lecturer on law at Harvard Law School.

Are killer robots about to become reality in the coming years? Not exactly, but the Department of Defense is currently exploring what it means to operationalize AI for defense purposes, running hundreds of small projects designed to explore the capabilities of artificial intelligence and its potential use in combat, said Lt. Col. Joseph Larson, deputy chief of the Pentagon’s algorithmic warfare team. The key lesson so far is that AI and machine learning technologies, with their lack of explainability and predictability, require a different implementation paradigm than perhaps any other technology the DoD has faced. The behavior of autonomous combat systems isn’t yet predictable enough for commanders to rely on. Although much depends on how and in what situations the technology is used, further testing and evaluation for explainability is required before such systems can be operationalized in combat, and for that, industry engagement is critical.

The law of war requires that combatants distinguish between civilians and military combatants, and civilians can never be targeted, said Capt. Robert Lawless, a professor of law at West Point. If a weapon system cannot make that distinction, it violates those principles.

The military is firm in its position that human judgment must be involved when autonomous weapons are used to select and engage targets (there is a DoD directive to that effect), but AI creates challenges that are not yet dealt with in the DoD directives, Larson said. However, even though there is agreement on the need for a human in the loop, how much human control is adequate or meaningful is still debated, according to Docherty.

Larson is concerned that the best players in artificial intelligence will refuse to work on military-related projects. His message was that the DoD is interested in partnering with professionals or companies willing to work within their own preferred ethical limits or frameworks, and that, in any case, these systems will be developed in accordance with the current legal framework.

AI as a person?

While science fiction has portrayed truly autonomous robots and androids for many years – think of Star Trek’s Data – the goal of artificial general intelligence won’t be reached for decades, if ever, said Jessica Fjeld, assistant director of the Cyberlaw Clinic at the Berkman Klein Center for Internet & Society.

Humans can and do collaborate with AI systems to make art, “but there are no AI artists,” she said. Dr. Jerry Kaplan of the Freeman Spogli Institute for International Studies agreed, saying that personification of a machine is, for the foreseeable future, “foolish” because even sophisticated AI systems do not have the required characteristics of personhood.

But, as Asst. Professor Mark Crimmins from the Stanford Psychology Department noted, asking whether a machine could ever have the characteristics of a “person” may have practical, real-world implications for how we treat computerized systems, and could also improve how we understand concepts like personhood in other contexts. Jessica Fjeld agreed, stating that asking about machine personhood may be most useful in a “navel-gazing” sense, allowing us to ask what it is that truly distinguishes us from a machine.

As Yoon Chae, a Senior Associate at Baker and McKenzie noted, we do not have to confer philosophical personhood on an AI system to start thinking about whether there are certain legal rights and legal obligations that may apply in certain cases. Jessica Fjeld observed that we confer legal personhood on corporations and nation states, and it may be that we would want to do that for an AI system – though she cautioned that “offloading” of liability onto machines may benefit only the corporations that make those machines, and not society in general.

Despite the promises and benefits of AI expressed at the symposium, there was a general sense of caution about the future of AI and its impacts on human rights.
