Note: The following was adapted from remarks to the Bay Area Robotics Symposium (BARS) on November 9, 2018.

California is widely recognized as the epicenter of robotics and AI, but it’s also home to the largest judicial system in America. In California, judges preside over many of the disputes that define the modern world, from privacy cases to disagreements about the kinds of technologies on which governments or private employers can rely to decide whether someone is drunk or whether they’re acting within the scope of employment. We also serve a population of staggering size and diversity: 40 million people in total, including almost 7 million limited-English-proficient residents who speak about 210 languages. In fact, as I write this, a pilot project is evaluating the use of digital translation devices at courthouse counters. And there’s little doubt that even greater changes are soon to follow –– some of them technological developments that could make it easier for members of the public to obtain access to justice.

What I want to reflect on is why some of those changes –– particularly involving society’s reliance on advanced computation and artificial intelligence to make or support important decisions –– will raise difficult questions for society. As a Justice of the California Supreme Court with a lifelong interest in technologies like robotics and AI, I increasingly see an overlap between the concerns at the heart of many disputes about law and public policy, and the issues raised by reliance on artificial intelligence to inform law enforcement, security, hiring, or health decisions. In both law and the study of applied artificial intelligence, questions arise about how people or society can make reasonable choices in an environment of limited information. Both fields strive to develop systems for decision making that can revise prior assumptions as new information becomes available, and—perhaps most profoundly—both can define or redefine what it means to be rational (or at least reasonable).
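
To make that parallel concrete, here is a minimal sketch, purely illustrative and not drawn from any actual legal or AI system, of how a decision maker might revise a prior assumption as new evidence arrives, using Bayes’ rule:

```python
# Illustrative sketch only: revising a prior belief as new evidence arrives (Bayes' rule).
# The scenario and numbers are hypothetical, chosen solely to show the mechanics.

def update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return the posterior probability of a hypothesis after observing one piece of evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Suppose a screening tool starts with a 10% prior that a condition is present,
# and the observed signal is four times more likely when the condition is present.
posterior = update(prior=0.10, p_evidence_if_true=0.80, p_evidence_if_false=0.20)
print(f"Belief after one observation: {posterior:.2f}")  # roughly 0.31
```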

But differences also exist. Policy programs and statutes don’t make persuasive appeals on their own. Despite the enormous technical challenges we currently face in natural language processing, machines will increasingly be able to leverage sophisticated data, enormous computing power, and clever algorithms to optimize their persuasive potential. Even leaving aside the possibility of a breakthrough that makes artificial general intelligence feasible, our future will likely expose us to machines that appear increasingly intelligent to the public. When combined with a variety of existing sources of instability –– from geopolitical tensions to systemic financial risk to the greater decentralization of biomedical innovation –– the ubiquity of machines that act as though they are intelligent in some meaningful sense may exacerbate political, regulatory, and institutional disruptions on a global scale. And because of discontinuities not only in technological development but also in societal norms about how we use technology, our time to explore these issues is potentially quite limited. What follows are four key ideas worth bearing in mind as we consider how to navigate the risks and possibilities. Together, these ideas help explain why a major difficulty in building an “AI” infrastructure that incorporates society’s legal and ethical commitments is not only agreeing on the content of those commitments, but also deciding how to reconcile those commitments when they are in tension with each other.

1. The assumptions we make about organizations

First, in organizations from hospitals to government agencies, most of us already make decisions in collaboration with algorithms and data, and we should expect this relationship to expand significantly in the coming decades. But these tools reflect the assumptions we make about the organizations they support, and those assumptions are worth scrutinizing. We assume, for example, that such organizations learn from their mistakes, or that they provide minimally adequate cybersecurity. As artificial intelligence develops and becomes more ubiquitous, the scope and consequence of these assumptions will grow. Uses of artificial intelligence in domains ranging from health care to industrial safety may be most defensible if one assumes that an organization will learn from its mistakes, or will adequately incentivize its employees to avoid over-delegating decisions to machines that may lack the ability to understand the relevance of context. How will we define success in terms that can be codified and implemented by machines? And what kind of organizational assumptions –– about competence, adaptation, leadership, or efficiency –– will we build into any technically oriented definition of success we apply to a particular AI system or robotics technology?

2. Law and regulation in the age of AI

Next, it’s important to remember that we already regulate robots and AI, however imperfectly. Whether it’s tort law, contract law, intellectual property law or consumer protection, extensive legal frameworks exist to adjudicate questions of responsibility and ownership, even in cases involving autonomous technology. In public sector decision making, we rely on constitutional law and administrative procedure to constrain an agency’s discretion. These laws still apply even if a person or organization makes a procurement, personnel, financial, or security-related decision with the assistance of an AI system. Nevertheless, we’re bound to face increasingly complex dilemmas in applying these laws and, in some cases, reforming them.

Tort law, for example, helps us determine responsibility in the event of an injury, and we will soon face injury cases in which AI and robots are involved as well. As the use of AI becomes more common in fields like medical care, it may even become part of what it takes to meet a standard of “reasonable care.” Yet lawyers and judges applying tort law may be forced to recognize that organizations sometimes have good reasons for keeping humans in the loop –– to continue accumulating knowledge about human performance in addressing emerging medical problems that can be used to benchmark machines’ performance, for example, or to avoid an erosion of organizational knowledge that can leave an organization brittle and vulnerable in the event of a cyberattack or a technological glitch.

Of course, the fact that law often already regulates artificial intelligence by regulating the conduct of the humans who use the technology doesn’t mean that existing legal arrangements are optimal. Nonetheless, any discussion about how law should respond to the challenges posed by artificial intelligence should start from the premise that it would be difficult to justify treating people as essentially immune from whatever culpability they would otherwise have simply because they use a robot or rely on an AI system. Similarly, the claim that the technology “moves too fast to regulate” is, at best, under-theorized –– particularly given the extent to which law already routinely regulates, however imperfectly, reliance on artificial intelligence. And the assertion that certain uses or design features of artificial intelligence should remain categorically beyond the law’s reach must be defended as a specific legal change or arrangement; otherwise it becomes merely an appeal to cast aside vast tracts of existing law.

3. Our collective quest to build ethical systems

Third, those of us in the worlds of law and policy have something in common with those of you in robotics and AI: we both want to build systems that behave ethically. Yet it’s easy –– even for lawyers –– to underestimate the difficulty of establishing a consensus on ethical behavior; in practice, vast gulfs can exist between what we do, what we claim to want, and how we behave under pressure. Our values are often difficult to harmonize when society develops public policy or weighs the costs and benefits of particular choices. Even for individuals, values are often in flux, and easily modulated by emotions and circumstance.

Rather than trying to build a consensus that settles all or even most value questions definitively, I suspect the most promising way forward is to build systems that can take into account an array of competing motivations and goals. Such systems could be designed to approach value conflicts less as a domain calling for a definitive set of principles to settle conflicts categorically, and more as a domain requiring dialogue and deliberation. Imagine, by analogy, the kind of vigorous debate often heard in a well-argued court case, or when a group of well-informed doctors disagrees about a patient’s treatment. Such systems would be designed to track these debates as they unfold, applying a range of principles to assess the prospects for compromise among competing ethical and policy priorities.
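
To illustrate, and only to illustrate, here is a hypothetical sketch (not a description of any existing or proposed system) of software that scores options against several competing priorities while keeping the per-priority trade-offs visible for deliberation, rather than collapsing them into a single verdict:

```python
# Hypothetical sketch: weighing options against several competing priorities
# instead of optimizing one objective. Names, weights, and scores are illustrative only.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    scores: dict[str, float]  # priority -> score in [0, 1]

# Assumed priorities and weights; in practice these would themselves be contested.
PRIORITIES = {"safety": 0.4, "fairness": 0.3, "efficiency": 0.3}

def weighted_score(option: Option) -> float:
    """Aggregate scores across priorities; the aggregate informs, but does not replace, deliberation."""
    return sum(weight * option.scores.get(p, 0.0) for p, weight in PRIORITIES.items())

def report(options: list[Option]) -> None:
    """Print per-priority scores so the tensions between priorities remain visible."""
    for opt in sorted(options, key=weighted_score, reverse=True):
        detail = ", ".join(f"{p}={opt.scores.get(p, 0.0):.2f}" for p in PRIORITIES)
        print(f"{opt.name}: total={weighted_score(opt):.2f} ({detail})")

report([
    Option("defer to clinician", {"safety": 0.9, "fairness": 0.7, "efficiency": 0.4}),
    Option("automate triage", {"safety": 0.6, "fairness": 0.6, "efficiency": 0.9}),
])
```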

4. The status of increasingly intelligent machines

Finally, as I’ve discussed elsewhere, I believe we should expect contentious conversation about the relationships people will want to have with robots and AI systems. Considering our tendency to anthropomorphize machines of all kinds, and the depth of debates over issues like corporate personhood, we may soon find ourselves at a crossroads about whether the sophisticated systems around us should be treated as things, animals, persons, or something else.

Conclusion

The technologies under development here in California and elsewhere have the potential to bring about enormous benefits, but we can’t treat the questions raised by these developments like mere technical challenges or personal choices, nor can we assume the right answers will somehow emerge on their own. This will be a long-term, collective effort, and it will demand nothing less than a new, common language that engineers, policymakers, and lawyers can share. Whether we’re in the courts, in legislatures, or in labs, we all have a role to play in building a world of machines—and laws—that live up to our values.
