Tanner Lecture: AI and Human Values with Seth Lazar
Gates Computer Science Building, 353 Jane Stanford Way, Stanford, CA 94305
The Tanner Lectures were established by the late American scholar, industrialist and philanthropist Obert Clark Tanner. The purpose of the lectures is to advance and reflect upon scholarly and scientific learning relating to human values. This intention embraces the entire range of values pertinent to the human condition, interest, behavior and aspiration. Stanford is proud to be one of the nine distinguished universities to host the Tanner Lectures. The Tanner lectureships, which comprise annual lectures and seminars, are held at Cambridge, Harvard, Michigan, Oxford, Princeton, Yale, Stanford, the University of California and the University of Utah.
The 2023 Tanner Lecture will be given by Seth Lazar, Professor of Philosophy at the Australian National University, an Australian Research Council (ARC) Future Fellow, and a Distinguished Research Fellow of the University of Oxford Institute for Ethics in AI.
This event is co-hosted by The McCoy Family Center for Ethics in Society.
A century ago, John Dewey observed that '[s]team and electricity have done more to alter the conditions under which [people] associate together than all the agencies which affected human relationships before our time'. In the last few decades, computing technologies have had a similar effect. Political philosophy's central task is to help us decide how to live together by analysing our social relations, diagnosing their failings, and articulating ideals to guide their revision. But these profound social changes have left scarcely a dent in the model of social relations that analytical political philosophers assume. These lectures make a start at fixing that mistake. Lecture 1 introduces the theoretical resources necessary for this project; Lecture 2 applies those resources to the case of communication in the digital public sphere.
Discussant: Marion Fourcade, Professor, Department of Sociology, University of California, Berkeley
Lecture 1 argues that we are increasingly connected to one another by algorithmic intermediaries—sociotechnical systems such as centralised privately- and publicly-controlled digital platforms and competing decentralised architectures. I call this network of algorithmically-mediated social relations the 'Algorithmic City'. I analyse the intermediary power that governs the Algorithmic City, and contrast it with the extrinsic power exemplified by the state in the physical city. Extrinsic power governs social relations the way a river's banks govern the water; intermediary power operates more like the bonds holding the water molecules together. By constituting the relationships that they mediate, algorithmic intermediaries enable some to exercise power over others, to shape power relations between mediatees, and—over time—to reshape society at large. Sometimes new power relations should simply be eliminated, but algorithmic intermediaries, if governed appropriately, could be crucial to realising egalitarian social relations and collective self-determination in the information age. We must therefore determine whether and how algorithmic intermediary power can be exercised permissibly. I introduce a framework for justifying this power, and show how algorithmic governance raises new challenges for political philosophy concerning the justification of authority, the foundations of procedural legitimacy, and the possibility of justificatory neutrality.
Discussant: Arvind Narayanan, Professor of Computer Science, Princeton
At the centre of the Algorithmic City is the digital public sphere. Its pathologies are by now well-known—from misinformation to affective polarisation; radicalisation to bots; hate speech to astroturfing. Scholars, technologists and regulators have advanced many interventions aimed at curing these ailments. Less time has been spent articulating just what we value in the digital public sphere, and how the nature of our goals conditions the means that can be used to achieve them. Political philosophy has historically given free expression this role, arguing that a healthy public sphere just is what emerges spontaneously from the provision of background conditions such as (adequately resourced) rights to free speech. But this recipe is now self-evidently inadequate: overwrought protections for expression are themselves to blame for the internet's pathologies. In this lecture, I argue that in governing the digital public sphere, algorithmic intermediaries shape not only expression but communication. Rather than only defending extrinsic parameters for permissible speech, they must also exercise intermediary power in choosing what to amplify and what to reduce, what kinds of communication to enable and encourage, and what to disable or frustrate. In redesigning platform architectures and recommender systems to shape communication and allocate attention in the digital public sphere, we should be guided by a theory of communicative justice. Using theoretical and normative resources set out in Lecture 1, this lecture starts to build such a theory.
Faculty, Apple University; Distinguished Senior Fellow, University of California, Berkeley