HAI Weekly Seminar with Kathleen Creel
The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems
Automated decision-making systems implemented in public life are typically highly standardized. A single algorithmic decision-making system can replace or influence thousands of human decision-makers. Each of the humans so replaced had their own decision-making criteria: some good, some bad, and some merely arbitrary. Decision-making based on arbitrary criteria is legal in some contexts, such as employment, and illegal in others, such as criminal sentencing. Where no other right guarantees non-arbitrary decision-making, is arbitrariness itself of moral concern?
An isolated arbitrary decision need not morally wrong the individual whom it misclassifies. However, if the same algorithms produced by the same companies are uniformly applied across wide swathes of a public sphere, be that hiring or lending, the same people could be consistently excluded from employment, loans, or other sectors of civil society. This harm persists even when the automated decision-making systems are “fair” on standard metrics of fairness. We argue that arbitrariness at scale is morally problematic and should be legally problematic as well. The heart of this moral issue is domination and a lack of sufficient opportunity for autonomy, and it relates in interesting ways to the moral wrong of discrimination. We propose technically informed solutions that can lessen the impact of algorithms at scale and so mitigate or avoid the moral harm we identify.
HAI-EIS Embedded EthiCS Fellow