HAI Weekly Seminar with Kathleen Creel | Stanford HAI

Seminar

HAI Weekly Seminar with Kathleen Creel

Status
Past
Date
Wednesday, February 24, 2021, 10:00 AM - 11:00 AM PST
Topics
Ethics, Equity, Inclusion

The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems

Automated decision-making systems implemented in public life are typically highly standardized. One algorithmic decision-making system can replace or influence thousands of human deciders. Each of the humans so replaced had their own decision-making criteria: some good, some bad, and some merely arbitrary. Decision-making based on arbitrary criteria is legal in some contexts, such as employment, and not in others, such as criminal sentencing. Where no other right provides a guarantee of non-arbitrary decision-making, is arbitrariness of moral concern?

An isolated arbitrary decision need not morally wrong the individual whom it misclassifies. However, if the same algorithms produced by the same companies are uniformly applied across wide swathes of a public sphere, be that hiring or lending, the same people could be consistently excluded from employment, loans, or other sectors of civil society. This harm persists even when the automated decision-making systems are "fair" on standard metrics of fairness. We argue that arbitrariness at scale is morally problematic and should be legally problematic as well. The heart of this moral issue relates to domination and a lack of sufficient opportunity for autonomy. It relates in interesting ways to the moral wrong of discrimination. We propose technically informed solutions that can lessen the impact of algorithms at scale and so mitigate or avoid the moral harm we identify.
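A minimal sketch (not from the talk, and not the speakers' method) can make the abstract's central point concrete: when many decision-makers license the same model, the set of people rejected is identical everywhere, even if a standard group-fairness metric such as demographic parity is satisfied. All names and numbers below are hypothetical.

```python
import random

random.seed(0)

# Hypothetical applicant pool: (id, group, score). Scores are drawn
# identically for both groups, so a score threshold is group-blind.
applicants = [(i, i % 2, random.random()) for i in range(1000)]

def shared_model(score, threshold=0.5):
    # The same vendor decision rule licensed by every employer.
    return score >= threshold

# Two employers deploy the identical model: identical accept/reject sets.
employer_a = {aid: shared_model(s) for aid, g, s in applicants}
employer_b = {aid: shared_model(s) for aid, g, s in applicants}

# Demographic parity holds: acceptance rates are roughly equal by group.
def accept_rate(group):
    members = [aid for aid, g, s in applicants if g == group]
    return sum(employer_a[aid] for aid in members) / len(members)

print(round(accept_rate(0), 2), round(accept_rate(1), 2))  # roughly equal

# Yet the rejected set is the same across employers: the same individuals
# are shut out of the whole sector, despite the fairness metric passing.
rejected_a = {aid for aid, v in employer_a.items() if not v}
rejected_b = {aid for aid, v in employer_b.items() if not v}
print(rejected_a == rejected_b)  # True
```

The point of the sketch is that group-level metrics are silent about this individual-level homogenization: they constrain rates across groups, not which particular people are excluded everywhere at once.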

Speaker
Kathleen Creel
HAI Network Affiliate; Assistant Professor of Philosophy and Computer Science, Northeastern University

Watch Event Recording

Event Contact
Celia Clark
celia.clark@stanford.edu
More from HAI and SDS seminars

Inside the 2026 AI Index Report | Stanford HAI
Seminar | May 20, 2026, 12:00 PM - 1:15 PM

The AI Index, currently in its ninth year, tracks, collates, distills, and visualizes data relating to artificial intelligence.

Related Events

Juan Sebastián Gómez-Cañón | Challenges And Opportunities For Human-Centered Music Emotion Recognition
Seminar | Jun 03, 2026, 12:00 PM - 1:15 PM

Music is intertwined with human emotion, memory, and identity, making it a powerful medium for affective experience and regulation.

Arvind Narayanan | Adapting to the Transformation of Knowledge Work
May 18, 2026, 12:00 PM - 1:00 PM

The possibility that AI will automate most cognitive labor is worth taking seriously. How should we adapt to this transformation? I start from the perspective, articulated in the essay “AI as normal technology”, that the true bottlenecks lie downstream of capabilities and that AI’s impacts will unfold gradually over decades. If this is true, there are major gaps in our current evidence infrastructure, because it over-emphasizes the capability layer.