HAI Weekly Seminar with Kathleen Creel
Picking on the Same Person: Does Algorithmic Monoculture Homogenize Outcomes?
Using the same machine learning model for high-stakes decisions in many settings amplifies the strengths, weaknesses, biases, and idiosyncrasies of the original model. When the same person repeatedly encounters the same model, or models trained on the same dataset, she may be wrongly rejected again and again. Algorithmic monoculture could thus lead to consistent ill-treatment of individual people by homogenizing the decision outcomes they experience. This talk will formalize a measure of outcome homogenization, describe experiments on US census data demonstrating that sharing training data consistently homogenizes outcomes, and then present an ethical argument for why, and in what circumstances, outcome homogenization is wrong.
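As a rough intuition for what a measure of outcome homogenization might look like (a hypothetical sketch, not the speaker's formalization), one can compare the observed rate at which every decision-maker rejects the same applicant against the rate expected if rejections were statistically independent:

```python
import numpy as np

rng = np.random.default_rng(0)
n_applicants, n_models = 10_000, 3

# Scenario A: three independent models, each rejecting ~30% of applicants
# at random, with no shared training data.
independent = rng.random((n_applicants, n_models)) < 0.3

# Scenario B: three correlated models (as if trained on shared data) that
# mostly agree on whom to reject, plus a little independent noise.
shared_signal = rng.random(n_applicants) < 0.3
noise = rng.random((n_applicants, n_models)) < 0.05
correlated = shared_signal[:, None] ^ noise

def homogenization_ratio(rejections):
    """Observed systemic-rejection rate divided by the rate expected
    if each model rejected independently at its marginal rate."""
    observed = rejections.all(axis=1).mean()      # everyone says "no"
    expected = np.prod(rejections.mean(axis=0))   # product of marginals
    return observed / expected

print(homogenization_ratio(independent))  # near 1: no homogenization
print(homogenization_ratio(correlated))   # well above 1: homogenized outcomes
```

Under this toy measure, a ratio near 1 means being rejected by one model tells you little about the others, while a ratio well above 1 means the same individuals are systematically rejected everywhere, the pattern the talk's abstract describes.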