Beyond Benchmarks | Building a Science of AI Measurement
This workshop will highlight the significant impact of AI applications on Department of Energy (DOE) science by showcasing SLAC's research program, which includes national-scale science facilities such as particle accelerators, x-ray lasers, and the Rubin Observatory.
In this workshop, we will explore a suite of interactive tools, including the ChucK music programming language, ChAI, the Pandora audiovisual live coding environment, and Wekinator.
Using the same machine learning model for high-stakes decisions in many settings amplifies the strengths, weaknesses, biases, and idiosyncrasies of the original model. When the same person repeatedly encounters the same model, or models trained on the same dataset, she might be wrongly rejected again and again. Algorithmic monoculture could thus lead to consistent ill-treatment of individuals by homogenizing the decision outcomes they experience. This talk will formalize a measure of outcome homogenization, describe experiments on US census data demonstrating that sharing training data consistently homogenizes outcomes, and then present an ethical argument for why, and in what circumstances, outcome homogenization is wrong.
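The talk's precise formalization is not reproduced here; as a rough illustrative sketch, one way to quantify outcome homogenization is to compare how often the same person is rejected by every deployed model against the rate expected if the models erred independently. The metric, data, and thresholds below are assumptions made for illustration, not the speaker's definitions.

```python
import numpy as np

def outcome_homogenization(decisions: np.ndarray) -> float:
    """Illustrative homogenization score for a set of model decisions.

    decisions: boolean array of shape (n_models, n_people), where True
    means the person was rejected by that model.

    Returns the ratio of the observed systemic-rejection rate (rejected
    by every model) to the rate expected if models rejected people
    independently. A ratio well above 1 suggests homogenized outcomes.
    """
    # Observed fraction of people rejected by *all* models.
    systemic_rate = decisions.all(axis=0).mean()
    # Expected fraction under independence: product of per-model rates.
    independent_rate = np.prod(decisions.mean(axis=1))
    return systemic_rate / max(independent_rate, 1e-12)

# Toy usage: two models that share a training set tend to reject the
# same people, so the ratio comes out well above 1.
rng = np.random.default_rng(0)
shared_bias = rng.random(1000) < 0.2          # people both models reject
model_a = shared_bias | (rng.random(1000) < 0.05)
model_b = shared_bias | (rng.random(1000) < 0.05)
print(outcome_homogenization(np.stack([model_a, model_b])))
```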
HAI Network Affiliate; Assistant Professor of Philosophy and Computer Science, Northeastern University