HAI Policy Briefs
The Privacy-Bias Trade-Off
Algorithmic fairness and privacy are increasingly drawing both policymakers' and the public's attention amid rapid advances in artificial intelligence (AI). Less recognized, however, is that safeguarding privacy and addressing algorithmic bias can trade off against each other. Data minimization, while beneficial for privacy, has simultaneously made it legally, technically, and bureaucratically difficult to acquire the demographic information necessary to conduct equity assessments. In this brief, we document this tension by examining the U.S. government's recent efforts to introduce government-wide equity assessments of federal programs. We propose a range of policy solutions that would enable agencies to navigate the privacy-bias trade-off.
➜ As companies and regulators step up efforts to protect individuals' information privacy, a common privacy principle (data minimization) can clash with algorithmic fairness.
➜ The U.S. federal government provides a compelling case study: Its adoption of data minimization in the Privacy Act of 1974 has brought many privacy benefits but stymies efforts to gather demographic data to assess disparities in program outcomes across federal agencies.
➜ Coupled with procedures under the Paperwork Reduction Act of 1980, the Privacy Act has meant that agencies rarely and inconsistently collect data on protected attributes.
➜ Twenty-one of 25 agencies reported substantial data challenges in responding to an executive order requiring them to conduct equity assessments of their programs.
➜ Privacy principles should be harmonized to permit secure collection of demographic data to conduct disparity assessments.