Algorithmic fairness and privacy issues are increasingly drawing both policymakers’ and the public’s attention amid rapid advances in artificial intelligence (AI). But there is a less recognized trade-off between safeguarding privacy and addressing algorithmic bias. Data minimization, while beneficial for privacy, has simultaneously made it legally, technically, and bureaucratically difficult to acquire the demographic information necessary to conduct equity assessments. In this brief, we document this tension by examining the U.S. government’s recent efforts to introduce government-wide equity assessments of federal programs. We propose a range of policy solutions that would enable agencies to navigate the privacy-bias trade-off.