White Paper
Exploring the Impact of AI on Black Americans: Considerations for the Congressional Black Caucus’s Policy Initiatives
Nina Dewi Toft Djanegara, Daniel Zhang, Haifa Badi Uz Zaman, Caroline Meinhardt, Gelyn Watkins, Ezinne Nwankwo, Russell Wald, Rohini Kosoglu, Sanmi Koyejo, Michele Elam
This white paper, produced in collaboration with Black in AI, presents considerations for the Congressional Black Caucus’s policy initiatives by highlighting where AI holds the potential to deepen racial inequalities and where it can benefit Black communities.
Introduction
The Congressional Black Caucus (CBC) has a crucial role to play in the age of artificial intelligence (AI). We understand AI as any computational system that attempts to mimic human intelligence, performing tasks that require learning, reasoning, problem-solving, and decision-making. AI is one of the defining challenges of our time, a technology that holds tremendous promise, but also raises profound questions about our values and our future. While the CBC’s policy agenda remains unchanged in the face of the rapid proliferation of AI systems, we believe that it will be crucial for the CBC to apply a new lens to each of its policy focus areas that considers the opportunities and risks of AI development. We hope this paper will serve as a useful resource to help the CBC ground its policy agenda in the context of recent AI developments and their implications for Black Americans.
Sound AI policy must be anchored in a comprehensive and holistic approach that considers the potential for racial biases at every stage of AI development. This includes determining which social problems can be meaningfully addressed by AI, and which decisions are too sensitive to hand over to an algorithm. With this white paper, we also aim to help the CBC develop a thoughtful, forward-looking AI policy strategy that ensures the benefits of this technology are widely shared and its risks are carefully managed.
The Myth of Tech Neutrality
Technology is never neutral. It reflects and reinforces the values of those who develop it. However, we believe that technology is more than just a container for existing social biases; it is also a tool that can actively contribute to or exacerbate racism. This insight is grounded in the work of scholars like Dorothy Roberts, who documented how scientists have reinforced and redefined common-sense understandings of race throughout history, and Simone Browne, who outlined how surveillance technologies emerged out of the desire to monitor and control Black bodies. Like other technologies that came before it, AI is imbued with social and political values, including biases around race. For example, AI systems have been shown to perpetuate and amplify racial discrimination in employment, housing, and criminal justice. In particular, overreliance on algorithms to make sensitive decisions about loans or hiring can exclude people from financial services or accessing other opportunities—a process known as “algorithmic redlining.” A compounding factor is that among the people who research, develop, and invest in such AI systems, relatively few are Black. These examples demonstrate the need for a critical and intentional approach to the design and application of technology, one that prioritizes equity, justice, and human dignity.
While AI holds the potential to deepen racial inequalities, it can also benefit Black communities. If deployed carefully, AI has the power to improve access to healthcare and education, as well as create new economic opportunities. For example, AI can help doctors make more accurate diagnoses and provide personalized treatment plans, particularly in underserved communities where access to healthcare is limited. AI can also assist educators in tailoring lessons to individual student needs, increasing the chances of academic success for all students, including those from low-income and minority communities. Additionally, AI has the potential to redress systemic biases in banking and financial services, promoting greater access to economic opportunities for Black Americans. Our vision for human-centered AI is rooted in the belief that AI should be assistive, augmenting, and complementing human capabilities but never replacing human judgment. We write this white paper with the conviction that the CBC has more to contribute to AI policy than simply correcting racial biases. Instead, it can help steer AI to ensure the well-being and prosperity of Black communities.
How Do Computers See Race?
AI tends to see race in restrictive, oversimplified ways that can reinforce racial stereotypes and color lines and/or lead to the mis-categorization of people. AI models conceptualize race in terms of neatly defined and fixed categories, oftentimes relying on the five racial types used by the U.S. Census Bureau. However, racial categories are not clearly delineated or a priori biological types. The Census Bureau’s racial classification practices, for example, have historically been informed by political and ideological needs and interests.
The racial categorization imposed by our data collection methods and adopted by AI models also fails to appreciate the cultural and social components of race and how it intersects with other identities, such as gender, class, and sexuality. Many people’s social identities resist easy categorization. Consider the difficulty that people who are mixed-race or genderqueer face when asked to place themselves in a single box. As Michele Elam argues, racial categorization based on fixed, static, programmable data points misrepresents—and in some cases misdirects attention from—the important social and political dimensions of racial formation, which go far beyond skin color and physiognomy.
Yet it is difficult to overcome this limitation of AI because narrow, unidimensional understandings of race are integral to the technology itself. Computer scientists hoping to produce fairer AI systems tend to concentrate their efforts on the model training stage, when AI can inherit racial biases from historical datasets, operating on the belief that better data can resolve the problem of AI bias. As much research has highlighted, however, racial biases can enter AI at various stages of the technology development life cycle, from problem-setting to deployment. Moreover, the problems at hand extend beyond technical bias or bad data and cannot simply be resolved by diversifying the workforce of computer scientists. To fully grasp the impacts of AI on marginalized communities, it is imperative to recognize how AI models understand and infer race.
Structure of the White Paper
In this paper, we explain recent developments in artificial intelligence that we believe are most relevant for the CBC. First, we discuss the rapid evolution of generative AI models, a breakthrough technology which is finding applications across sectors. Then, we turn to healthcare and education and outline how these sectors are being transformed by AI. Ultimately, this white paper is intended as an educational document, laying out the relevant issues and debates, rather than a set of definitive policy recommendations. It remains the task of policymakers to determine what kinds of regulation will be required to ensure that the significant promises of AI can be realized.
While issues like algorithmically enabled policing and surveillance are important concerns for Black Americans, these topics have been well-documented by other researchers and journalists. Our intent in this white paper is to share information about sectors that complement and potentially expand the CBC’s policy platform and are less commonly invoked when talking about race and AI. In each section, we explain what AI is currently capable of and where it is being used, and then explore the promises and perils of AI in the near future. This guidance will help the CBC take proactive steps to ensure that AI technology is developed and applied in ways that protect civil rights and promote racial justice. Finally, while this paper was inspired by conversations with CBC staff, the insights it puts forward are broadly applicable to other groups that could be marginalized by AI.