[Image: A young Black married couple analyzing information on a computer screen]

Algorithms do not treat everyone equally. Black Americans are more likely to be denied loans, jobs, and other opportunities when some AI systems make the decisions.

Earlier this year, a class action lawsuit was filed against Navy Federal Credit Union, the nation’s largest credit union, alleging that it “systematically and intentionally discriminates against minority borrowers across the United States.” The lawsuit claims that 52% of Black borrowers were denied loans, compared with just 23% of white borrowers. This phenomenon, often referred to as “Banking While Black,” is one that civil rights activists have been calling attention to for years, and it is a concern that the algorithmic decision-making systems many of these financial institutions have adopted to screen loan applicants have not adequately addressed. In fact, some studies point to loan rejection rates as high as 80% for Black Americans when these AI systems are used.

With AI regulation top of mind for our elected representatives, Stanford Institute for Human-Centered AI affiliates recently published a timely and important white paper titled “Exploring the Impact of AI on Black Americans.”

Here, authors Sanmi Koyejo, assistant professor of computer science at Stanford and president of the Black in AI organization, and Rohini Kosoglu, HAI policy fellow, discuss the white paper, which was presented to the Congressional Black Caucus (CBC), and how AI has the potential both to benefit Black communities and to deepen racial inequality.

What are your goals with this white paper, and how are you hoping it will inform and impact the CBC?

Kosoglu: We have policymakers who are trying to coordinate a government approach to AI, so we need to make sure they include language around civil rights and an acknowledgment of people’s rights and opportunities. It’s important that this language is not only provided to each agency but also infused, in both intent and spirit, throughout its work. What this paper strives to do is show how that would work in practice for Black Americans. The CBC lawmakers have a long history of leadership, and this paper is important because it shows that we, Stanford and Black in AI, are here to help inform, educate, and raise issues that everyone needs to think about. This is a helpful tool for lawmakers, government agencies, and even state governments. The hope is that we can educate people across the diaspora and the ecosystem with this.

What has the discussion around AI looked like in Congress up to this point? 

Koyejo: In my view, most of the existing discussion has focused on national security. What we’re asserting is that we need a more balanced discussion: AI affects people now, not just as a security risk in the future.

Kosoglu: When we think of safety, we tend to think about it narrowly in terms of the safety of the models, but what we want is to think about safety in terms of communities and how these tools are distributed. Safety isn’t solely an issue of “Is this model safe or not?” It should have a broader definition that includes the people who use these tools and the different risk levels for those people. We also can’t have this conversation about innovation without equally talking about access.

What issues does this paper raise around how these models infer race?

Koyejo: There is this idea that tech feels neutral because it has mathematical and logical backing, but we want to remind people that math often has a political perspective when applied in the real world. Neutrality is often hypothetical. One way this happens is through data and algorithms, which distill the world into something simpler: because of the way data is collected, people get reduced to numbers, which can depart significantly from their lived experiences. Our argument is that once you do this data compression and use an AI model, you often miss this nuance, which leads to disparate impacts and differences in opportunity in certain AI verticals, such as generative AI, healthcare, and education, and increasingly to significant impacts on the environment.

How can Black Americans be impacted in these verticals? Could you give us some examples?

Koyejo: There’s always a balance of risk and opportunity, and we wanted to highlight both. In education, for example, AI-enabled devices have an opportunity to bridge achievement gaps with tailored lesson plans and assignments, but these same devices could employ exploitative data collection practices on underserved students. Generative AI could provide a platform for the creative expression of Black creators, allowing them to create content without the need for expensive software, but there is risk in the way the models scrape up art and writing, which exploits those creators. Current copyright laws are unclear on the ownership of creative expression.

Kosoglu: In healthcare, we know that models may be trained on data that is limited in terms of community-specific experiences and geographic coverage. As generative AI models advance, we need to openly discuss the limited data on the lived experiences of Black Americans.

What about the environmental impact of AI and the potential for an outsized impact on Black communities? Could you share those insights?

Koyejo: The resources we need to build AI tools have exploded. We’re at a point now where there are serious discussions about creating an entirely new power infrastructure for next-generation AI. For example, will we need a new power plant to train and deploy GPT-6? I say this to emphasize that the resource needs are outsized, and [in the white paper] we tie this to existing scholarship on power and energy: where these facilities are built, the resources required to build them, and the raw materials they need. The repeated experience is that resources are extracted largely from marginalized communities, and those communities are the ones routinely exploited. We call this out as a threat to these communities and something that policymakers need to pay attention to.

Kosoglu: Policymakers have been focused on environmental justice in so many ways, but what Sanmi brings up is often an afterthought. There is not much conversation about the actual impact of these resource needs on communities, and that needs to change.

What are you hoping the CBC, policymakers, and advocates will take away from this white paper?

Kosoglu: The educational piece is crucial. How do we best explain and educate people who can help address these issues? Tech leaders are focused on innovation, and they pay lip service to the risks, but that’s about it. There’s no deep discussion on the harms. As someone who comes from the public sector, I care very deeply about these harms and risks. We’re not trying to purposefully slow down innovation, but it’s not fair to say we should just accept that there will be harms and end the discussion. That’s not acceptable.

Koyejo: There’s also this sense that AI feels foreign and magical and beyond reach, and people don’t feel empowered to self-advocate because the AI is this special thing that comes down from somewhere up high. We need to bring awareness that real people are building these tools, that their decisions are meaningful and impactful, and their decisions end up being the ethos by which these tools are built. People don’t feel empowered to talk about it because it’s beyond their expertise, but our goal is to educate and empower because self-advocacy can make a big difference.
