
The Movement to Decolonize AI: Centering Dignity Over Dependency

Stanford scholar Sabelo Mhlambi describes how AI has a colonizing impact on the world and the ways activists are aiming to change that.

Image: the Earth with points of light throughout

"AI has historically been dominated by groups that reinforce the same biases that are already present in society. The goal is to move away from that toward capturing a more realistic view of the world." | iStock/oyyimzy

After colonized African countries gained independence during the mid- to late-20th century, foreign multinational corporations continued to dominate the fledgling nations’ economies through investment and resource extraction followed by resale of finished goods back to African nations. This arrangement excluded Africans from value creation and left them with economies built on unskilled labor. The new nations’ leaders had a name for this dominance: neo-colonialism.

This dominance continues unabated in the Global South today, says Sabelo Mhlambi, Practitioner Fellow at the Digital Civil Society Lab at the Stanford Center on Philanthropy and Civil Society (Stanford PACS). By engendering perpetual dependence on their products and internet infrastructures, global tech monopolies like Facebook, Amazon, and Google make it almost impossible for people to determine their own futures on their own terms, he says.

The dominance of outside corporations is further exacerbated by the rise of artificial intelligence (AI), which takes power and resources — this time in the form of data — away from marginalized communities and delivers profits to those who are already wealthy, Mhlambi says. Moreover, AI developed almost exclusively by large foreign companies can embed worldviews, ideals, ideological borders, and automated decisions that may not reflect non-western ways of thinking and being in the world, he says. That way of being is imposed through the spread of technology when alternatives are difficult to come by.

In response to the growing power of tech monopolies and AI technology, a movement to decolonize AI has arisen across the Global South. Although the movement may take different forms in different cultures, Mhlambi and other scholars recently drafted the Decolonial AI Manyfesto, a statement intended to capture a broad range of ideas around allowing historically marginalized groups to “decide and build their own dignified socio-technical futures.”

Here, Mhlambi discusses what it would mean to decolonize AI.

How does the decolonial view of AI change the narrative around addressing harms caused by AI?

The decolonial position is that we need to look at and grapple with the structures and systems of power that originated from the colonial era and still exist today. Continents were impoverished so that others could become wealthier, and the resulting economic gaps are still playing out.

In the AI context, there seems to be consensus even at Stanford that AI produces some harms, and there’s a tendency among prominent AI practitioners and startups to think we can fix those harms by getting better data. But the decolonial view would say that even if a large, very precise dataset is available, that dataset is a digital representation of society as it is: The producers of the data have biases that are difficult to remove from the data, as do those who make decisions regarding what data to collect, how to collect it, and what to use it for. So having more data doesn’t solve anything.

Also, artificial intelligence systems don’t build and deploy themselves. They come into being through a set of decisions, starting with AI designers identifying a problem to solve, and funders deciding who gets the capital to build and deploy their AI systems.

If the skill of building AI is concentrated among a group of people — say, western-educated engineers — then their views are shaping which problems AI will be used to solve and which will not. And if the capital is also concentrated among a specific group of people — say, Silicon Valley venture capitalists — then their views are shaping which AI systems are built and deployed globally. And it all happens without considering the objectives of people from other nations or backgrounds who are nevertheless being impacted by the global reach of the tech monopolies. This is essentially economic colonialism in a new guise. So that process — defining problems to solve with AI and deciding which ones to fund and deploy — could and should be decolonized. 

Ultimately, to address the harms caused by AI we must do more than collect more and better data. We must consider how power has been taken from people who have been historically marginalized and continues to be taken by global corporations. And we need to ask: What can be done differently once the problem is reframed in this way?

Why does the decolonial movement address AI specifically rather than, say, digital technologies more generally?

AI’s power lies in the trifecta of scale, automation, and the belief system that says AI will yield the rational and correct outcome. Because AI can be deployed at scale to automate decision making, and because people today seem to trust decisions that we’ve delegated to machines, we end up yielding an awful lot of power to AI.

In every epoch, people orient their lives around a respected source of “truth.” In the distant past, it might have been the Greek oracles that helped people decide whether to go to war. Later, when the emphasis was on God and religion, people in Europe pursued colonization as an expression of God’s will. As with manifest destiny or the Spanish Inquisition, religion was a way to justify taking resources from others: It’s what God has chosen us to do. And then, when western philosophy shifted away from religion, rationality became a way to justify who should have what, who should be ruled, and how resources and power should be distributed.

And today, AI sort of plays the same role. When we assume AI does the right, correct, or logical thing, we can point to it as a justification to hire or not hire someone, or to give or deny someone a loan. It almost frees us from responsibility. We’re giving that power away to the system.

And if we’re not careful, we can increasingly rely on AI to justify continued economic colonization just as we relied on Christianity or religion, perhaps in bad faith, in the name of manifest destiny.

In what sense is decolonizing AI a movement, and can you give some examples of what activists are doing in support of the movement?

In Latin America and Africa, almost simultaneously, there has been a sort of awakening around decolonizing AI. And people have become organized around this idea. Universities in the western world are also now staying on top of the decolonial AI movement by offering fellowships, hosting workshops, and publishing work. 

In Latin America, Tierra Común is a major hub for meetings around decolonizing data, with the aim of gathering resources and promoting activists and thinkers to effect change. They meet once a month or so, they publish newsletters, and they plan workshops.

In Africa there’s a group called Masakhane that has set a goal of decolonizing not only AI but science generally. They meet often and do their own AI research to create new AI systems by Africans for Africans. For example, they are working on developing natural language processing capabilities in multiple low-resourced East African languages. Their focus is on encouraging local people to be the ones creating their own finished products, as opposed to waiting for companies like Google or Microsoft to create AI systems for them. This year, they will be launching a fund to support local creators of AI systems. So, they have a very practical approach to decolonizing science and AI.

And people are also bridging the gap between the continents. The Latin American community is reaching out to the African community to form partnerships with the shared goal of decentering the West as the voice of reason about what technology should be developed and what its purpose should be. 

AI has historically been dominated by groups that reinforce the same biases that are already present in society. The goal is to move away from that toward capturing a more realistic view of the world that includes concerns that many in the Global South consider relevant and important.

Is the decolonial AI movement focused exclusively on marginalized communities in the Global South, or is there a way in which all of us, even privileged Silicon Valley residents, are being colonized by or dominated by this technology?

We’re not just talking about marginalized communities. There’s a Cameroonian philosopher named Achille Mbembe who says that the injustices experienced by Black people in the colonial era and the monopolistic extraction of resources and people from colonized nations are now the prototype of future oppressions globally. This is a framing that covers not just colonialism’s impact on historically marginalized groups, but the nature of monopolistic companies’ existence and reach.

Consider fingerprinting technology, which originated in South Africa for surveilling, monitoring, and mapping the Black population. That same technology was then further refined and eventually put to use in eugenics programs and in Nazi Germany. And now, we’re all dealing with the prospect of facial recognition being used to surveil all of us.

And if we’re seeing patterns of AI colonialism happening in sub-Saharan Africa, then they’re also happening in Asia, Latin America, and the United States, whether it’s through our data being used in ways that impact personal privacy or through commodification of automated AI decision making with its potential harms. That’s just the nature of that system: It’s not enough to feed on the historically marginalized. 

And one thing this points out clearly for me is that colonization was never about race or domination per se. It was about capitalism. It was a financial decision. During the precolonial period, Europeans and Africans had mutual admiration for each other. But when the opportunity came to capitalize on resources extracted from the African continent, then racism arose to justify economic greed.

The big global monopolies are profit-driven, have too much influence on society, and aren’t accountable to anyone but themselves. And because of their power, they change how we look at the world and how we experience it, especially as more of that experience becomes digital. By taking a decolonial view, we see this. And perhaps it can be changed.

What can people do to counteract AI colonization? 

If we look at how countries decolonized in the last century, we see that the movements for independence deployed a variety of approaches: Political approaches, nonviolent activism, and in some cases violent struggle were all used. Similarly, a variety of approaches may be required as we think about how to decolonize AI. 

Are we suggesting changing a few rules here and there, or breaking up the corporations, or perhaps something more extreme like changing the entire economic system and the incentives that uphold the power that global companies have? Might one even imagine an extreme case where the African continent says, in essence, “We’re cutting off access to big non-African corporations and building our own ecosystems?” Is that one way forward? 

A diversity of ideas exists even within the decolonizing movement. And so the idea emerges that we can have many worlds existing in one, as expressed in the Decolonial AI Manyfesto. We’re saying we have to address these outstanding problems in a way that is realistic and direct. And in doing so we can find our own ways.

Sabelosethu Mhlambi explains the movement to decolonize AI during a HAI Weekly Seminar.

