
Neema Iyer: Digital Extractivism in Africa Mirrors Colonial Practices

The founder of Pollicy and fellow at Stanford PACS discusses the impacts of digital extractivism and strategies for mitigating its harms.

*Illustration of a map of Africa with digital technologies overlaid*


In the summer of 2020, popular Oromo musician and Ethiopian civil rights activist Hachalu Hundessa was assassinated in Addis Ababa by a nationalist group with a history of violent attacks. In the hours following Hundessa’s death, hate speech against the country’s ethnic and religious groups blanketed Facebook, inciting a wave of violence that took the lives of hundreds and ultimately led the government to impose a full internet shutdown.

What went wrong? As Facebook whistleblower Frances Haugen disclosed, employees had warned managers about widespread hate speech and the potential for violence in Ethiopia, telling leaders that “current mitigation strategies are not enough.” But at the time of Hundessa’s death, Facebook had not yet built an AI classifier to identify hate speech in the two most widely spoken languages in Ethiopia. Activists and human rights groups believe that this platform-level failing bears some responsibility for the violence in the wake of Hundessa’s murder. 

Platform-level neglect is one of the many forms of digital extractivism described by Neema Iyer in her 2021 paper, Automated Imperialism, Expansionist Dreams: Exploring Digital Extractivism in Africa. Iyer, a fellow in the Digital Civil Society Lab at Stanford PACS, is the founder and director of Pollicy, a civic technology organization based in Uganda. She currently leads the design of a number of projects focused on building data skills, fostering conversations on data privacy and digital security, and innovating around policy.

In this interview, Iyer defines digital extractivism, explains its harmful impacts, and highlights recommendations for policymakers and developers to mitigate its risks.

Your paper identifies various forms of digital extractivism observed in Africa. How do you define digital extractivism?

Extractivism refers to colonial practices that have existed over the last several hundred years with the goal of wealth and resource accumulation, regardless of the oppressions that geographically separate taker and giver. Digital extractivism is a form of extraction that’s made possible by digitization and exacerbated through borderless capitalism.

Nine Types of Digital Extractivism

- Digital Labor: Tech companies hire workers from regions such as Africa, where labor is less expensive and labor laws are weaker.
- Illicit Financial Flows: The means through which tax payments are minimized or avoided.
- Data Extraction: Tech companies exploit minimal data protection legislation to collect consumer identities, behaviors, beliefs, etc., and profit by selling that data to political players or advertisers.
- Natural Resource Mining: The Western world exploits countries’ minerals, labor, and raw materials, to be used primarily by the Western world.
- Infrastructure Monopolies: The foreign domination of digital infrastructure such as internet and cell service.
- Digital Lending: Digital lenders combine extractive financial approaches, data extractivism, and social shaming to trap or extort users.
- Funding Structures: Venture capitalists exhibit severe bias in funding African startups.
- Beta Testing: Early tests are often conducted unethically on vulnerable populations who lack informed consent; the African continent has long been used as a testing ground for experiments in medicine, for example.
- Platform Governance: Bias in platform guidelines, as well as automated AI enforcement of platform rules, which can discriminate against marginalized groups.

We identified nine forms of digital extractivism, which are in no way exhaustive. These are digital labor, illicit financial flows, data extraction, natural resource mining, infrastructure monopolies, digital lending, funding structures, beta testing, and platform governance. 

You provide historical context for today’s digital extractivism by comparing it to the extractivist practices and policies of the colonial era. How are these practices similar? 

We could equate the rise of the importance of data to the value of diamonds in South Africa in the 1860s, and to agricultural commodities in both West and Central Africa in the mid-1800s. These extractive practices forced communities to excessively mine or farm for the sole purpose of supplying external markets in Europe. The products were produced in one place, but transported a great distance away from local markets. This led to the underdevelopment of local communities that is still seen today in the form of massive economic inequality. Because of extractive practices, there is a continued legacy on the continent of weak market linkages, weak institutions, and ongoing poverty. 

Digital extractivism is similar in that foreign technology companies prevent the development of local tech ecosystems, disrupt labor markets, and cause political strife. We see this in the Democratic Republic of Congo with the increasing demand for coltan (used to produce tantalum capacitors in cell phones). And throughout the continent, we see data transfers without appropriate consent, and neglectful practices with weak platform governance that have caused immense harm.

What are some of the main findings in your research?

The sheer scale of some of the facts we came across was astounding. For instance, a white founder is 47,000% more likely to be funded in Kenya than in the United States. Chinese corporations ZTE and Huawei developed the majority of the continent’s network infrastructure: 50% of 3G systems used by African telcos were built by Huawei, and 20% to 30% were built by ZTE. Huawei has built up 70% of 4G networks and is likely to build all 5G networks.

Lendtech companies can charge interest rates ranging from 365% to 876%. Another disturbing finding was that the power asymmetry between humanitarian agencies and aid recipients blurs lines of consent, allowing vulnerable populations within African countries to be used as guinea pigs for emerging tech without any risk assessment. An example of this is the testing of biometric technologies on refugee populations, where a miscategorization could negatively impact the ability to access resources.

What did you learn that surprised you the most? 

One of the most surprising findings to me was within the chapter on illicit financial flows. I ended up reading Trade Is War by Yash Tandon and learned about the immense role tariffs play in economic growth, poverty, and human rights in our globalized world. The OECD and the United Nations Economic Commission for Africa (UNECA) estimate Africa’s annual losses to tax avoidance at $50 billion to $80 billion and $89 billion, respectively, which exceeds the value of development aid given to Africa.

With proper taxation systems, Africa could free itself from the grip of neocolonial practices of the Western world, which continue to weaken our political systems and keep us in a cycle of poverty.

Can you highlight some examples of digital extractivism where AI is employed?

One example is the invisible labor of workers in the Global South in content moderation, annotation, and labeling of datasets. Corporations with the power to deploy this AI can exploit cheaper labor in Africa while shaping impressions of their products’ capabilities. An example of this is Meta’s hiring of underpaid, inhumanely treated content moderators in Kenya to moderate African content. (See a recent Reuters story on the subject.)

Also, since AI is so reliant on data, data becomes a commodity to be extracted and sold for profit by large tech companies that need it for commercial purposes. The enrichment of non-African corporations with our data, without ensuring the return of equivalent profits, is reminiscent of colonization.

What are the risks and impacts of AI being used for digital extractivism?

The impacts are far-reaching. The misuse of user data has been linked to the unethical manipulation of wide swaths of the population, such as interference with democratic elections and protests. Related to this is the suppression of information from minority and vulnerable groups online, leading to the shadow banning of individuals on these platforms.

Another finding was that many African cities have partnered with foreign companies to collect biometric data – like fingerprints – in order to improve the accuracy of AI surveillance tools. This means that vulnerable populations could be more effectively surveilled, which is both a breach of privacy and an assault on the right to dignity. 

What policies could mitigate the harmful, extractive impacts of AI on the African population? 

Our paper highlights a number of recommendations for how to potentially tackle these extractive processes; for example, strengthening consumer rights within the continent or working within the African Continental Free Trade Area (AfCFTA), which is a free trade area among 54 of the 55 African Union nations. This could provide a regulatory framework, appropriate knowledge transfer, and ways to promote digital fair trade. 

There is also an urgent need for digital sovereignty rather than being forced to once again play by imperialist rules. The Convention on Cyber Security and Personal Data Protection of the African Union (known as the Malabo Convention) can be improved to include a deep understanding of digital sovereignty that critically analyzes the impacts of digital extractivism. This would also promote cross-border data flows, freedom of information (particularly against rampant internet shutdowns across the continent), and laws that encourage innovation.

What can AI designers and developers do to ensure their technologies are not unintentionally extractive? 

At Pollicy, we strongly believe that developers should work toward creating tech that is grounded in ethics and focused on equity. Technology is a massive resource for solidarity. Current capitalist frameworks are great for short-term growth, but harmful due to the legacy of colonialism. Preventing harmful AI practices will require risk assessments, pathways for public consultation, and Afrofeminist frameworks that center ethics. Developers need to continually research impacts while also working to create a future of beneficial tech.

There is still room on the continent for speculative futurism that is not apocalyptic in nature. There is a lot of technopessimism, but we can also make space for joy.

Stanford HAI’s mission is to advance AI research, education, policy and practice to improve the human condition.
