
Timnit Gebru discusses the path to community-rooted AI research at a recent Stanford Center for African Studies event. | Rod Searcey

For years, computer scientist Timnit Gebru has been voicing concerns about fairness in AI. She pointed out the field’s lack of diversity in 2015 and, two years later, responded by co-founding the organization Black in AI. While a postdoctoral researcher at Microsoft, she investigated and wrote about bias in facial recognition software. After her term at Microsoft, she went on to co-lead a Google team focused on the ethics of artificial intelligence. There she raised a number of red flags, and in December 2020 she was publicly and controversially fired.

“What I’ve realized is that we can talk about the ethics and fairness of AI all we want, but if our institutions don’t allow for this kind of work to take place, then it won’t,” Gebru says. “At the end of the day, this needs to be about institutional and structural change. If we had the opportunity to pursue this work from scratch, how would we want to build these institutions?”

Speaking at a recent event hosted by the Center for African Studies (CAS) in the School of Humanities and Sciences and co-sponsored by Stanford HAI, Gebru answered her own question. She discussed the animating principles of her organization, the Distributed AI Research Institute (DAIR), which supports independent, community-rooted AI research and currently prioritizes work that benefits Black people in Africa and across the African diaspora. CAS is hosting a year-long speaker series on tech in Africa.

“If we want AI that benefits our communities,” Gebru says, “then what kind of processes should we follow?”

Spread the Power

Two weeks before leaving Google, Gebru recalls, she hired a colleague from Morocco who eagerly tried to raise awareness about the abuses of social media in his home country. He talked with others about the Moroccan government’s use of social media to harass citizens and journalists. He described the imprisonment of his friends. But nothing changed.

“How can we talk about ethics or responsibility when we have these companies that can simply say, ‘Sorry, I don’t care about Morocco,’ ” Gebru says. “Even at places like Stanford, we have too much concentrated power that is impacting the world, and yet the world has no opportunity to affect how technology is being developed.”

Foundational to DAIR’s work is an effort to fracture this concentration of power and build instead a decentralized, local base of expertise. For example, Raesetje Sefala, a DAIR research fellow who is using computer vision to analyze the geographic evolution of apartheid in South Africa, is based in Johannesburg and comes from the Lebowakgomo township in South Africa’s Limpopo province. “This is a very personal project for her,” Gebru says.
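Sefala’s project frames this analysis as a computer vision task over imagery of South African neighborhoods. As a purely illustrative sketch, not DAIR’s actual pipeline, the Python snippet below shows one way such a tile-classification step might look; the model, the labels (“township” vs. “suburb”), and the 64x64 tile size are all hypothetical.

```python
# Illustrative only: a toy PyTorch model that classifies satellite-image
# tiles by neighborhood type. Architecture, labels, and tile size are
# hypothetical assumptions, not DAIR's actual code.
import torch
import torch.nn as nn

class TileClassifier(nn.Module):
    """Tiny CNN mapping an RGB image tile to neighborhood-type logits."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dimensions
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# Usage: a batch of four 64x64 RGB tiles -> one logit per class.
model = TileClassifier()
tiles = torch.randn(4, 3, 64, 64)
print(model(tiles).shape)  # torch.Size([4, 2])
```

In practice, labeled tiles and local ground truth would be essential to any such system, which is exactly the kind of contextual knowledge Gebru argues must stay in-community.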

Likewise, Meron Estifanos, also a research fellow, has spent her career advocating for refugees in the Sinai. She has no background in the technical details of AI but understands deeply both how AI systems harm refugees and how social media is used as a tool of surveillance and harassment. (She herself is often subject to this harassment.)

“Typically, some researcher would come in, extract her knowledge on this subject, and then go publish papers. And would Meron get the fame and fortune for these things? No,” Gebru says. “I want to build my institute so that she is a research fellow, she gets paid what she needs to get paid, and it is her name on the work.”

This approach devolves power to those who often don’t have it; it prevents brain drain by keeping local experts on the ground; and it counters what Gebru considers the dangerous machine learning standard of building single, universal models.

“There is a push for generality in machine learning,” she says. “I’m totally against that: we have to work context by context, community by community.”

"If we want AI that benefits our communities, then what kind of processes should we follow?" asks AI expert Timnit Gebru to an audience at the Stanford Center for African Studies. | Rod Searcey

Community, Not Exploitation

“One of the biggest issues in AI right now is exploitation,” Gebru says. In an office building on the outskirts of Nairobi, Kenya, for example, roughly 200 men and women work as subcontracted content moderators for Meta (formerly Facebook), reviewing endless reels of violent media. In return, they are paid as little as $1.50 per hour, and many suffer from mental trauma.

Content moderators are but one instance of this exploitation. Many people who annotate data, she says, are refugees who cannot advocate for themselves and are poorly paid.

DAIR is leading on this front not only by paying its workers fair wages but also by supporting healthier research norms more broadly. Gebru describes the field of AI as defined by constant stress and relentless pressure to publish on tight deadlines. She is focused instead on promoting balance in her employees’ lives and on slowing the metabolism of AI research. Rather than sprinting toward publications or tenure, she wants to pursue questions without immediate payback.

Finally, building an effective research community means that neither individual egos nor unexamined conventions take priority. “This work not only doesn’t exploit communities but it must be willing to uncover the harms of AI without fear of persecution,” Gebru says. “I tried to do that at Google, and look what happened.”

A New Origin Story

The evolutionary history of AI usually traces back to the military. The technology behind self-driving cars, for instance, is rooted in the Defense Advanced Research Projects Agency’s (DARPA) pursuit of autonomous weapons systems. From there it moved into the hands of corporations interested in maximizing profits. Only as an afterthought do people question the social value of these innovations.

It’s as though we build a tank, commercialize it, and then start to ask if it can be retrofitted to serve agricultural ends. “But if we want to build technology that helps people, we need to start with the process,” Gebru says. “Let’s not build a tank in the first place.”

Beyond a reengineered process, she is adamant that AI doesn’t belong everywhere, despite the fatalistic way in which it is often discussed. She cited Professor Chris Gilliard, who likes to remind people that when we discovered the damaging health effects of asbestos, we didn’t deem its use inevitable; we regulated it. Likewise, technology may feel inevitable, but it is not.

“Just because it’s out there doesn’t mean we can’t get rid of it,” Gebru says. “If we see something harmful, we should be able to say no.”

In the end, Gebru repeatedly voiced skepticism about the utopian vision promised by today’s major tech companies. Referring to the work of journalist Karen Hao, she wondered why we might expect the fruits of AI to be distributed equitably when no technology in history has moved smoothly from “the bastions of power to the have-nots” — not the internet or electricity, not clean water or transportation.

“We shouldn’t just assume that the concentration of power in the AI space is OK, that the benefits will trickle down, that we’ll have techno utopia arriving soon,” Gebru says. Rather than waiting for the future that big tech promises us tomorrow, she suggested we jointly create the world that we want today.

Watch the full conversation:

“Disrupting Big Tech: Independent, Community-Rooted AI Research Focused on Africa and the African Diaspora” took place on May 4 at Stanford University. The event, “Centering Africa - CAS Annual Lecture,” was hosted by the Stanford Center for African Studies in the School of Humanities and Sciences and co-sponsored by Stanford HAI, Stanford PACS, the Stanford McCoy Family Center for Ethics in Society, Stanford Seed, and Stanford Engineering.

 
