During her HAI Fellowship, Avila plans to prototype inclusive and gender-responsive AI systems.
To activist and international human rights lawyer Renata Avila, technology and AI are superpowers that should be deployed to address society’s biggest problems, including social inequality and climate change.
As a non-resident HAI Fellow associated with the Stanford Digital Civil Society Lab in partnership with the Center for Comparative Studies in Race and Ethnicity, Avila has gained access to the Silicon Valley and political elites she hopes will help fast-track the changes society needs. Her starting point: prototyping AI systems that are inclusive by design, feminist by default.
She has spent the last year laying the groundwork for such algorithms, and describes that work in this interview.
How can tech and AI be tapped to address society’s problems?
There is a lot of talk about the risks of technology and AI but very few people talk about using them to massively correct societal problems.
We need to be funding the institutions that will open a new era of public interest technologies and technologies with large-scale social impact.
In the social sphere, we have to produce research that leads to viable prototypes, then to viable pilot projects, and ultimately to viable, scalable products designed to reduce gender inequalities locally and globally and to advance inclusion. The goal is to massively and rapidly improve the effectiveness of institutions for citizens and to massively reduce systemic inequality.
What work have you been doing to lay the groundwork for developing inclusive algorithms?
During 2020, after the pandemic put a halt to many of my plans, I collaborated with the organization Women at the Table to recruit and cultivate a group of researchers – mostly from the global south, and mostly women – to form the Feminist AI Research Network (f<A+i>r Network) as the research arm of the A+ Alliance. The group includes economists, social scientists, data experts, lawyers, designers, community organizers and techies. We held weekly multidisciplinary virtual discussions around a list of interesting research questions about AI and inclusive algorithms.
The first objective was to understand the landscape – to learn about recent publications in this area and the kinds of conversations people are having about inclusive algorithms. And the second objective was to break down the silos and see how people from different regions and backgrounds analyze the problem and slice it into research questions that we could later answer.
Of course, the work of the A+ Alliance goes beyond research. 2021 will also be a year of intense activity in diplomacy and policy.
What are some examples of inclusive AI?
Inclusive AI is AI that is built with quality, inclusive data that takes into account gender, education, ethnicity, and all of the other economic and social differences that are sometimes determining factors for inequality. This is data that might not be useful for making money, but is extremely useful for making good policy. So the starting point of inclusive AI is to have a New Deal on data – not only to make it respectful of privacy but to make it inclusive when used for social purposes.
Inclusive AI systems that are designed with inclusive data before deployment can go one step beyond that. They will look at a system with evident inequalities and create a filtering mechanism that leads to a more equitable final outcome, taking into consideration what different kinds of people bring to the table. And with these algorithms you can then push the system to fix a problem that you’ve identified in the data.
What AI is promising is efficiency and speed. But I think we can go further. AI can be also about systems that would have been too complex to implement manually. AI allows a level of granularity of interventions we could not have in the past. So it is truly a superpower if AI is used to fix multiple inequalities at one time.
As an example, imagine a country in which the financial credit system only gives credit to owners of property, but land is 95% owned by men. That means the financial systems have been configured so that credit is unavailable to anyone who answers “no” to the question: “Do you own property?” Changing the rules to provide credit to people who don’t own property will make a difference, but actual change will be slow unless AI algorithms for evaluating credit risk are designed specifically to redress past inequalities. With such algorithms in place, governments should be able to increase women’s property ownership by a large percentage in a very short period of time. And not only will women gain access to credit; they can also have their own businesses, and resist domestic violence because no one can kick them out of their own house. It changes things in significant ways.
Similarly, in many countries, men have identification and women don’t, but ID is required for the allocation of cash transfers or public subsidies. Decision makers have therefore, for practical reasons, allocated funds to males when giving public subsidies or cash transfers to families. Studies have shown that when a woman is the head of a family or manages the household finances, she is often not receiving these benefits. The systems didn’t start with the right dataset, and the processes for providing benefits became institutionalized before automation. With an inclusive dataset that contains enough information about diverse types of beneficiaries, it’s highly likely that the automated system of conditional cash transfers will not only get faster because it’s powered by AI but will also be more feminist.
It is a fact that women own very little in most countries around the world. That’s the reality. It’s the kind of thing that could take hundreds of years to fix unless we implement systems to massively accelerate the change. And that’s the point of this: to understand the complexities of the system and get governments to activate processes that move quickly in fixing what needs to be fixed.
We want to show that these kinds of concrete things are possible and that they can improve the lives of a lot of people when applied correctly and designed in a collaborative way.
These ideas are intended for implementation by public institutions in the social sphere. That is, perhaps the governments of Guatemala or Tanzania – or other governments with limited resources – might deploy these systems to fix societal problems and produce incredible results.
Where did you get your optimistic view of what technology can accomplish?
I have loved technology since I was a little girl. Ever since I had contact with technology and computers, I always wanted to make them mine. I wanted to own them and shape them to help me do crazy things. To me it was like a superpower. If you can understand it and shape it in a way that will advance your purposes, and if you can have an active relationship with the tech you are using, the possibility of shaping these tools to achieve social change is great news.
We have been moving away from that narrative toward this nasty narrative of the helpless user or recipient, with a very, very powerful actor on the other side. And I don’t think that narrative is helping anybody. I think it is disempowering and we could miss out on what science and tech can do. We shouldn’t renounce the possibility that technologies can make our lives better and address our problems.
The vision of technology has been so corporate and so win-win that we have stopped thinking about technology as something collective and complex that will help us deal with big problems that affect all of us – from overcoming the huge challenges of the climate crisis to getting to know our societies in a comprehensive way. For example, I was reading about systems that have detected that air quality is linked to economic inequality: The poorer you are, the worse air you have to breathe. With technology, with the right data and the right systems in place, we can understand how these multiple inequalities work and be ambitious about solving them at a larger scale.
I think it is time to start a new narrative about technology: to reclaim its possibilities for society.
How is your HAI Fellowship enabling this work?
It is very interesting to be inside the “belly of the beast” because I have only been in Silicon Valley for short periods of time. All my life I have been a complete outsider to those dynamics and way of thinking.
So to me, one of the inspiring things about being inside Stanford has been the access. It enables conversations at a different level than if you are an activist or civil society researcher based in Latin America or Europe. It’s very different outside the system. You cannot see the whole picture. You just see the surface.
The other amazing aspect is direct contact with visionaries from disciplines beyond technology. From scholars to practitioners and artists, HAI brought together an amazing collection of people. Being connected with all of them is certainly shaping my work. And there is also the library – never ever before have I had access to the latest, the newest, the top knowledge being produced. Accessing it is such a privilege.
What would it mean to prototype technology for a democratic future?
We need more and different people in the design room. We need to plug in the people who are closer to the non-digital reality. Fifty percent of humanity is disconnected from the internet, and an even larger percent are excluded from technology generally. And many of the mega AI social products I’m talking about will be intended to address their needs, so it is important to understand the big picture and try to design the system without the misguided principles that left them out: extractivism, economic inequalities, racism. We need to design with better principles in mind.
And, more importantly, we need to actively involve those we want to serve and include. It shouldn’t be “about them, without them.” It can only help when designing these large-scale social interventions to have those whose lives you are experimenting with in the room so they have a say about it. It is not only the ethical thing to do, but to me it is also the thing that will make technology better, and that will make us less afraid of it.
Today, instead of having local and diverse input feeding tech, AI systems are amplifying what is going on in the world. And the world is super unequal, super discriminatory, racist, and sexist. So that cuts off the ability to dream of different principles.
With local reality feeding design and policies, we won’t just translate today into the design, we’ll translate utopia into the design. What if we feed these systems and dream of systems that are inclusive by design, feminist by default? That would be a really daring systemic change.
These prototypes we are co-creating are highly interdisciplinary and highly participatory. They might meet with resistance because things that are participatory take more time and are more complex and more expensive. But the risk of doing it the traditional way is too high. The alternative is crystallizing patriarchy 2.0: a maximized patriarchy that is homogeneous, dull, and unresponsive. That’s the tech solutionism alternative.
Why is it important to have women involved in tech?
Like many, I am convinced that people are intelligent and can shape their own tech futures. Especially young feminists. Due to the systemic inequalities shaping our societies, women have been blocked from access to so many areas. We need to take proactive, bold actions to fix it. What I’m trying to do with this project is to open a little crack in the system to get women into the decision rooms where designs and budgets are created and evaluated. To get diverse, feminist women into these systems so they are shaped better at each and every stage of tech development.
One of the things our culture loves is the tech male hero: an Elon Musk or a Steve Jobs. You see very few women whose creations are elevated to that level. I think that is systemic. It is not accidental. It’s a mix of access to resources, connections, and even teams that will trust and invest in your idea.
This is a field that is screaming for affirmative action, not just coded into the algorithms but also enabling the creation of technology by a different set of people, and that requires resources and the political will to make it happen. And I don’t see it happening in the private sector. Venture capitalists are surrounded by incentives to find the next Mark Zuckerberg and make a lot of money out of it.
So maybe the way to unlock the potential of women in this area is to think about tech not as corporate but as more collective and focused on the public sector, which has both the resources and the ability – if those in power want to – to build different and better technologies.
Maybe a group of women will code the best systems to prevent climate disaster. Or some wonderful women will solve the hunger crisis that will come after COVID. That’s the kind of tech project that is exciting to me. Taking the power of tech together with a different way of thinking to solve the challenges of our times.
Or you can make a little money and go to Mars.