
Over the last decade, AI has reshaped the commercial sector and consumer habits — think recommender systems for online shopping or streaming services. While many businesses have jumped headfirst into artificial intelligence, it’s unclear how the social and education sectors are thinking about this technology. Do they currently use it, are they interested in using it, and how? 

“This sector has an incredibly important role in the AI conversation,” said Stanford Institute for Human-Centered AI Director of Research Programs Vanessa Parli. “We wanted to see how much they’re already engaging with this technology and use that to inform new programs here at HAI.”

To that end, Stanford HAI collaborated with Project Evident to survey nonprofits and philanthropic organizations on their current and future use of AI. The survey found that half of the more than 230 respondents already use AI tools in their work, and 76% believe their organizations would benefit from using more.

“We were surprised and excited to see the opportunity gap,” Parli added. “There is a lot in the media about the significant risks of AI. I thought this sentiment would percolate into the social and education sectors, but among our respondents, there’s an actual hunger to understand how AI can create more opportunity for impact while keeping the risks in mind.”

Here Parli explains the major findings and recommendations to bridge that opportunity gap, as well as Stanford HAI’s goal to build a collaborative cohort of nonprofits, funders, and scholars. 

What was the impetus for this survey? 

The social and education sectors play a critical role in the development of AI. At Stanford HAI, we want to build programming that engages these audiences and helps educate them about AI: the benefits, the risks, and how to think about deploying these technologies in nonprofit organizations. This survey was our first step in connecting AI research with the needs of the community.

How are social organizations using AI now, and what do they want from it?

My colleague Haifa Badi-Uz-Zaman [Stanford HAI program manager] and I had heard anecdotally that nonprofits weren’t using AI; however, about 50% of our nonprofit respondents say they use these technologies, mostly for supportive work – operational processes like finance, human resources, contracting, and marketing. A good number are also using AI for mission-related work. One example here at Stanford is GeoMatch, a machine learning tool funded through the HAI Hoffman-Yee grant program that matches refugees to new homes.

Overall, respondents showed interest in AI for predictive analytics, virtual assistants, data collection/analysis, and content creation (generative AI).

What findings surprised you?

Education nonprofits seem to be ahead of the curve. Our survey indicates they are using AI significantly more than all other types of nonprofits. Perhaps that makes sense because there has been a lot of investment in this sector. 

Another thing I found interesting was the size of the opportunity gap. We asked respondents whether they believe their organization could benefit from using more AI. About 75% said it would, specifically around mission-related work.

Additionally, on the grantmakers side, we asked if these organizations have or plan to create a specific AI funding focus area. Most respondents do not have a specific technology grantmaking priority and don’t plan to create one, which means instead of bringing on special AI expertise, they will need to educate their whole organization on the risks and opportunities of these tools.

So if there’s a pretty big opportunity gap here, what are the roadblocks to adoption? 

The No. 1 challenge was concern about bias. Given the sensitive data these organizations might collect, that makes sense. Also ranking high were a lack of clarity about how AI would help the organization, a lack of AI knowledge within the organization, and concerns about cost – for these nonprofit respondents, cost ranked No. 2.

Based on these results, what do you recommend for these organizations?

On the philanthropic side, we recommend investing in the development of scalable resources for grantees. Nonprofits might not have the technical expertise to create resources in-house, or the funds to hire contractors, so that would be really useful.

Also, we recommend that nonprofits, funders, and AI researchers collaborate and experiment together. Too often, research is siloed during the development process; nonprofits don’t get the opportunity to use a tool until it is commercialized. Bringing all these groups into conversation as early as possible ensures these tools are developed in ways that align with nonprofits’ missions and a focus on equity.

For nonprofits, we advise being savvy, engaged buyers of these tools. By working with other nonprofits that use these tools and learning how bias manifests, they can define the minimum threshold technology companies must meet to address bias in their products. Essentially, this creates collective action among nonprofits, lessening the burden on any single entity to determine whether a tool has strong enough guardrails.

How will Stanford HAI support these organizations? 

Haifa and I are looking to create some of these education resources in collaboration with nonprofits and funders. We invited many of our survey respondents and other social sector organizations to a convening this week to start the conversation around what resources they might need. We also plan to develop a program to match nonprofits with some of our researchers early on in the process to collaboratively design and develop these tools.

Read the working paper here.

Interested in getting involved in this program? Email Stanford HAI Program Manager Haifa Badi-Uz-Zaman to learn more, or sign up for our email newsletter to stay on top of the latest news from HAI.