Image: A woman talks to her smartphone. Artificial intelligence technology designers and developers should create products that are accessible and available to all, not just the privileged. | iStock/AH86

Artificial intelligence tools can complete our emails, transcribe our meetings, and personally tailor how we learn a new language. But these technologies aren’t built for all.

“These tools that we’re building to improve human life are being targeted to more privileged populations, leaving underserved populations out of the benefits,” said Jeff Hancock, founding director of the Stanford Social Media Lab and the Harry and Norman Chandler Professor of Communication at Stanford University. “Designers, builders, and developers need to start thinking about these other communities and how they can be served.”

In a recently published study in Computers in Human Behavior, Hancock and his research team examined the gap between the availability and the accessibility of AI-mediated communication tools, which enable interpersonal communication assisted by an intelligent agent. The researchers hypothesized that adoption of the technology would be positively associated with access, socio-economic factors such as education and annual income, and literacy with AI-mediated communication tools.

The Inequities of AI-Mediated Communication Tools

Hancock, an affiliate of the Stanford Institute for Human-Centered AI, defines artificial intelligence-mediated communication as any interpersonal communication that is modified, augmented, or generated by an intelligent agent. That includes auto-complete features in email, voice assistants like Siri or Alexa, and even auto-correct functions in text messages.

To better understand how Americans are using these tools, Hancock and his team conducted an online survey using the crowdsourcing platform Amazon Mechanical Turk. They surveyed 519 adults between the ages of 19 and 74, all with at least a high school diploma or GED and across a range of annual incomes.

The survey asked participants to assess their literacy with six types of AI tools: voice-assisted communication (Amazon Alexa, Apple’s Siri, Google Home, Google Assistant, etc.); personalized language learning (Rosetta Stone, Babbel, Duolingo, ELSA Speak, Memrise, etc.); transcription (Otter.ai, Trint, Sonix, Temi, NaturalReader, Dragon, Apple Dictation, etc.); translation (Google Translate, Linguee, etc.); predictive text suggestion (email and message replies, sentence completion); and language correction (auto-correct, spell and grammar check, proofreading). The survey asked about their familiarity with these tools, their comfort in using them, and their confidence in doing so. It also asked how easily they could access the tools and what barriers stood in the way of their use.

The Hidden Inequality

The team found that AI-mediated communication technology is “not a monolith”: the six categories were not used or experienced equally. The most widely used tools among the study participants were voice-assisted communication (91.9 percent), language correction (91.8 percent), predictive text suggestion (80.5 percent), and translation (70.2 percent). The least used were personalized language learning (57.2 percent) and transcription tools (41.3 percent).

Drilling down, the team found that device and internet access, age, user speech characteristics, and AI tool literacy all shaped adoption. They saw, for example, that younger, digital-native users were more likely to use AI, particularly transcription, while translation tools were more often adopted by those with higher education and lower family income. Their findings also suggest that English speakers with accents struggled more with voice-assisted communication, translation, and speech-to-text transcription than English speakers without accents.

“Sadly, as we might expect, people with lower amounts of income and people with lower levels of education were much less likely to know about these technologies and use or engage with them in their lives,” said Hancock. “It looks like these tools, if not targeted, are being used by wealthier, more educated people, so these underserved populations are much less likely to use such AI-based tools than more privileged populations.”

The researchers note that the study participants were not perfectly representative of the U.S. population and that future research should focus on underrepresented groups. Hancock sees serving these populations as both an opportunity and a social imperative.

“It’s really important that people making AI tools need to actively consider diverse populations that may have somewhat different needs, but needs nonetheless,” he said. “It’s an opportunity as well as the right thing to do.”

