How Culture Shapes What People Want from AI

Stanford researchers explore how to build culturally inclusive and equitable AI by offering initial empirical evidence on cultural variations in people’s ideal preferences about AI. 


If AI agents could play an active role in our human social life rather than simply operating in the background, would that be desirable? The answer may depend on your cultural perspective. When a team of Stanford researchers applied cultural psychology theory to study what people want from AI, they found clear associations between the model of agency prevalent in a cultural context and the type of AI its members consider ideal.

Today, the prevailing view in AI development often assumes that people desire control over the technology, treating AI as a tool in service of individual goals and concerns. It is an impersonal, hierarchical relationship. But Stanford psychology researcher Xiao Ge and postdoctoral researcher Chunchen Xu say this is not how people everywhere think about AI; rather, it reflects the cultural model of agency that is prevalent in many European American middle-class cultural contexts.

According to their research, a broader look across diverse cultural groups suggests that many people envision a different role for AI. Some imagine AI with a greater capacity to influence its surroundings (for example, AI with emotions and autonomy), including intelligent machines that can act spontaneously and participate in people's social situations.

“There is an urgent need to incorporate the perceptions, imaginings, concerns, and creativity of diverse groups in future AI developments. We want to enable AI stakeholders to increase representation of different worldviews in the design and use of AI, so that it can fulfill the needs of wider segments of the population,” Ge explains.

Drawing on independent and interdependent cultural models, and the results of two online surveys, the team has developed a theoretical framework for understanding people’s ideal relationship with AI, which they presented in a new paper, “How Culture Shapes What People Want from AI.” Supported by the Stanford Institute for Human-Centered AI’s Seed Grant Program, this work is spurring important conversations about the role of culture in defining mainstream conceptions of AI. 

The Foundations of Culture in HCI 

Many human-computer interaction (HCI) studies have investigated the impact of technology on people in different countries; however, few researchers to date have tried to “flip the conversation” to look at how culture can affect AI design or how AI products reflect cultural ideas.

“When HCI researchers consider culture, it tends to be at the later stages of development—for example, in terms of usability or user interface design. But our findings suggest that cultural factors may even shape the initial creation and design of technology as well as what designers imagine its potential benefits and outcomes to be,” says Jeanne Tsai, professor of psychology, director of the Stanford Culture and Emotion Lab, and one of the paper’s co-authors.

Read the full study, How Culture Shapes What People Want from AI

 

To arrive at a deeper understanding of culture and AI, the researchers applied a well-established cultural psychology framework for depicting variations in how different cultures tend to view “the self” and its relation to surrounding environments. In the independent model, individuals view themselves as unique and separate from others and the socio-physical context. By contrast, the interdependent model holds that everyone is connected fundamentally to other humans, as well as to their physical and social environments.

This framework also captures whether individuals want and expect the environment to influence them; researchers refer to this factor as the environment having “capacities to influence.” People in some cultures tend to see the environment as a source of influence that guides their thoughts, feelings, and behaviors, while others are less inclined to view their surroundings as an active source of agency.

“Using these two dimensions can help us discover ideal human-AI interactions according to different cultures, and they can be quite different than those that immediately come to mind in our predominantly individualist, middle-class contexts,” says co-author Hazel Rose Markus, professor of psychology and faculty director of the Stanford SPARQ behavioral science center.

Through the Lens of Two Cultural Models 

Applying this cultural psychology framework, prior studies in behavioral science have established:

  • People in European American cultural contexts tend to embrace an independent model of agency and see the person as a bigger source of influence than the environment. This cultural model represents people as more active, alive, capable, and in control than their environments. Furthermore, people will aim to change their environments to be more consistent with their preferences, desires, and beliefs.
  • People in Chinese cultural contexts tend to favor an interdependent model of agency. Accordingly, they may view the boundaries between people and their surrounding environments as permeable and malleable. In this context, people may conceptualize the social and physical environment as encompassing them and prefer that the environment be more active, alive, and capable of exerting influence on people.
  • People in African American cultural contexts adopt elements of both cultural models, and their preferences may be shaped by their experience of switching between predominantly independent contexts and predominantly Black contexts, which are often more interdependent.

Against this backdrop, the Stanford researchers hypothesized that European Americans would seek control over AI more than Chinese respondents would, while Chinese participants would seek connection with AI more than European Americans would. And, if prior patterns held, African Americans' preferences for controlling and connecting with AI would lie between those of the European American and Chinese groups.

Similarly, on the question of the environment influencing the individual, the team expected European Americans to be less likely to want AI to have influencing characteristics and Chinese participants to be more likely to prefer them, with African Americans' preferences falling between the two.

Testing Theoretical Assumptions

To test these hypotheses, the team first ran a survey to confirm that the three cultural groups do in fact adopt different models of the self and its relationship to the environment. A total of 373 participants viewed seven variations of a graphic representing the relationship between the self and the environment, ranging from “The environment strongly influences the person” to “The person strongly influences the environment,” and selected the picture that best described their ideal balance between the two.

The pictorial 7-point scale (only points 1, 4, and 7 are shown for illustration purposes) used to measure the ideal level and direction of influence between the self and the environment in the pilot study. 1 = “The environment strongly influences the person,” 7 = “The person strongly influences the environment.”

As expected, results of this study revealed cultural differences in the ideal level and direction of influence between the self and the environment.

Based on these initial findings, the team examined how these cultural models affected people’s preferences about AI. They fielded a study in which 348 participants read a short description of AI and then saw one of six randomly assigned scenarios of different AI applications in home management, well-being, teamwork, education, wildfire conservation, and manufacturing contexts. (For example, one scenario read: “Imagine that in the future a well-being management AI is developed to gather information about people’s physical and mental health conditions. It makes customized predictions and decisions to improve people’s well-being management.”) 

In the last step of the study, participants answered a series of questions about their preferences for AI in an ideal situation. The questions were designed to align with the independent and interdependent cultural models, as well as with core HCI characteristics such as AI's autonomy and its perceived emotionality.
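To make the design of these group comparisons concrete, here is a minimal, hypothetical sketch of the kind of analysis such ratings allow: simulated 7-point responses for a single item, compared across the three cultural groups with a one-way ANOVA. The item wording, group means, sample sizes, and the ANOVA itself are illustrative assumptions, not the authors' actual data, measures, or statistical methods.

```python
# Illustrative sketch only: simulated ratings, NOT the study's data or analysis code.
# Assumes a 7-point agreement item such as "It is important for me to control the AI."
import numpy as np
import pandas as pd
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Hypothetical group means, chosen only to mimic the direction of the reported pattern
# (European Americans and African Americans rating control as more important than Chinese).
samples = {
    "European American": rng.normal(5.8, 1.0, 120),
    "African American": rng.normal(5.6, 1.0, 110),
    "Chinese": rng.normal(4.9, 1.0, 118),
}

# Clip simulated values to the 1-7 scale and build a tidy table of (group, rating) rows.
df = pd.DataFrame(
    [(group, float(np.clip(x, 1, 7))) for group, xs in samples.items() for x in xs],
    columns=["group", "control_importance"],
)

# Descriptive statistics per cultural group.
print(df.groupby("group")["control_importance"].agg(["mean", "std", "count"]))

# One-way ANOVA: do the group means differ more than chance alone would suggest?
result = f_oneway(*(df.loc[df["group"] == g, "control_importance"] for g in samples))
print(f"F = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```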

Finding Evidence in Support of the Hypotheses

After analyzing the data, the research team suggested that culture shapes both people's conceptions of what it means to be human and what they desire in their interactions with AI. Specifically, compared with European Americans, Chinese participants regarded it as less important to control AI but more important to feel a sense of connection with AI. European Americans, meanwhile, preferred AI to have less capacity to influence: less autonomy, spontaneity, and emotion.

African Americans aligned with European Americans in wanting control over AI, but they fell between European Americans and Chinese in their desire to connect with AI. African Americans' preferences for the optimal level of AI's influencing capacity also fell between those of European Americans and Chinese, as predicted. Notably, the researchers found that while Chinese participants placed the lowest importance of the three groups on having control over AI, their average score still reflected a desire for some control over the technology.

“There is a gold rush underway to optimize every urban function, from education to healthcare to banking, but there’s a serious lack of reflection and understanding of how culture shapes these conceptions,” says Ge. “Our work is filling an important gap in the literature, as well as in the practice of AI development.” 

The team acknowledges several limitations inherent in this preliminary approach that could be explored in future studies: 

  • Sample sizes for the surveys were relatively small.
  • The definition of AI was intentionally very broad. Understanding how people feel about specific AI—chatbots or decision algorithms, for example—could provide even more insight.
  • The study didn’t examine whether people’s reported preferences align with actual interactions with AI.

As a next step, they would like to focus on establishing the reliability and validity of the new measures they devised to capture people’s ideal models of self in relation to their environments. 

Contributions to the Field of HCI

According to co-authors Markus and Tsai, these findings present exciting new insights for the field of human-computer interaction. With this work, the team shows that it is possible to develop rigorous and systematic empirical approaches to examine culturally shaped preferences about AI purposes, forms, and functions. They also shed light on the importance of recognizing the implicit cultural defaults that are built into current models of human-computer interaction. From the perspective of many Western contexts, it is hard to imagine agency as shared or as outside the person. Outside Western contexts, this is possible, even obvious. 

“If we continue to rely on preexisting cultural models, we are likely to limit creativity and the potential of AI to improve the human condition across the globe,” says Xu. On the other hand, it is easy to imagine that if developers begin to rethink human agency and tap into a wider variety of cultural ideas, a new era of innovation could unfold, broadening AI’s potential societal and environmental benefits. 
