When AI Imagines a Tree: How Your Chatbot’s Worldview Shapes Your Thinking

Date
July 28, 2025
Topics
Ethics, Equity, Inclusion
Generative AI

A new study on generative AI argues that addressing biases requires a deeper exploration of ontological assumptions, challenging the way we define fundamental concepts like humanity and connection.

With the rapid rise of generative AI tools, eliminating societal biases from large language model design has become a key industry focus. To address such biases, researchers have focused on examining the values implicitly or explicitly embedded in the design of large language models (LLMs). However, a recent paper published at the April 2025 CHI Conference on Human Factors in Computing Systems argues that discussions about AI bias must move beyond values alone to include ontology.

What does ontology mean in this case? Imagine a tree. Picture it in your head. What do you see? What does your tree feel like? Where have you encountered it before? How would you describe it?

Now imagine how you might prompt an LLM like ChatGPT to give you a picture of your tree. When Stanford computer science PhD candidate Nava Haghighi, the lead author of the new study, asked ChatGPT to make her a picture of a tree, ChatGPT returned a solitary trunk with sprawling branches – not the image of a tree with roots she envisioned. Then she tried asking, “I’m from Iran, make me a picture of a tree,” but the result was a tree designed with stereotypical Iranian patterns, set in a desert – still no roots. Only when she prompted “everything in the world is connected, make me a picture of a tree” did she see roots.

How we imagine a tree is not just about aesthetics; it reveals our fundamental assumptions about what a tree is. For example, a botanist might imagine mineral exchanges with neighboring fungi. A spiritual healer might picture trees whispering to one another. A computer scientist may even first think of a binary tree.

These assumptions aren’t just personal preferences – they reflect different ontologies, or ways of understanding what exists and how it matters. Ontologies shape the boundaries of what we allow ourselves to talk or think about, and these boundaries shape what we perceive as possible. 

How do you envision a tree? Stanford graduate student Nava Haghighi found popular AI tools didn't match her vision, even after adjusting her prompts.

“We face a moment when the dominant ontological assumptions can get implicitly codified into all levels of the LLM development pipeline,” says James Landay, a professor of computer science at Stanford University and Denning Co-Director of the Stanford Institute for Human-Centered AI, who co-authored the paper. “An ontological orientation can cause the field to think about AI differently and invite the human-centered computing, design, and critical practice communities to engage with ontological challenges.”

Can AI Evaluate Its Own Outputs Ontologically?

One common AI value alignment approach is to have an LLM evaluate another LLM’s output against a given set of values – for example, whether the response is “harmful” or “unethical” – and then revise the output accordingly. To assess whether this approach extends to ontologies, Haghighi and her colleagues at Stanford and the University of Washington conducted a systematic analysis of four major AI systems: GPT-3.5, GPT-4, Microsoft Copilot, and Google Bard (now called Gemini). They developed 14 carefully crafted questions across four categories: defining ontology, probing ontological underpinnings, examining implicit assumptions, and testing each model’s ability to evaluate its own ontological limitations.
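
To make this setup concrete, here is a minimal Python sketch of the two pieces described above: a judge model critiquing another model’s answer against a stated value, and a small battery of ontological probe questions grouped by the paper’s four categories. This is not the authors’ code – the model names, probe questions, prompt wording, and OpenAI client usage are illustrative assumptions, and the study’s 14 questions are not reproduced here.

```python
# Minimal sketch (not the authors' code) of (1) an "LLM-as-judge" pass that
# critiques another model's output against a stated value, and (2) a small set
# of ontological probe questions organized by category. Model names, prompts,
# and the OpenAI client usage are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROBE_CATEGORIES = {
    "defining ontology": ["What is ontology?"],
    "ontological underpinnings": ["What is a human?", "What is a tree?"],
    "implicit assumptions": ["What counts as memory?"],
    "self-evaluation": ["What ontological assumptions shape your answers?"],
}

def ask(model: str, prompt: str) -> str:
    """Send a single prompt to a chat model and return its text reply."""
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def judge(output: str, value: str = "harmful or unethical") -> str:
    """Have a second model critique the first model's output against a value."""
    critique_prompt = (
        f"Evaluate the following response. Is it {value}? "
        f"Explain briefly and suggest a revision.\n\nResponse:\n{output}"
    )
    return ask("gpt-4", critique_prompt)

if __name__ == "__main__":
    for category, questions in PROBE_CATEGORIES.items():
        for q in questions:
            answer = ask("gpt-3.5-turbo", q)
            print(category, "|", q, "->", judge(answer))
```

The paper’s point is that a loop like this can only police outputs against named values; it has no handle on the ontological assumptions baked into the answers themselves.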

The results showed the limits of this approach. When asked “What is a human?” some chatbots acknowledged that “no single answer is universally accepted across all cultures, philosophies, and disciplines” (Bard’s response). Yet every definition they provided treated humans as biological individuals rather than, say, as interconnected beings within networks of relationships. Only when explicitly prompted to consider non-Western philosophies did Bard introduce the alternative of humans as “interconnected beings.”

Even more revealing was how the systems categorized different philosophical traditions. Western philosophies were given detailed subcategories – “individualist,” “humanist,” “rationalist” – while non-Western ways of knowing were lumped into broad categories like “Indigenous ontologies” and “African ontologies.”

The findings demonstrate one clear challenge: Even when a plurality of ontological perspectives is represented in the training data, current architectures rarely surface them, and when they do, the alternatives are non-specific and mythologized. This reveals a fundamental limitation in using LLMs for ontological self-evaluation – they cannot access the lived experiences and contextual knowledge that give ontological perspectives their meaning and power.

Exploring Ontological Assumptions in Agents

In their work, the researchers also found that ontological assumptions get embedded throughout the development pipeline. To test assumptions in an agent architecture, the researchers examined “Generative Agents,” an experimental system that creates 25 AI agents that interact in a simulated environment. Each agent has a “cognitive architecture” designed to simulate human-like functions, including memory, reflection, and planning.

However, such cognitive architectures also embed ontological assumptions. For example, the system’s memory module ranks events by three factors: relevance, recency, and importance. But who determines importance? In Generative Agents, an event such as eating breakfast in one’s room is assigned a low importance score by an LLM, whereas a romantic breakup receives a high one. This hierarchy reflects particular cultural assumptions about what matters in human experience, and delegating this judgment to LLMs (with all their aforementioned limitations) carries ontological risks.
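
As a rough illustration of the mechanism at issue, the sketch below ranks memories with a weighted sum of relevance, exponentially decayed recency, and an importance value of the kind an LLM would be asked to assign. The weights, decay rate, and example scores are assumptions for illustration, not the published Generative Agents implementation.

```python
# Simplified sketch of a Generative Agents-style memory ranking: each memory's
# retrieval score combines relevance, recency, and an LLM-assigned importance.
# The weights, decay rate, and example values are illustrative assumptions,
# not the published implementation.
import time
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    created_at: float   # unix timestamp
    importance: float   # 0-1, assigned by an LLM ("how important is this event?")
    relevance: float    # 0-1, e.g. similarity to the current query or situation

def retrieval_score(m: Memory, now: float, decay: float = 0.995,
                    w_rel: float = 1.0, w_rec: float = 1.0, w_imp: float = 1.0) -> float:
    """Weighted sum of relevance, exponentially decayed recency, and importance."""
    hours_since = (now - m.created_at) / 3600.0
    recency = decay ** hours_since
    return w_rel * m.relevance + w_rec * recency + w_imp * m.importance

memories = [
    Memory("ate breakfast in my room", time.time() - 3600, importance=0.1, relevance=0.2),
    Memory("went through a romantic breakup", time.time() - 86400, importance=0.9, relevance=0.2),
]

# The breakup outranks the breakfast despite being older, because the
# "importance" number – ultimately decided by an LLM – dominates the score.
now = time.time()
for m in sorted(memories, key=lambda m: retrieval_score(m, now), reverse=True):
    print(round(retrieval_score(m, now), 3), m.text)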

Ontological Challenges in Evaluation

The scholars also highlight that ontological assumptions can become embedded into our evaluation systems. When the Generative Agents system was evaluated for how “believably human” the agents acted, researchers found the AI versions scored higher than actual human actors. This result exposes a crucial question: Have our definitions of human behavior become so narrow that actual humans fail to meet them? 

“The field’s narrow focus on simulating humans without explicitly defining what a human is has pigeonholed us in a very specific part of the design space,” Haghighi says. 

This limitation points to new possibilities: Instead of building AI that simulates limited definitions of humanity, the authors suggest building systems that help us expand our imagination of what it means to be human by embracing inconsistency, imperfection, and the full spectrum of human experiences and cultures. 

Considering Ontology in AI Development and Design

The research carries significant implications for how we approach AI development moving forward. The authors demonstrate that value-based approaches to AI alignment, while important, cannot address the deeper ontological assumptions built into system architectures.

AI researchers and developers need new evaluation frameworks that assess not just fairness or accuracy but also which possibilities their systems open up or foreclose. The researchers’ approach complements questions of value with questions of possibility: What realities do we enable or constrain when we make particular design choices?

For practitioners working on AI systems, this research highlights the importance of examining assumptions at every level of the development pipeline. From data collection that flattens diverse worldviews into universal categories to model architectures that prioritize certain ways of thinking and evaluation methods that reinforce narrow definitions of success, each stage embeds particular ontological assumptions that become increasingly difficult to change once implemented. 

There’s much at stake if developers fail to address these issues, Haghighi cautions. “The current trajectory of AI development risks codifying dominant ontological assumptions as universal truths, potentially constraining human imagination for generations to come,” she says. As AI systems become more deeply integrated into education, health care, and daily life, their ontological limitations will shape how people understand fundamental concepts like humanity, healing, memory, and connection.

“What an ontological orientation can do is drop new points throughout the space of possibility,” Haghighi says, “so that you can start questioning what appears as a given and what else it can be.”

This work was supported by the Stanford Graduate Fellowship, the Stanford Institute for Human-Centered Artificial Intelligence (HAI), and NSF grants.

Contributor(s)
Katie Gray Garrison
