As Large Language Models (LLMs) become increasingly integrated into our everyday lives, understanding their ability to comprehend human mental states becomes critical for ensuring effective interactions. However, despite recent attempts to assess the Theory-of-Mind (ToM) reasoning capabilities of LLMs, the degree to which these models align with human ToM remains an open question, for two main reasons: (1) inconsistent results from previous evaluations, and (2) concerns about the validity of existing evaluation methodologies. To address these challenges, we present a novel framework for procedurally generating evaluations with LLMs by populating causal templates. Using our framework, we create a new social reasoning benchmark (BigToM) for LLMs, which consists of 25 controls and 5,000 model-written evaluations. We find that human participants rate the quality of our benchmark higher than previous crowd-sourced evaluations and comparable to expert-written evaluations. Using BigToM, we evaluate the social reasoning capabilities of a variety of LLMs and compare model performance with human performance. Our results suggest that GPT-4 has ToM capabilities that mirror human inference patterns, though less reliably, while other LLMs struggle.
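The core idea, procedurally populating a causal template, can be illustrated with a minimal sketch. The class and slot names below (agent, desire, percept, causal_event) are illustrative assumptions, not the paper's actual code; in BigToM itself an LLM fills the template slots, whereas this sketch fills them by hand simply to show how a false-belief test item falls out of the causal structure.

```python
# Hypothetical sketch of the causal-template idea behind BigToM.
# A scenario is assembled from causal slots, and the belief question
# is derived from which events the agent did or did not observe.
from dataclasses import dataclass


@dataclass
class CausalTemplate:
    agent: str         # who the story is about
    desire: str        # what the agent wants
    percept: str       # what the agent initially observes
    causal_event: str  # a change the agent does NOT observe

    def scenario(self) -> str:
        return (
            f"{self.agent} wants {self.desire}. "
            f"{self.agent} sees that {self.percept}. "
            f"While {self.agent} is away, {self.causal_event}."
        )

    def false_belief_question(self) -> str:
        # The agent missed the causal event, so their belief should
        # still track the initial percept, not the current world state.
        return f"{self.scenario()} Does {self.agent} believe that {self.percept}?"


item = CausalTemplate(
    agent="Noor",
    desire="to make a latte with oat milk",
    percept="the pitcher is filled with oat milk",
    causal_event="a coworker refills the pitcher with almond milk",
)
print(item.false_belief_question())
# Expected answer under a false-belief reading: yes, because Noor
# never observed the swap.
```

Because every item is generated from the same underlying causal graph, swapping a single slot (for example, letting the agent witness the causal event) yields a matched control condition, which is what makes the 25 controls in the benchmark systematic rather than hand-curated.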

Six interdisciplinary research teams received a total of $3 million to pursue groundbreaking ideas in the field of AI.

With the release of Meta's Llama 3.1, Percy Liang, Director of CRFM and Senior Fellow at Stanford HAI, comments on how users may shift from other commercial AI tools to Llama 3.1.
Stanford HAI Senior Fellow Daniel E. Ho comments on his research on legal hallucinations in large language models and the viability of using similar models for judicial interpretation.

A new study adapts large language models to summarize clinical documents, showing a promising path for AI to improve clinical workflows and patient care.

In risk modeling, AI researchers take a more-is-better approach to training data, but a new study argues that a less-is-more approach may be preferable.
