
Analyzing nearly 30 million posts, Stanford scholars reveal how emotional, negative content fuels the viral spread of news on social media. Now, what to do about it?

As Large Language Models (LLMs) become increasingly integrated into our everyday lives, understanding their ability to comprehend human mental states becomes critical for ensuring effective interactions. However, despite recent attempts to assess the Theory-of-Mind (ToM) reasoning capabilities of LLMs, the degree to which these models can align with human ToM remains a nuanced topic of exploration. This is primarily due to two distinct challenges: (1) inconsistent results from previous evaluations, and (2) concerns surrounding the validity of existing evaluation methodologies. To address these challenges, we present a novel framework for procedurally generating evaluations with LLMs by populating causal templates. Using our framework, we create a new social reasoning benchmark (BigToM) for LLMs, which consists of 25 controls and 5,000 model-written evaluations. We find that human participants rate the quality of our benchmark higher than previous crowd-sourced evaluations and comparable to expert-written evaluations. Using BigToM, we evaluate the social reasoning capabilities of a variety of LLMs and compare model performance with human performance. Our results suggest that GPT-4 has ToM capabilities that mirror human inference patterns, though less reliably, while other LLMs struggle.
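The benchmark's core mechanism, populating a causal template to produce matched test items, can be illustrated with a minimal sketch. The Python below is a simplified assumption of how such population might work: the slot names, the `Scenario` class, and the story wording are hypothetical, and BigToM itself fills its templates with an LLM rather than hand-written values.

```python
from dataclasses import dataclass

# A toy causal template. The slots (agent, obj, start, end) are
# illustrative assumptions, not the exact schema used in BigToM.
STORY = ("{agent} puts the {obj} {start}. "
         "{witness_clause}, the {obj} is moved {end}. "
         "Where does {agent} think the {obj} is?")

@dataclass
class Scenario:
    agent: str
    obj: str
    start: str
    end: str

def populate(s: Scenario):
    """Expand one scenario into a matched true-belief / false-belief
    pair, so each generated item ships with its own control."""
    yield {
        "condition": "true_belief",
        "story": STORY.format(witness_clause=f"While {s.agent} watches",
                              **vars(s)),
        "answer": s.end,    # the agent observed the move
    }
    yield {
        "condition": "false_belief",
        "story": STORY.format(witness_clause=f"While {s.agent} is away",
                              **vars(s)),
        "answer": s.start,  # the agent's belief is now outdated
    }

if __name__ == "__main__":
    scenario = Scenario("Noor", "milk", "in the fridge", "to the counter")
    for item in populate(scenario):
        print(f"[{item['condition']}] {item['story']} -> {item['answer']}")
```

Because every scenario yields both a true-belief and a false-belief variant from the same causal structure, a benchmark built this way can separate genuine belief tracking from pattern matching on surface cues.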
AI expert Gary Marcus references HAI's study showing that LLM responses to medical questions vary widely and are often inaccurate.

A new collaboration demonstrates how LLMs can effectively advise those who are offering emotional support to others.

Six interdisciplinary research teams received a total of $3 million to pursue groundbreaking ideas in the field of AI.

With the release of Meta's Llama 3.1, Percy Liang, Director of CRFM and Senior Fellow at Stanford HAI, comments on how users of other commercial AI tools may shift to Llama 3.1.