Wolfgang Lehrach | Code World Models for General Game Playing
While Large Language Models (LLMs) show promise in many domains, relying on them for direct policy generation in games often results in illegal moves and poor strategic play.
In this talk, I present an approach that moves away from direct prompting, instead using LLMs as program synthesizers to bridge the gap between natural language rules and symbolic world models. The LLM receives a game description and example trajectories, and outputs an executable Code World Model (CWM) represented as a Python program. The trajectories serve to verify that the rules are correctly captured and to guide refinement of the CWM when they are not. Notably, even trajectories containing only a single player's observations and actions can help validate and refine CWMs, and such partially observed trajectories also allow candidate CWMs to be compared via a bound on the likelihood.
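As a concrete illustration (the class and method names below are assumptions for exposition, not the talk's actual interface), an LLM-synthesized CWM for a game like tic-tac-toe might be an ordinary Python class exposing the state, the legal actions, and a transition function; example trajectories can then be replayed through it to check that the synthesized rules reproduce the observed play:

```python
# Hypothetical sketch of an LLM-synthesized Code World Model (CWM) for
# tic-tac-toe, plus a trajectory-replay check. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class TicTacToeCWM:
    board: list = field(default_factory=lambda: [" "] * 9)
    player: str = "X"

    def legal_actions(self):
        # Any empty cell is a legal move for the player to act.
        return [i for i, c in enumerate(self.board) if c == " "]

    def step(self, action):
        assert action in self.legal_actions(), "illegal move"
        self.board[action] = self.player
        self.player = "O" if self.player == "X" else "X"

    def winner(self):
        lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
                 (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
        for a, b, c in lines:
            if self.board[a] != " " and self.board[a] == self.board[b] == self.board[c]:
                return self.board[a]
        return None

def validate(cwm_cls, trajectories):
    """Replay each observed (action, resulting board) pair through a fresh
    CWM instance; any mismatch means the rules were mis-captured and the
    candidate model should be refined."""
    for traj in trajectories:
        cwm = cwm_cls()
        for action, expected_board in traj:
            if action not in cwm.legal_actions():
                return False
            cwm.step(action)
            if cwm.board != expected_board:
                return False
    return True
```

A failed `validate` call is exactly the signal that would be fed back to the LLM to trigger another round of refinement.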
Given a CWM, Monte Carlo Tree Search (MCTS) or Reinforcement Learning (RL) methods can play the game, and gameplay can be further strengthened with LLM-synthesized value functions. Imperfect-information games are handled either by having the LLM synthesize inference functions that impute information sets, or by training reinforcement learning policies directly on top of the CWM.
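To make the planning step concrete, here is a hedged sketch in which a plain Monte Carlo rollout planner (a deliberately simplified stand-in for full MCTS) chooses moves by simulating an executable world model. The toy Nim-style CWM and every name here are illustrative assumptions, not the talk's implementation:

```python
# Simplified stand-in for MCTS: evaluate each legal action by average
# return over random rollouts simulated through the CWM.
import copy
import random

class NimCWM:
    """Toy CWM: two players alternately remove 1-3 sticks; whoever takes
    the last stick wins."""
    def __init__(self, sticks=10, player=0):
        self.sticks, self.player = sticks, player
        self.win = None  # index of the winning player once the game ends

    def legal_actions(self):
        if self.win is not None:
            return []
        return [n for n in (1, 2, 3) if n <= self.sticks]

    def step(self, n):
        self.sticks -= n
        if self.sticks == 0:
            self.win = self.player
        self.player = 1 - self.player

    def winner(self):
        return self.win

def rollout_value(cwm, me, rng):
    """Finish the game with uniformly random moves; return +1/-1 from
    player `me`'s perspective."""
    while cwm.legal_actions():
        cwm.step(rng.choice(cwm.legal_actions()))
    return 1 if cwm.winner() == me else -1

def choose_action(cwm, n_rollouts=500, seed=0):
    """Pick the legal action with the highest mean rollout return."""
    rng = random.Random(seed)
    me = cwm.player
    scores = {}
    for a in cwm.legal_actions():
        total = 0
        for _ in range(n_rollouts):
            sim = copy.deepcopy(cwm)  # simulate on a copy of the CWM state
            sim.step(a)
            total += rollout_value(sim, me, rng)
        scores[a] = total / n_rollouts
    return max(scores, key=scores.get)
```

In the talk's setting, the uniformly random rollout policy would be replaced by MCTS search statistics, an RL policy trained against the CWM, or an LLM-synthesized value function.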