Despite the rapid adoption of LLM chatbots, little is known about how they are used. We approach this question theoretically and empirically, modeling a user who chooses whether to complete a task herself, ask the chatbot for information that reduces decision noise, or delegate execution to the chatbot...
AI coding agents now complete multi-hour coding benchmarks with roughly 50% reliability, yet a randomized trial found that experienced open-source developers took about 19% longer when allowed to use frontier AI tools than when such tools were disallowed...
HAI Weekly Seminar
The world we live in is inherently compositional: just as a sentence is built from phrases and words, a visual scene comprises a collection of interacting objects and entities, which in turn are derived from the sum of their parts. This compositionality plays a critical role in our ability to understand the world, organize acquired knowledge through a rich set of concepts, and readily adapt that knowledge to novel situations and environments. Indeed, it is considered one of the fundamental building blocks of human intelligence. How can we incorporate such compositionality into AI models? How can we encourage neural networks to develop a semantic understanding of their surroundings? And how can we leverage the emerging structured knowledge to improve performance on downstream tasks such as question answering or image generation? These are the questions this talk will explore. I will present models for multi-step synthesis of, and reasoning over, multi-object scenes, describe their key design principles and underlying mechanisms, and illustrate the benefits they offer in terms of enhanced controllability, increased data efficiency, and improved interpretability of their internal representations and reasoning process.
PhD Student in Computer Science, Stanford University