We examine the prevalence and productivity dynamics of artificial intelligence (AI) in American manufacturing. Working with the Census Bureau to collect detailed large-scale data for 2017 and 2021, we focus on AI-related technologies with industrial applications.
While Large Language Models (LLMs) show promise in many domains, relying on them for direct policy generation in games often results in illegal moves and poor strategic play.

The possibility that AI will automate most cognitive labor is worth taking seriously. How should we adapt to this transformation? I start from the perspective, articulated in the essay “AI as normal technology”, that the true bottlenecks lie downstream of capabilities and that AI’s impacts will unfold gradually over decades. If this is true, there are major gaps in our current evidence infrastructure, because it over-emphasizes the capability layer.
Machine learning (ML) and AI systems are becoming integral to every aspect of our lives. As these technologies make more decisions for us, and the underlying ML systems become increasingly complex, it is natural to ask: How can I trust machine learning? In this talk, Carlos Ernesto Guestrin will present a framework anchored on three pillars—clarity, competence and alignment—for driving increased trust in ML. For clarity, Guestrin will cover methods to make the predictions of machine learning more explainable. For competence, he will focus on means for evaluating and testing ML models with the same rigor we apply to software products. For alignment, Guestrin will describe the challenges of aligning the behaviors of an AI with the values we want to reflect in the world, along with methods that can yield more aligned outcomes. The discussion will touch on both algorithmic and human processes that can help lead to AIs that are more effective, impactful and trustworthy.
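One widely used family of explainability and model-testing methods of the kind the abstract alludes to is permutation importance: measure how much a model's error grows when a single feature's values are shuffled across examples. The sketch below is purely illustrative and is not taken from the talk; the toy `model`, the data, and all names are assumptions for demonstration.

```python
import random

# Hypothetical toy model: a hand-written linear scorer standing in for any
# trained ML model (an assumption for illustration, not from the talk).
def model(x):
    # Weights chosen so feature 0 matters far more than feature 1.
    return 3.0 * x[0] + 0.1 * x[1]

def mse(model, X, y):
    """Mean squared error of the model over a dataset."""
    return sum((model(x) - yi) ** 2 for x, yi in zip(X, y)) / len(X)

def permutation_importance(model, X, y, feature, trials=20, seed=0):
    """Average increase in error when one feature's column is shuffled.

    A large increase suggests the model relies heavily on that feature,
    which is one simple probe of both clarity and competence.
    """
    rng = random.Random(seed)
    base = mse(model, X, y)
    increases = []
    for _ in range(trials):
        col = [x[feature] for x in X]
        rng.shuffle(col)
        X_perm = [list(x) for x in X]
        for row, v in zip(X_perm, col):
            row[feature] = v
        increases.append(mse(model, X_perm, y) - base)
    return sum(increases) / trials

# Toy dataset: labels generated by the same linear rule the model encodes.
X = [[i, (i * 7) % 5] for i in range(30)]
y = [3.0 * a + 0.1 * b for a, b in X]

imp0 = permutation_importance(model, X, y, 0)
imp1 = permutation_importance(model, X, y, 1)
print(imp0 > imp1)  # feature 0 should dominate
```

Because the toy labels are generated by the same rule the model encodes, shuffling the heavily weighted feature degrades accuracy far more than shuffling the lightly weighted one, which is exactly the signal this kind of probe is meant to surface.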
Professor of Computer Science, Stanford University