HAI Seminar with Sanmi Koyejo
Beyond Benchmarks | Building a Science of AI Measurement
The widespread deployment of AI systems in critical domains demands more rigorous approaches to evaluating their capabilities and safety. While current evaluation practices rely on static benchmarks, these methods face fundamental challenges in efficiency, reliability, and real-world relevance. This talk presents a path toward a measurement framework that bridges established psychometric principles with modern AI evaluation needs. We demonstrate how techniques from Item Response Theory, amortized computation, and predictability analysis can substantially improve the rigor and efficiency of AI evaluation. Through case studies in safety assessment and capability measurement, we show how this approach can enable more reliable, scalable, and meaningful evaluation of AI systems. This work points toward a broader vision: evolving AI evaluation from a collection of benchmarks into a rigorous measurement science that can effectively guide research, deployment, and policy decisions.
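For readers unfamiliar with Item Response Theory, the basic idea can be illustrated with a minimal two-parameter-logistic (2PL) scoring sketch. The example below is illustrative only and reflects assumptions made here, not the speaker's implementation: benchmark items are treated as IRT items with known discrimination and difficulty parameters, and a system's "ability" is estimated from its pattern of correct and incorrect answers.

```python
# Minimal 2PL Item Response Theory sketch (illustrative assumptions only).
# Rows of data = AI systems, columns = benchmark items; entries are 1/0 for
# correct/incorrect. Item parameters a (discrimination) and b (difficulty)
# are assumed known here; in practice they would be estimated from data.
import numpy as np
from scipy.optimize import minimize_scalar

def p_correct(theta, a, b):
    """2PL response probability: sigmoid(a * (theta - b))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def estimate_ability(responses, a, b):
    """Maximum-likelihood ability estimate for one system's 0/1 responses."""
    def neg_log_lik(theta):
        p = p_correct(theta, a, b)
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    return minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x

# Toy example: five items of increasing difficulty, one system's responses.
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])   # discrimination
b = np.array([-1.0, 0.0, 0.5, 1.0, 2.0])  # difficulty
responses = np.array([1, 1, 1, 0, 0])
print(f"Estimated ability: {estimate_ability(responses, a, b):.2f}")
```

Under this kind of model, items carry different amounts of information about a system's ability, which is one route to the efficiency gains the abstract alludes to: a well-chosen subset of items can yield nearly the same ability estimate as the full benchmark.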