The widespread deployment of AI systems in critical domains demands more rigorous approaches to evaluating their capabilities and safety.
2025 Spring Conference
In an era when information is treated as a form of power and self-knowledge as an unqualified good, the value of what remains unknown is often overlooked.
Software has been "eating the world" for the last ten years. In the last few years, a new phenomenon has started to emerge: machine learning is eating software. That is, machine learning is radically changing how one builds, deploys, and maintains software, leading some to use the loosely defined phrase "Software 2.0." Rather than being conventionally programmed, Software 2.0 systems often accept high-level domain knowledge or are programmed simply by feeding them copious amounts of data. We describe the foundational challenges that these systems present, including a theory of weak supervision, guiding self-supervised systems, and high-level abstractions to monitor these systems over time. This builds on our experience with systems including Snorkel, Overton, and Bootleg, which are in use in flagship products at Google, Apple, and many other companies.
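To make "programming with high-level domain knowledge" concrete, the sketch below uses Snorkel-style labeling functions: a few heuristics assign noisy labels to unlabeled text, and a label model combines them into probabilistic training labels. This is a minimal sketch loosely following Snorkel's public labeling API; the tiny corpus, the spam/ham task, and the particular heuristics are illustrative assumptions, not material from the talk.

import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, HAM, SPAM = -1, 0, 1

# Tiny illustrative corpus; a real pipeline would use a large unlabeled set.
df_train = pd.DataFrame({"text": [
    "check out http://example.com for free prizes",
    "thanks, see you tomorrow",
    "WIN money now http://spam.example",
    "lunch at noon?",
]})

@labeling_function()
def lf_contains_link(x):
    # Heuristic: messages containing URLs are likely spam.
    return SPAM if "http" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_money_words(x):
    # Heuristic: promotional vocabulary suggests spam.
    return SPAM if any(w in x.text.lower() for w in ("win", "free", "prize")) else ABSTAIN

@labeling_function()
def lf_short_message(x):
    # Heuristic: very short messages tend to be benign.
    return HAM if len(x.text.split()) <= 4 else ABSTAIN

# Apply the labeling functions to produce a noisy label matrix...
applier = PandasLFApplier([lf_contains_link, lf_money_words, lf_short_message])
L_train = applier.apply(df_train)

# ...then let the label model denoise and combine them into training labels.
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=200, seed=0)
probs = label_model.predict_proba(L_train)
print(probs)

The resulting probabilistic labels would then train a downstream model, so the "program" is the set of heuristics rather than hand-labeled examples.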
Associate Professor of Computer Science, Stanford University