HAI Weekly Seminar with Chris Re | Stanford HAI
Seminar

HAI Weekly Seminar with Chris Re

Status
Past
Date
Wednesday, January 27, 2021, 10:00 AM - 11:00 AM PST
Topics
Machine Learning

Software 2.0: Machine Learning is Changing Software

Software has been "eating the world" for the last ten years. In the last few years, a new phenomenon has started to emerge: machine learning is eating software. That is, machine learning is radically changing how one builds, deploys, and maintains software, leading some to use the loosely defined phrase Software 2.0. Rather than being conventionally programmed, Software 2.0 systems often accept high-level domain knowledge or are programmed simply by feeding them copious amounts of data. We describe the foundational challenges that these systems present, including a theory of weak supervision, guiding self-supervised systems, and high-level abstractions for monitoring these systems over time. This talk builds on our experience with systems including Snorkel, Overton, and Bootleg, which are in use in flagship products at Google, Apple, and many more.
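To give a feel for the weak supervision idea behind Snorkel: rather than hand-labeling training data, domain experts write small heuristic "labeling functions" whose noisy, overlapping votes are combined into training labels. Snorkel itself learns a generative model over the votes to estimate each function's accuracy; the sketch below substitutes a simple majority vote, and all function and label names are illustrative rather than Snorkel's actual API.

```python
# Snorkel-style weak supervision, minimal sketch: heuristic labeling
# functions vote on each example; abstentions are dropped and the
# remaining votes are combined by majority (Snorkel would instead learn
# a model of each function's accuracy and correlations).
from collections import Counter

SPAM, NOT_SPAM, ABSTAIN = 1, 0, -1

def lf_contains_link(text):
    # Heuristic: messages with URLs are often spam.
    return SPAM if "http://" in text or "https://" in text else ABSTAIN

def lf_all_caps(text):
    # Heuristic: all-caps messages are often spam.
    words = text.split()
    return SPAM if words and all(w.isupper() for w in words) else ABSTAIN

def lf_greeting(text):
    # Heuristic: messages opening with a greeting are usually legitimate.
    return NOT_SPAM if text.lower().startswith(("hi", "hello")) else ABSTAIN

LABELING_FUNCTIONS = [lf_contains_link, lf_all_caps, lf_greeting]

def weak_label(text):
    """Combine non-abstaining votes by majority; abstain if none vote."""
    votes = [v for lf in LABELING_FUNCTIONS if (v := lf(text)) != ABSTAIN]
    if not votes:
        return ABSTAIN
    return Counter(votes).most_common(1)[0][0]
```

The resulting noisy labels are then used to train an ordinary discriminative model, which is the sense in which the system is "programmed" by domain knowledge plus data rather than by code.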

Speaker
Chris Re
Associate Professor of Computer Science, Stanford University

Watch Event Recording

Event Contact
Celia Clark
celia.clark@stanford.edu
More from HAI and SDS seminars
  • Inside the 2026 AI Index Report | Stanford HAI
    Seminar, May 20, 2026, 12:00 PM - 1:15 PM

    The AI Index, currently in its ninth year, tracks, collates, distills, and visualizes data relating to artificial intelligence.

Related Events

Event

Kristina McElheran | The Rise of Industrial AI in America: Microfoundations of the Productivity J-curve(s)

May 11, 2026, 12:00 PM - 1:00 PM

We examine the prevalence and productivity dynamics of artificial intelligence (AI) in American manufacturing. Working with the Census Bureau to collect detailed large-scale data for 2017 and 2021, we focus on AI-related technologies with industrial applications.

Seminar

Wolfgang Lehrach | Code World Models for General Game Playing

May 13, 2026, 12:00 PM - 1:15 PM

While Large Language Models (LLMs) show promise in many domains, relying on them for direct policy generation in games often results in illegal moves and poor strategic play.

Event

Arvind Narayanan | Adapting to the Transformation of Knowledge Work

May 18, 2026, 12:00 PM - 1:00 PM

The possibility that AI will automate most cognitive labor is worth taking seriously. How should we adapt to this transformation? I start from the perspective, articulated in the essay "AI as normal technology", that the true bottlenecks lie downstream of capabilities and that AI's impacts will unfold gradually over decades. If this is true, there are major gaps in our current evidence infrastructure, because it over-emphasizes the capability layer.