Machine Learning | Stanford HAI
All Work Published on Machine Learning

AI Seeks Out Racist Language in Property Deeds for Termination
Bloomberg Law
Oct 17, 2024
Media Mention

Dan Ho, HAI Senior Fellow and director of the Stanford RegLab, discusses RegLab's AI model that analyzes decades of property records, helping to identify illegal racially restrictive language in housing documents.

Machine Learning
Regulation, Policy, Governance
Foundation Models
Law Enforcement and Justice
I Launched the AI Safety Clock. Here’s What It Tells Us About Existential Risks
TIME
Oct 13, 2024
Media Mention

Despite huge advancements in machine learning and neural networks, AI systems still depend on human direction. This article references HAI's 2022 conference where attendees were encouraged to rethink AI systems with a “human in the loop” and consider a future where people remain at the center of decision making.

Machine Learning
Generative AI
On GPS: The Birth Of Modern Artificial Intelligence
CNN
Sep 01, 2024
Media Mention

Fareed Zakaria speaks with “Godmother of AI” Fei-Fei Li about her journey as a computer scientist and how it shaped the development of modern AI.

Computer Vision
Robotics
Machine Learning
Stanford HAI Announces Hoffman-Yee Grants Recipients for 2024
Nikki Goth Itoi
Aug 21, 2024
Announcement

Six interdisciplinary research teams received a total of $3 million to pursue groundbreaking ideas in the field of AI.

Design, Human-Computer Interaction
Healthcare
Natural Language Processing
Machine Learning
Meta’s New Llama 3.1 AI Model Is Free, Powerful, And Risky
WIRED
Jul 23, 2024
Media Mention

With the release of Meta's Llama 3.1, Percy Liang, Director of CRFM and Senior Fellow at Stanford HAI, comments on how users of other commercial AI tools may shift to Llama 3.1.

Generative AI
Natural Language Processing
Machine Learning
Foundation Models
TextGrad: AutoGrad for Text
Federico Bianchi, James Zou
Jun 19, 2024
News

Scholars develop a new framework that optimizes compound AI systems by backpropagating large language model feedback.

Machine Learning