News | Stanford HAI

All HAI Media Mentions

The Economist
Media Mention
Fei-Fei Li Says Understanding How The World Works Is The Next Step For AI
Nov 20

Stanford HAI co-director Fei-Fei Li says the next frontier in AI lies in advancing spatial intelligence. In this op-ed, she explains how enabling machines to perceive and interact with the world in 3D can unlock human-centered AI applications for robotics, healthcare, education, and beyond.

The Stanford Daily
Media Mention
At Five Years Old, Institute For Human-Centered AI Looks To The Future
Nov 05

This year, Stanford HAI celebrated the fifth anniversary of its founding. Looking ahead, the institute aims to continue fostering a better understanding of AI's impacts on society.

Tech Brew
Media Mention
Are Open-Source AI Models Worth The Risk?
Oct 31

Rishi Bommasani, Society Lead at HAI's Center for Research on Foundation Models (CRFM), discusses where AI is proving most dangerous, why openness matters, and how regulators are thinking about the divide between open and closed models.

KQED
Media Mention
Stanford AI Model Helps Locate Racist Deeds In Santa Clara County
Oct 21

Stanford's RegLab, directed by HAI Senior Fellow Daniel E. Ho, developed an AI model that helped Santa Clara County accelerate the process of flagging and mapping racially restrictive covenants.

Bloomberg Law
Media Mention
AI Seeks Out Racist Language in Property Deeds for Termination
Oct 17

Dan Ho, HAI Senior Fellow and director of the Stanford RegLab, discusses RegLab's AI model that analyzes decades of property records, helping to identify illegal racially restrictive language in housing documents.

TIME
Media Mention
I Launched the AI Safety Clock. Here’s What It Tells Us About Existential Risks
Oct 13

Despite huge advancements in machine learning and neural networks, AI systems still depend on human direction. This article references HAI's 2022 conference where attendees were encouraged to rethink AI systems with a “human in the loop” and consider a future where people remain at the center of decision making.

Forbes
Media Mention
The 12 Greatest Dangers Of AI
Oct 09

AI expert Gary Marcus references an HAI study showing that LLM responses to medical questions vary widely and are often inaccurate.

Forbes
Media Mention
OpenAI Fast-Tracks AI Agents. How Do We Balance Benefits With Risks?
Oct 04

Peter Norvig, Distinguished Education Fellow at Stanford HAI, comments on how limiting an AI agent's budget, transaction times, and capabilities can help agents "operate safely within defined boundaries."

CNN
Media Mention
On GPS: The Birth Of Modern Artificial Intelligence
Sep 01

Fareed Zakaria speaks with “Godmother of AI” Fei-Fei Li about her journey as a computer scientist and how it shaped the development of modern AI.

Forbes
Media Mention
How AI Can Affect Intellectual Property And What It Means For Leaders
Aug 20

This article cites Stanford HAI AI Index data on copyright infringement involving AI models and creative works.

The Economic Times
Media Mention
Real AI Threats Are Disinformation, Bias, And Lack Of Transparency: Stanford’s James Landay
Jul 30

James Landay, Co-Founder of Stanford HAI, says the real harms of AI are disinformation, deepfakes, discrimination, and job displacement, of which relatively little has happened yet.

San Francisco Examiner
Media Mention
Wiener’s Bill To Avert Catastrophic AI Harms Draws Plenty Of Fire
Jul 30

HAI Co-Director James Landay comments on Senate Bill 1047, which concerns safety-testing AI models before they are rolled out to the public.

Tech Times
Media Mention
Beyond Algorithms: The Human Faces Driving Machine Learning Forward
Jul 25

HAI Co-Director Fei-Fei Li is recognized for her commitment to ethical AI and interdisciplinary research, continuing to shape the future of AI development and application.

Forbes
Media Mention
Despite Fears, AI Continues Integrating Into Industries
Jul 23

This article gives an overview of the sectors and business processes affected by AI tool adoption, including healthcare, retail, hiring, and education, citing AI Index report data on a surge in fundraising for generative AI companies since 2022.

WIRED
Media Mention
Meta’s New Llama 3.1 AI Model Is Free, Powerful, And Risky
Jul 23

With the release of Meta's Llama 3.1, Percy Liang, Director of CRFM and Senior Fellow at Stanford HAI, comments on how users of other commercial AI tools could shift to Llama 3.1.

MIT Technology Review
Media Mention
AI Companies Promised To Self-Regulate One Year Ago. What’s Changed?
Jul 22

CRFM Society Lead Rishi Bommasani comments on the lack of clarity on what has changed in the year since major AI companies adopted the White House's set of eight voluntary commitments on how to develop AI in a safe and trustworthy way.

Bloomberg Law
Media Mention
AI Boot Camps Offer to Help Congress Navigate New Technology
Jul 19

This article discusses the Stanford HAI Congressional Boot Camp on AI and the strategic positioning of universities to help lawmakers understand the power of AI and set guardrails for AI's wide reach.

Forbes
Media Mention
What AI Is The Best? Chatbot Arena Relies On Millions Of Human Votes
Jul 18

Vanessa Parli, HAI Director of Research Programs, explains the importance of evaluation methods when it comes to AI benchmarking, noting the significance of assessing traits like "bias, toxicity, truthfulness, and other responsibility aspects."
