
HAI In the News

March 9, 2023

HAI affiliate faculty member Chelsea Finn discusses the valid uses for tools like ChatGPT, the concerns when they’re used for nefarious purposes, and the ways to spot AI text.

February 2, 2023

The hype around generative AI prompts discussion of risks, such as bad actors propagating disinformation, says HAI Co-Director Fei-Fei Li.

January 31, 2023

A new study with HAI Associate Director Daniel E. Ho documents systemic discrimination in how the IRS selects taxpayers to be audited, with implications for a debate on the agency’s funding.

January 27, 2023

While OpenAI has captured the public’s imagination with ChatGPT, ultimately the technology may not change the balance of power among the tech giants. Center for Research on Foundation Models Director Percy Liang calls for more transparency.

January 19, 2023

CRFM’s Percy Liang explains foundation models, key findings of benchmarking project HELM, and gaps between public and private models on this episode of the podcast The Data Exchange.

January 18, 2023

Erik Brynjolfsson, Stanford Digital Economy Lab faculty director, says ChatGPT “will get rid of a lot of routine, rote type of work and at the same time people using it may be able to do more creative work.”

January 14, 2023

Jennifer King, privacy and data policy fellow at Stanford HAI, discusses the risks of the chat tool and anticipates its proliferation despite them.

December 6, 2022

Expect a rush of new AI tools in 2023, but likely ones that will hit the market without much thought given to their business models or societal impact, says HAI Associate Director Russ Altman.

December 2, 2022

Axios reports on an HAI event for journalists that focused in part on the critical role humans play in the development and deployment of AI.

November 28, 2022

AI’s automatic writing tools are making it easier for students to cheat. Associate Director Rob Reich says tech companies and AI developers must agree to self-regulate.

November 17, 2022

HAI’s Center for Research on Foundation Models launches Holistic Evaluation of Language Models (HELM), the first benchmarking project aimed at improving the transparency of language models and the broader category of foundation models.

November 16, 2022

The HAI fall conference challenged attendees to think about what it means to put humans at the center of AI, rather than just “in the loop.”