HAI affiliate faculty member Chelsea Finn discusses valid uses for tools like ChatGPT, concerns about their use for nefarious purposes, and ways to spot AI-generated text.
A new study with HAI Associate Director Daniel E. Ho documents systemic discrimination in how the IRS selects taxpayers to be audited, with implications for a debate on the agency’s funding.
While OpenAI has captured the public’s imagination with ChatGPT, ultimately the technology may not change the balance of power among the tech giants. Center for Research on Foundation Models Director Percy Liang calls for more transparency.
CRFM’s Percy Liang explains foundation models, key findings of benchmarking project HELM, and gaps between public and private models on this episode of the podcast The Data Exchange.
Erik Brynjolfsson, Stanford Digital Economy Lab faculty director, says ChatGPT “will get rid of a lot of routine, rote type of work and at the same time people using it may be able to do more creative work.”
Expect a rush of new AI tools in 2023, but many will likely hit the market without much thought given to their business models or societal impact, says HAI Associate Director Russ Altman.
In this article, Axios reports on an HAI event for journalists that focused in part on the critical role humans play in the development and deployment of AI.
AI’s automatic writing tools are making it easier for students to cheat. Associate Director Rob Reich says tech companies and AI developers must agree to self-regulate.
HAI’s Center for Research on Foundation Models launches Holistic Evaluation of Language Models (HELM), the first benchmarking project aimed at improving the transparency of language models and the broader category of foundation models.