In January 2026, Utah announced a first-of-its-kind pilot program allowing an autonomous artificial intelligence (AI) agent to renew prescriptions for consumers who request it. The state agreed not to enforce its unprofessional conduct laws against the developer, Doctronic, if the company adheres to a contract that includes safety and privacy protections. The pilot program includes 192 drugs for chronic conditions. Although physicians will initially validate the AI’s actions, the pilot program will swiftly become one of the first at-scale deployments of an autonomous, agentic system in medicine. The announcement prompted concern from associations of physicians and pharmacists, who opined that AI “should NOT be making care decisions.”
Health insurers and health care provider organizations are increasingly using artificial intelligence (AI) tools in prior authorization and claims processes. AI offers many potential benefits, but its adoption has raised concerns about the role of the “humans in the loop,” users’ understanding of AI, the opacity of algorithmic determinations, underperformance on certain tasks, automation bias, and unintended social consequences. To date, institutional governance by insurers and providers has not fully met the challenge of ensuring responsible use. Drawing on empirical work on AI use and our own ethical assessments of provider-facing tools as part of the AI governance process at Stanford Health Care, we examine why utilization review has attracted so much AI innovation and why it is challenging to ensure responsible use of AI. We conclude with several steps that could be taken to help realize the benefits of AI use while minimizing risks.
This methodological paper presents the Global AI Vibrancy Tool, an interactive suite of visualizations designed to facilitate cross-country comparisons of AI vibrancy using indicators organized into pillars. The tool offers customizable features that enable users to conduct in-depth country-level comparisons and longitudinal analyses of AI-related metrics.

The Institute aims to appoint and support promising researchers through its fellowship programs.

New in this year’s report are in-depth analyses of the evolving landscape of AI hardware, novel estimates of inference costs, and new analyses of AI publication and patenting trends. We also introduce fresh data on corporate adoption of responsible AI practices, along with expanded coverage of AI’s growing role in science and medicine.
Learn more about our Faculty Affiliate program. Stanford faculty are encouraged to participate.
View research opportunities across HAI's programs, centers, labs, and initiatives.

Stanford scientists have released an open-source platform that lets health researchers study the “screenome” – the digital traces of our daily lives – while protecting participants’ privacy.

An Amazon-backed fellowship will support 10 Stanford PhD students whose work explores everything from how we communicate to understanding disease and protecting our data.

A Stanford HAI workshop brought together experts to develop new evaluation methods that assess AI's hidden capabilities, not just its test-taking performance.

QuantiPhy is a new benchmark and training framework that evaluates whether AI can reason numerically about physical properties in video. It reveals that today’s models struggle with basic estimates of size, speed, and distance, but it offers a way forward.
Stanford, ETH Zurich, and EPFL will develop open-source foundation models that prioritize societal values over commercial interests, strengthening academia's role in shaping AI's future.
This year, affinity group topics include accessibility for individuals with disabilities, artistic creation, education, healthcare, journalism, workforce productivity, and more.

A cross-disciplinary group of Stanford students explores fresh approaches to human-centered AI.
Stanford HAI and the Wu Tsai Neurosciences Institute jointly seek proposals that transform our understanding of the human brain using AI and advance the development of intelligent technology.
The Hoffman-Yee Research Grants are designed to address significant scientific, technical, or societal challenges requiring an interdisciplinary team and a bold approach.
These grants are made possible by a gift from philanthropists Reid Hoffman and Michelle Yee.