How policymakers can best regulate AI to balance innovation with public interests and human rights.
The AI Index, currently in its ninth year, tracks, collates, distills, and visualizes data relating to artificial intelligence.
"Countries pursue AI sovereignty with four main objectives in mind: cultural autonomy, national security, economic competitiveness, and regulatory oversight," says Juan N. Pava, Stanford HAI Research Fellow.
In January 2026, Utah announced a first-of-its-kind pilot program allowing an autonomous artificial intelligence (AI) agent to renew prescriptions for consumers who request it. The state agreed not to enforce its unprofessional conduct laws against the developer, Doctronic, if the company adheres to a contract that includes safety and privacy protections. The pilot program includes 192 drugs for chronic conditions. Although physicians will initially validate the AI’s actions, the pilot program will swiftly become one of the first deployments at scale of an autonomous, agentic system in medicine. The announcement prompted concern from associations of physicians and pharmacists who opined that AI “should NOT be making care decisions.”
We welcome proposals for research projects that tackle important challenges and opportunities in this space from either a technical or social science perspective, with findings that can generate policy insights and recommendations.

This brief examines the privacy risks foundation models pose to individuals and society, and governance mechanisms needed to address them.
Stanford HAI Executive Director Russell Wald discusses world models, saying many policymakers or government officials need to better understand the technologies and their implications.
Health insurers and health care provider organizations are increasingly using artificial intelligence (AI) tools in prior authorization and claims processes. AI offers many potential benefits, but its adoption has raised concerns about the role of “humans in the loop,” users’ understanding of AI, the opacity of algorithmic determinations, underperformance on certain tasks, automation bias, and unintended social consequences. To date, institutional governance by insurers and providers has not fully met the challenge of ensuring responsible use. Drawing on empirical work on AI use and our own ethical assessments of provider-facing tools as part of the AI governance process at Stanford Health Care, we examine why utilization review has attracted so much AI innovation and why ensuring responsible use of AI is challenging. We conclude with several steps that could be taken to help realize the benefits of AI use while minimizing risks.

This brief proposes governance mechanisms for the growing use of AI in health insurance utilization review.

"If you’re following AI news, you’re probably getting whiplash. AI is a gold rush. AI is a bubble. AI is taking your job. AI can’t even read a clock. The 2026 AI Index from Stanford University’s Institute for Human-Centered Artificial Intelligence, AI’s annual report card, comes out today and cuts through some of that noise."