Self-Supervised Learning Of Brain Dynamics From Broad Neuroimaging Data
Related Publications
In January 2026, Utah announced a first-of-its-kind pilot program allowing an autonomous artificial intelligence (AI) agent to renew prescriptions for consumers who request it. The state agreed not to enforce its unprofessional conduct laws against the developer, Doctronic, if the company adheres to a contract that includes safety and privacy protections. The pilot program covers 192 drugs for chronic conditions. Although physicians will initially validate the AI's actions, the pilot program will swiftly become one of the first deployments at scale of an autonomous, agentic system in medicine. The announcement prompted concern from associations of physicians and pharmacists, who opined that AI "should NOT be making care decisions."
The AI Arms Race In Health Insurance Utilization Review: Promises Of Efficiency And Risks Of Supercharged Flaws
Health insurers and health care provider organizations are increasingly using artificial intelligence (AI) tools in prior authorization and claims processes. AI offers many potential benefits, but its adoption has raised concerns about the role of the "humans in the loop," users' understanding of AI, the opacity of algorithmic determinations, underperformance in certain tasks, automation bias, and unintended social consequences. To date, institutional governance by insurers and providers has not fully met the challenge of ensuring responsible use. Drawing on empirical work on AI use and our own ethical assessments of provider-facing tools as part of the AI governance process at Stanford Health Care, we examine why utilization review has attracted so much AI innovation and why it is challenging to ensure responsible use of AI. We conclude with several steps that could be taken to help realize the benefits of AI use while minimizing its risks.
This methodological paper presents the Global AI Vibrancy Tool, an interactive suite of visualizations designed to facilitate cross-country comparisons of AI vibrancy, using indicators organized into pillars. The tool offers customizable features that enable users to conduct in-depth country-level comparisons and longitudinal analyses of AI-related metrics.