
Stanford researchers show that although top language models cannot yet accurately diagnose children’s speech disorders, fine-tuning and other approaches could well change the game.

Foundation models are transforming artificial intelligence (AI) in healthcare by providing modular components adaptable to various downstream tasks, making AI development more scalable and cost-effective. Foundation models for structured electronic health records (EHR), trained on coded medical records from millions of patients, have demonstrated benefits including improved performance with fewer training labels and greater robustness to distribution shifts. However, questions remain about the feasibility of sharing these models across hospitals and their performance on local tasks. This multi-center study examined the adaptability of a publicly accessible structured EHR foundation model (FMSM), trained on 2.57 million patient records from Stanford Medicine. Experiments used EHR data from The Hospital for Sick Children (SickKids) and the Medical Information Mart for Intensive Care (MIMIC-IV). We assessed both adaptability via continued pretraining on local data and task adaptability compared to baselines of locally training models from scratch, including a local foundation model. Evaluations on 8 clinical prediction tasks showed that adapting the off-the-shelf FMSM matched the performance of gradient boosting machines (GBM) locally trained on all data while providing a 13% improvement in settings with few task-specific training labels. With continued pretraining on local data, FMSM required fewer than 1% of training examples to match the fully trained GBM's performance and was 60 to 90% more sample-efficient than training local foundation models from scratch. Our findings demonstrate that adapting EHR foundation models across hospitals provides improved prediction performance at lower cost, underscoring the utility of base foundation models as modular components that streamline the development of healthcare AI.

This brief explores the legal liability risks of healthcare AI tools by analyzing the challenges courts face in dealing with patient injury caused by defects in AI or software systems.


Teams across Stanford Health Care's Technology organization came together to build "ChatEHR", a privacy-preserving and practical GenAI tool that could serve as a model for other health systems.

Artificial intelligence (AI) is transforming the medical imaging of adult patients. However, its use in pediatric oncology imaging remains constrained, in part due to the inherent scarcity of data on childhood cancers. Pediatric cancers are rare, and imaging technologies are evolving rapidly, so there is often insufficient data of a particular type to effectively train these algorithms. The small market size of pediatric patients compared with adult patients may also contribute to this challenge, as market size is a driver of commercialization. This review provides an overview of the current state of AI applications in pediatric cancer imaging, including applications for medical image acquisition, processing, reconstruction, segmentation, diagnosis, staging, and treatment response monitoring. Although current developments are promising, impediments posed by the diverse anatomies of growing children and nonstandardized imaging protocols have limited clinical translation thus far. Opportunities include leveraging reconstruction algorithms to achieve accelerated low-dose imaging and automating the generation of metric-based staging and treatment monitoring scores. Transfer learning of adult-based AI models to pediatric cancers, multi-institutional data sharing, and ethical data privacy practices for pediatric patients with rare cancers will be key to unlocking the full potential of AI for clinical translation and improving outcomes for these young patients.

This brief urges policymakers to realign the healthcare market’s incentives in favor of patients, recommending interventions that shape companies’ incentives around the pricing models they deploy.
