
In this testimony presented to the U.S. Senate Committee on Health, Education, Labor, and Pensions hearing titled “AI’s Potential to Support Patients, Workers, Children, and Families,” Russ Altman highlights opportunities for congressional support to make AI applications for patient care and drug discovery stronger, safer, and human-centered.

HAI Policy Fellow Riana Pfefferkorn discusses the policy implications of the “mass digital undressing spree,” in which the chatbot Grok responded to user prompts to remove the clothing from images of women and pose them in bikinis, and to create “sexualized images of children” and post them on X.
Permeation of artificial intelligence (AI) tools into health care tests traditional understandings of what patients should be told about their care. Despite the general importance of informed consent, decision support tools (eg, automatic electrocardiogram readers, rule-based risk classifiers, and UpToDate summaries) are not usually discussed with patients even though they affect treatment decisions. Should AI tools be treated similarly? The legal doctrine of informed consent requires disclosing information that is material to a reasonable patient’s decision to accept a health care service, and evidence suggests that many patients would think differently about care if they knew it was guided by AI. In recent surveys, 60% of US adults said they would be uncomfortable with their physician relying on AI [1], 70% to 80% had low expectations AI would improve important aspects of their care [2], only one-third trusted health care systems to use AI responsibly [3], and 63% said it was very true that they would want to be notified about use of AI in their care.

In this testimony presented to the U.S. House Committee on Energy and Commerce’s Subcommittee on Health hearing titled “Examining Opportunities to Advance American Health Care through the Use of Artificial Intelligence Technologies,” Michelle M. Mello calls for policy changes that will promote effective integration of AI tools into healthcare by strengthening trust.

A new study shows the AI industry is withholding key information.
Tabular medical datasets, like electronic health records (EHRs), biobanks, and structured clinical trial data, are rich sources of information with the potential to advance precision medicine and optimize patient care. However, real-world medical datasets have limited patient diversity and cannot simulate hypothetical outcomes, both of which are necessary for equitable and effective medical research. Fueled by recent advancements in machine learning, generative models offer a promising solution to these data limitations by generating enhanced synthetic data. This review highlights the potential of conditional generative models (CGMs) to create patient-specific synthetic data for a variety of precision medicine applications. We survey CGM approaches that tackle two medical applications: correcting for data representation biases and simulating digital health twins. We additionally explore how the surveyed methods handle modeling tabular medical data and briefly discuss evaluation criteria. Finally, we summarize the technical, medical, and ethical challenges that must be addressed before CGMs can be effectively and safely deployed in the medical field.
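The core idea behind conditional generative models can be illustrated with a deliberately minimal sketch. The model below is not from the review: it is a toy class-conditional Gaussian fit to a hypothetical two-column tabular dataset (age and a lab value), used to oversample an under-represented patient subgroup, which is a stand-in for the far richer CGMs the review surveys.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" tabular dataset: rows are patients, columns are (age, lab value).
# Group 1 is a hypothetical under-represented subgroup (10 of 100 patients).
X = np.vstack([
    rng.normal([55.0, 1.2], [8.0, 0.3], size=(90, 2)),  # majority group, label 0
    rng.normal([48.0, 1.6], [7.0, 0.4], size=(10, 2)),  # minority group, label 1
])
y = np.array([0] * 90 + [1] * 10)

def fit_conditional_gaussian(X, y):
    """Fit one Gaussian per condition label: the simplest conditional generative model."""
    return {c: (X[y == c].mean(axis=0), np.cov(X[y == c], rowvar=False))
            for c in np.unique(y)}

def sample(params, condition, n, rng):
    """Draw n synthetic rows conditioned on the subgroup label."""
    mean, cov = params[condition]
    return rng.multivariate_normal(mean, cov, size=n)

params = fit_conditional_gaussian(X, y)

# Generate synthetic minority-group patients to balance representation.
synthetic = sample(params, condition=1, n=80, rng=rng)
print(synthetic.shape)  # (80, 2)
```

Conditioning is what makes bias correction possible here: because the generator is indexed by the subgroup label, synthetic rows can be drawn specifically for the groups the real dataset lacks, rather than replicating its existing imbalance.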