Managing Risks in AI-Powered Biomedical Research

Date: February 24, 2025
Topics: Healthcare

How researchers are working to ensure AI accelerates medical breakthroughs without unintended harm.

Artem Trotsyuk doesn’t want to frighten anybody, but he does want people, specifically those using artificial intelligence in biomedical research, to sit up and pay attention. 

Trotsyuk, a fellow with the Stanford Center for Biomedical Ethics, says the breakneck speed of advances in artificial intelligence offers incredible potential to accelerate drug discovery, improve disease diagnosis, and create more personalized treatments. But with these great opportunities come great risks, some of which we still don’t fully understand.

“Our intent isn’t to be overly sensationalized or use scare tactics about worst-case scenarios, but there are very real issues,” he said.

Those very real issues include the potential for biomedical research that leverages AI to be misused in ways that supercharge violations of personal privacy and security, exacerbate racial biases, or, most frightening of all, lead to the creation of new types of bioweapons.

A consortium of scientists, ethicists, and researchers at Stanford is now producing a series of papers focused on the best ways to manage those risks. Most recently, the group published a paper in Nature Machine Intelligence addressing the need for an ethical framework to help guide biomedical researchers using AI so that they account for and protect against unintended negative consequences of their work.

The work stems from a need to keep pace with rapidly advancing technology, which pushes into new ethical territory faster than institutions can create protective guardrails and regulations, said David Magnus, a co-lead author of the paper and the Thomas A. Raffin Professor in Medicine and Biomedical Ethics at Stanford Medicine. Addressing the issue will require that researchers, industry, and regulatory decision makers work together to create a virtuous research enterprise, Magnus said.

“This whole project emerged out of our experience with the [AI ethics review board process] where researchers were asking for help in figuring out how to mitigate the potential for misuse of their technology, but had no idea of what to do,” he said. “This project, funded partly by the Stanford HAI health policy group, is our initial answer and something we can now provide to HAI-funded projects that raise the potential for downstream misuse. Hopefully this effort will be a model for expansion into new domains.”

A Call to Action

The potential for misuse of this technology was underscored by a 2022 paper in which researchers showed how, in six hours, they could invert an AI model built for therapeutic drug discovery to generate more than 40,000 toxic molecules. In response, two Stanford chemists wrote a letter published in Nature Machine Intelligence with a call to action to prevent this kind of misuse.

Trotsyuk, with scientists, policymakers, ethicists, and researchers from the Stanford Center for Biomedical Ethics, HAI, the Hoover Institution, Stanford Law School, Stanford School of Medicine, Harvard Medical School, the Cleveland Clinic, and more, began work toward this goal. 

“AI development is moving rapidly,” Trotsyuk said. “At its heart our paper is meant to show potential reasonable scenarios of how misuse can happen and offer [a framework] of how to prevent that.”

Identifying the Risk

Trotsyuk and the other researchers working on this issue built on existing guidelines but also attempted to identify the unique risks related to biomedical research that leverages AI. 

They noted that to protect against these risks, one must first identify them. The group singled out three example areas where misuse poses especially pressing concerns.

  • AI in drug and chemical discovery: AI is used to develop therapeutic medicine, but a bad actor turns around and uses it to create toxic agents or bioweapons.

  • AI used to create synthetic data: Researchers use AI to create synthetic datasets, which can, in turn, improve the ability of scientists to do biomedical research in populations and protect individuals’ privacy. However, that same synthetic data could lead to fake or misleading results that can do great harm. 

  • Ambient intelligence: AI-powered passive data collection meant to help monitor patients’ health could be used by a bad actor or government to surveil them or violate their privacy.

To create an ethical framework to deal with these risks, the team looked at existing guidelines and regulations for responsible biomedical research that leverages AI, such as those from the World Health Organization, the OECD AI Policy Observatory, and the National Institute of Standards and Technology. They also considered current approaches to mitigating risks, including using simulated adversarial testing, or red teaming.

Noticing a gap in the above guidelines, the team moved to develop a solution. They recommend that researchers embed protective measures in their AI models to prevent misuse. These might include restricted access, audits of the foundational data to root out bias, and a level of transparency within the models to understand how they work and the data upon which they are built. They also recommend that this be a continuous process and that researchers reach out to other stakeholders as they conduct their work. 

Finally, the team suggests protocols that could stop a specific endeavor or actor when the potential risks outweigh the benefits.

Moving Forward 

Trotsyuk’s hope is that this framework will be a starting point not just for researchers but also for policymakers. 

He said officials can translate these recommendations and the group’s future work into actionable policy, allowing progress to continue while mitigating misuse.

“I don’t know what the exact policies will look like,” Trotsyuk said. “But it’s good to see that more people are now thinking about [the issue of misuse].”

Find the paper, “Toward a framework for risk mitigation of potential misuse of artificial intelligence in biomedical research,” in the Nov. 26, 2024, issue of Nature Machine Intelligence.

Contributor(s)
Scott Hadly

Related News

  • AI Reveals How Brain Activity Unfolds Over Time (Andrew Myers, Jan 21, 2026): Stanford researchers have developed a deep learning model that transforms overwhelming brain data into clear trajectories, opening new possibilities for understanding thought, emotion, and neurological disease.

  • Why 'Zero-Shot' Clinical Predictions Are Risky (Suhana Bedi, Jason Alan Fries, and Nigam H. Shah, Jan 07, 2026): These models generate plausible timelines from historical patterns; without calibration and auditing, their “probabilities” may not reflect reality.

  • Stanford Researchers: AI Reality Check Imminent (Forbes media mention, Dec 23, 2025): Shana Lynch, HAI Head of Content and Associate Director of Communications, pointed out that the “era of AI evangelism is giving way to an era of AI evaluation” in her AI predictions piece, where she interviewed several Stanford AI experts on their insights for AI impacts in 2026.