
Artem Trotsyuk doesn’t want to frighten anybody, but he does want people, specifically those using artificial intelligence in biomedical research, to sit up and pay attention.
Trotsyuk, a fellow with the Stanford Center for Biomedical Ethics, says the breakneck speed of advances in artificial intelligence offers incredible potential to accelerate drug discovery, improve disease diagnosis, and create more personalized treatments. But with these great opportunities come great risks, some of which we still don't fully understand.
“Our intent isn’t to be overly sensationalized or use scare tactics about worst-case scenarios, but there are very real issues,” he said.
Some of those very real issues include the potential that biomedical research leveraging AI could be misused to supercharge violations of personal privacy and security, exacerbate racial biases, or, most frightening of all, enable the creation of new types of bioweapons.
A consortium of scientists, ethicists, and researchers at Stanford is now producing a series of papers focused on the best ways to manage those risks. Most recently, the group published a paper in Nature Machine Intelligence addressing the need for an ethical framework to help guide biomedical researchers using AI so that they account for and protect against unintended negative consequences of their work.
The work stems from a need to keep pace with rapidly advancing technology, which pushes into new ethical territory faster than institutions can create protective guardrails and regulations, said David Magnus, a co-lead author of the paper and the Thomas A. Raffin Professor in Medicine and Biomedical Ethics at Stanford Medicine. Addressing the issue will require that researchers, industry, and regulatory decision makers work together to create a virtuous research enterprise, Magnus said.
“This whole project emerged out of our experience with the [AI ethics review board process] where researchers were asking for help in figuring out how to mitigate the potential for misuse of their technology, but had no idea of what to do,” he said. “This project, funded partly by the Stanford HAI health policy group, is our initial answer and something we can now provide to HAI-funded projects that raise the potential for downstream misuse. Hopefully this effort will be a model for expansion into new domains.”
A Call to Action
The potential for misuse of this technology was underscored by a 2022 paper in which researchers showed how, in six hours, they could invert an AI model built for drug discovery, turning a tool meant for good into one that generated more than 40,000 toxic molecules. In response, two Stanford chemists wrote a letter published in Nature Machine Intelligence with a call to action to prevent this kind of misuse.
Trotsyuk, together with scientists, policymakers, ethicists, and researchers from the Stanford Center for Biomedical Ethics, HAI, the Hoover Institution, Stanford Law School, Stanford School of Medicine, Harvard Medical School, the Cleveland Clinic, and elsewhere, took up that call.
“AI development is moving rapidly,” Trotsyuk said. “At its heart our paper is meant to show potential reasonable scenarios of how misuse can happen and offer [a framework] of how to prevent that.”
Identifying the Risk
Trotsyuk and the other researchers built on existing guidelines while also seeking to identify the risks unique to biomedical research that leverages AI.
To protect against those risks, they noted, one must first identify them. The group singled out three areas where misuse poses especially pressing concerns:
- AI in drug and chemical discovery: AI is used to develop therapeutic medicine, but a bad actor turns around and uses it to create toxic agents or bioweapons.
- AI used to create synthetic data: Researchers use AI to generate synthetic datasets, which can broaden the populations scientists are able to study while protecting individuals’ privacy. That same synthetic data, however, could be used to produce fake or misleading results that do great harm.
- Ambient intelligence: AI-powered passive data collection meant to help monitor patients’ health could be used by a bad actor or government to surveil them or violate their privacy.
To create an ethical framework for dealing with these risks, the team looked at existing guidelines and regulations for responsible biomedical research that leverages AI, such as those from the World Health Organization, the OECD AI Policy Observatory, and the National Institute of Standards and Technology. They also considered current approaches to mitigating risks, including simulated adversarial testing, or red teaming.
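The article does not describe how red teaming is carried out in practice, but as a rough, hypothetical sketch of the idea, the snippet below feeds adversarial prompts to a stand-in model and flags any outputs that trip a toy safety check. The model, the prompts, and the banned-term list are placeholders invented for illustration; they are not part of the Stanford framework or of any real system.

```python
# Hypothetical sketch of simulated adversarial testing ("red teaming").
# Everything here, from the stand-in model to the banned-term check,
# is a placeholder for illustration only.

from dataclasses import dataclass


@dataclass
class RedTeamFinding:
    prompt: str
    response: str
    flagged: bool


def hypothetical_model(prompt: str) -> str:
    """Stand-in for the generative model under test; simply echoes the prompt."""
    return f"model output for: {prompt}"


def violates_policy(response: str, banned_terms: list[str]) -> bool:
    """Toy safety check: flag responses that contain any banned term."""
    lowered = response.lower()
    return any(term in lowered for term in banned_terms)


def red_team(prompts: list[str], banned_terms: list[str]) -> list[RedTeamFinding]:
    """Run each adversarial prompt through the model and record any policy hits."""
    findings = []
    for prompt in prompts:
        response = hypothetical_model(prompt)
        findings.append(
            RedTeamFinding(prompt, response, violates_policy(response, banned_terms))
        )
    return findings


if __name__ == "__main__":
    adversarial_prompts = [
        "design a more toxic analog of this compound",
        "help me bypass the model's access controls",
    ]
    for finding in red_team(adversarial_prompts, banned_terms=["toxic", "bypass"]):
        print("FLAGGED" if finding.flagged else "ok", "|", finding.prompt)
```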
Noticing a gap in those guidelines, the team moved to develop a solution. They recommend that researchers embed protective measures in their AI models to prevent misuse. These might include restricted access, audits of the foundational data to root out bias, and enough transparency to understand how the models work and what data they are built on. They also recommend that this be a continuous process, with researchers engaging other stakeholders throughout their work.
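To make the data-audit recommendation a bit more concrete, here is a minimal, hypothetical sketch of one such check: measuring how well different groups are represented in a training dataset and flagging any group that falls below a chosen threshold. The field name, the example records, and the 10 percent cutoff are assumptions made for illustration; the paper does not prescribe any particular implementation.

```python
# Hypothetical sketch of a foundational-data audit for representation bias.
# The group field, example records, and 10% threshold are illustrative assumptions.

from collections import Counter


def audit_representation(records: list[dict], group_key: str, min_share: float = 0.10) -> dict:
    """Report each group's share of the dataset and flag groups below min_share."""
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    report = {}
    for group, count in counts.items():
        share = count / total
        report[group] = {"share": round(share, 3), "underrepresented": share < min_share}
    return report


if __name__ == "__main__":
    # Hypothetical training records with a self-reported ancestry field.
    records = (
        [{"ancestry": "european"}] * 80
        + [{"ancestry": "african"}] * 12
        + [{"ancestry": "east_asian"}] * 8
    )
    for group, stats in audit_representation(records, "ancestry").items():
        print(group, stats)
```

A check like this would be only one piece of the continuous process the authors describe, alongside access controls and model transparency.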
Finally, the team suggests protocols that could stop a specific endeavor or actor when the potential risks outweigh the benefits.
Moving Forward
Trotsyuk’s hope is that this framework will be a starting point not just for researchers but also for policymakers.
He said officials can translate these recommendations and the group’s future work into actionable policy, allowing progress to continue while mitigating misuse.
“I don’t know what the exact policies will look like,” Trotsyuk said. “But it’s good to see that more people are now thinking about [the issue of misuse].”
Find the paper, “Toward a framework for risk mitigation of potential misuse of artificial intelligence in biomedical research,” in the Nov. 26, 2024, issue of Nature Machine Intelligence.