Stanford Ethicists Developing Guidelines for the Safe Inclusion of Pediatric Data in AI-Driven Medical Research
Ethical frameworks in medicine date back at least to the Hippocratic Oath, around 400 BCE. With artificial intelligence (AI) now rapidly accelerating in health care settings — as attested by the 500-plus AI devices approved by the Food and Drug Administration, most of them in just the past two years — novel frameworks are needed to ensure appropriate use of this new modality.
To that end, the international SPIRIT-AI and CONSORT-AI initiative has recently established guidelines for AI and machine learning in medical research. These frameworks, however, have not outlined specific considerations for pediatric populations. Children present uniquely complex data quandaries for AI, especially regarding consent and equity.
To address this gap, Stanford University’s Vijaytha Murali and Alyssa Burgart led a policy perspective piece, written with Stanford biomedical data science instructor Roxana Daneshjou and professor of health policy Sherri Rose and published in the journal npj Digital Medicine. Murali is a postdoctoral research affiliate in dermatology at Stanford University School of Medicine; Burgart is a clinical associate professor in anesthesiology, with a joint appointment in the Stanford Center for Biomedical Ethics, and the medical director of ethics for Lucile Packard Children's Hospital.
Murali, Burgart, and colleagues propose a new framework called ACCEPT-AI. In this interview, Murali and Burgart discuss the motivation behind ACCEPT-AI and how it can help ethically advance AI medical research involving pediatric patients.
Read the full study, "Recommendations for the Use of Pediatric Data in Artificial Intelligence and Machine Learning: ACCEPT-AI."
Why is a framework like ACCEPT-AI needed?
Burgart: Over my career, I've watched AI go from something we see in science fiction movies to something exploding in the public spheres that impact our health. At this point, we need to figure out how to do good work, how to do ethical work, and how to get ahead of these new technologies rather than waiting for disasters to happen.
Murali: Although the SPIRIT-AI and CONSORT-AI protocols have formed a strong foundation for ethical AI medical research, one particular area where we still need specific guidance is children as a special population. The goal of ACCEPT-AI is to guide researchers, clinicians, regulators, and policymakers through each stage of the AI life cycle, from problem selection and data collection, through outcome definition, algorithm development, and post-deployment considerations, to the safe utilization of pediatric data.
Burgart: AI researchers who use or build datasets that include pediatric patient information aren't necessarily aware of the special protections that should be considered. With ACCEPT-AI, we are hoping to provide a way for AI researchers to do their best work.
What is a particular pitfall of pediatric research that could be perpetuated or even exacerbated by AI?
Burgart: Consent. Children have a parent who officially provides consent by signing documentation. But as these kids get older, how do we treat their data? The reality is, if you're putting a child's data into these algorithms, we need a good understanding of what will happen with that data if the child grows up and wants the data removed. We need to think through not only consent right now but also what that consent looks like moving forward. Overall, we want to be able to include children in research in developmentally appropriate ways that respect their dignity as human beings.
Murali: There's a practicality concern here as well, because once data goes into an algorithm, it's difficult to remove. How data is handled also differs considerably around the world. The European Union's General Data Protection Regulation allows people to withdraw their data retrospectively, but we currently don't have a concrete legal mechanism to do that in the U.S. For pediatric populations whose data could be part of AI algorithms used across many countries for many years down the road, long-term consent becomes a hard problem.
What is distinctive about pediatric health care data compared with adult data, and why are these distinctions important from an AI perspective?
Murali: We want to have safety mechanisms in place so we don't erroneously generalize adult data into the pediatric population, or vice versa. Compared with adults, children — who of course can be anywhere from zero to 18 years of age — have much broader ranges of size, development, and other anatomical and physiological variables.
Burgart: If there's mixing of the two data types, then AI algorithms may make inappropriate generalizations, and that's a really big safety point. Currently, there are no guardrails to differentiate the two data types.
Murali: This lack of differentiation leads to the concern that AI algorithms might perform better on adult data than on pediatric data because the algorithms have been trained mostly or even solely with adult data, which is much more readily available.
A big component of ACCEPT-AI is what we term "age-related algorithmic bias." If AI researchers developing an algorithm do not differentiate the data types or make clear from the outset the age of the patient population the data is extracted from, then the algorithm may produce results that are unfavorable for the pediatric population. ACCEPT-AI calls for age transparency to address this issue, so researchers clearly know what's going into and coming out of an algorithm.
Burgart: Bias within research datasets is of course not new to AI. So many of the medical recommendations we make are really based on the default 70-kilogram white man. We know that bias has been infused into decision making and clinical rationale in the past, so we're hoping that by continuing to develop and implement frameworks like ACCEPT-AI, we can have higher-quality, safer data that will benefit patients more significantly.
What are equity concerns that ACCEPT-AI helps to address?
Murali: Equity is a theme throughout our npj Digital Medicine perspective. In developing AI algorithms, the starting point is that it's important to involve children in the technological innovation these algorithms represent, which we believe has the potential to transform health care.
On a practical level, that means we need to accommodate diverse groups with representation across age ranges. We also need to involve groups underrepresented in research, such as ethnic minorities. And we need to include diverse socioeconomic groups, not just within a single country: there's a lot of AI research going on in higher-income countries but very little, or at least little that is sustained, in lower-income countries. These challenges become even more difficult when extended to pediatric populations, which are harder to reach and include in medical research.
What are the next steps for ACCEPT-AI and other AI ethical frameworks?
Murali: We've been fortunate to receive a lot of collaborative interest. We're in contact with several groups in the United Kingdom, including those that developed the SPIRIT-AI and CONSORT-AI guidelines. Another is the World Health Organization and International Telecommunication Union's focus group on AI for health, through which we have recently incorporated ACCEPT-AI into upcoming WHO policy guidance. We are also putting out a call to the broader community: anybody who's interested is welcome to participate in the next steps, where we're looking to develop a consensus statement among multiple stakeholders and formalize the recommendations. We'd be glad to have anyone from the Stanford community reach out to us.
Pediatrics fits into a broader theme of our work developing AI research guidance for special populations. We also have projects in mind for rare disease groups, maternal health, and elderly patients.
Burgart: Overall in the health care space now, we're seeing a lot of the "move fast and break things" attitude from the tech world coming into AI. It's really important for us as health care leaders to ask how we can do this and not break our patients. We're making progress on that front with these ethical frameworks.