
HAI Policy Briefs

September 2022

Promoting Algorithmic Fairness in Clinical Risk Prediction

While machine learning applications in healthcare continue to shape patient-care experiences and medical outcomes, the risk of discriminatory AI decision-making remains a serious concern. The issue is especially pronounced in clinical settings, where individuals’ well-being and physical safety are on the line and medical professionals face life-or-death decisions every day. To date, the conversation about measuring algorithmic fairness in healthcare has focused on fairness in isolation, without fully accounting for how fairness techniques affect clinical predictive models, which are often derived from large clinical datasets. This brief seeks to ground the debate in evidence and suggests a way forward for developing fairer ML tools for clinical settings.

Key Takeaways


➜ We studied the trade-offs clinical predictive algorithms face between accuracy and fairness for outcomes such as hospital mortality, prolonged hospital stays, and 30-day readmissions. We found that techniques intended to make these models fairer can degrade the algorithm’s performance for all patient groups across the board.

➜ Algorithmic fixes on the developer’s side should be only one option among several. Policymakers should consider ways to incentivize model developers to engage in participatory design practices that incorporate perspectives from patient advocacy groups and civil society organizations.

➜ Algorithmic fixes may work in some contexts; in others, policymakers may need to mandate that a human stay in the decision-making loop, or the use of the algorithm may not be worthwhile at all.
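As a hypothetical illustration of how a fairness gap of the kind discussed above might be quantified, the sketch below computes an equal-opportunity gap (the difference in true positive rates between two patient groups) on made-up readmission predictions. The function names, the toy data, and the choice of metric are illustrative assumptions, not the methodology of the study summarized in this brief.

```python
# Illustrative sketch: measuring an equal-opportunity gap for a clinical
# risk model's predictions, split by a (binary) protected attribute.
# All data below are invented for demonstration purposes.

def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives the model correctly flags (recall)."""
    flagged = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(flagged) / len(flagged) if flagged else 0.0

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in TPR between patient groups 0 and 1."""
    tprs = []
    for g in (0, 1):
        idx = [i for i, v in enumerate(group) if v == g]
        tprs.append(true_positive_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx]))
    return abs(tprs[0] - tprs[1])

# Toy example: predicted 30-day readmission (1 = readmitted)
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
gap = equal_opportunity_gap(y_true, y_pred, group)  # |2/3 - 1/2| = 1/6
```

Fairness interventions typically shrink this gap, but as the first takeaway notes, doing so can reduce accuracy for every group, which is the trade-off policymakers must weigh.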


