HAI Weekly Seminar with Juan Banda

Co-Hosted by SAGE (Stanford Aging and Ethnogeriatrics Research Center)

Event Details

Wednesday, February 2, 2022
10:00 a.m. - 11:00 a.m. PST

Location

Virtual

Contact

Kaci Peel

Are Phenotyping Algorithms Fair for Underrepresented Minorities within Older Adults?

The widespread adoption of machine learning (ML) algorithms for risk stratification has surfaced many cases of racial/ethnic bias within algorithms. When built without careful weighting and bias mitigation, ML algorithms can produce incorrect recommendations, worsening the health disparities faced by communities of color. Biases within electronic phenotyping algorithms, however, remain largely unexplored. In this work, Juan Banda examines probabilistic phenotyping algorithms for clinical conditions common in vulnerable older adults: dementia, frailty, mild cognitive impairment, Alzheimer’s disease, and Parkinson’s disease. Banda created an experimental framework to explore racial/ethnic biases within a single healthcare system, Stanford Health Care, and to evaluate the performance of such algorithms under different ethnicity distributions, allowing him to identify which algorithms may be biased and under what conditions. Banda demonstrates that these algorithms show performance variations (in precision, recall, and accuracy) of anywhere from 3% to 30% across ethnic populations, even when ethnicity is not used as an input variable. Across more than 1,200 model evaluations, Banda has identified patterns that indicate which phenotype algorithms are more susceptible to exhibiting bias for certain ethnic groups. Lastly, Banda presents recommendations for how to discover and potentially fix these biases in the context of the five phenotypes selected for this assessment.
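The evaluation described above amounts to stratifying standard classification metrics by race/ethnicity group and comparing them. The following is a minimal sketch of that idea, assuming scikit-learn and pandas, binary phenotype labels, and a hypothetical evaluate_by_group helper; it is an illustration of per-group metric comparison, not Banda's actual framework.

# Hypothetical sketch: compute precision, recall, and accuracy separately for
# each race/ethnicity group so that performance gaps become visible.
import pandas as pd
from sklearn.metrics import precision_score, recall_score, accuracy_score

def evaluate_by_group(y_true, y_pred, groups):
    """Return one row of metrics per race/ethnicity group."""
    df = pd.DataFrame({"y_true": y_true, "y_pred": y_pred, "group": groups})
    rows = []
    for name, sub in df.groupby("group"):
        rows.append({
            "group": name,
            "n": len(sub),
            "precision": precision_score(sub.y_true, sub.y_pred, zero_division=0),
            "recall": recall_score(sub.y_true, sub.y_pred, zero_division=0),
            "accuracy": accuracy_score(sub.y_true, sub.y_pred),
        })
    return pd.DataFrame(rows)

# A large gap in, say, recall between two groups would flag a potentially
# biased phenotype algorithm, even though ethnicity is never a model input.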

Juan Banda

Affiliate, Primary Care and Population Health