HAI Weekly Seminar with Juan Banda
Are Phenotyping Algorithms Fair for Underrepresented Minorities within Older Adults?
The widespread adoption of machine learning (ML) algorithms for risk stratification has surfaced many cases of racial and ethnic bias within those algorithms. Built without careful attention to weighting and bias mitigation, ML algorithms can produce incorrect recommendations and worsen the health disparities already faced by communities of color. Biases within electronic phenotyping algorithms, however, remain largely unexplored. In this work, Juan Banda examines probabilistic phenotyping algorithms for clinical conditions common among vulnerable older adults: dementia, frailty, mild cognitive impairment, Alzheimer's disease, and Parkinson's disease. Banda created an experimental framework to probe racial and ethnic biases within a single healthcare system, Stanford Health Care, evaluating the performance of these algorithms under different ethnicity distributions to identify which algorithms may be biased and under what conditions. He demonstrates that these algorithms show performance (precision, recall, accuracy) variations of anywhere from 3 to 30% across ethnic populations, even when ethnicity is not used as an input variable. Across more than 1,200 model evaluations, Banda identifies patterns indicating which phenotype algorithms are most susceptible to bias against particular ethnic groups. Lastly, he presents recommendations for how to discover and potentially fix these biases in the context of the five phenotypes selected for this assessment.
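To make the evaluation idea concrete: the kind of stratified analysis described here can be sketched by computing precision, recall, and accuracy separately for each racial/ethnic group and then reporting the largest between-group gap. This is an illustrative sketch only, not Banda's actual framework; the function names and the toy data are hypothetical.

```python
# Illustrative sketch: per-group classification metrics and the
# between-group performance gap. Names and data are hypothetical,
# not taken from the study itself.
from collections import defaultdict

def stratified_metrics(y_true, y_pred, groups):
    """Compute precision, recall, and accuracy per group label."""
    buckets = defaultdict(list)
    for t, p, g in zip(y_true, y_pred, groups):
        buckets[g].append((t, p))
    results = {}
    for g, pairs in buckets.items():
        tp = sum(1 for t, p in pairs if t == 1 and p == 1)
        fp = sum(1 for t, p in pairs if t == 0 and p == 1)
        fn = sum(1 for t, p in pairs if t == 1 and p == 0)
        correct = sum(1 for t, p in pairs if t == p)
        results[g] = {
            "precision": tp / (tp + fp) if (tp + fp) else 0.0,
            "recall": tp / (tp + fn) if (tp + fn) else 0.0,
            "accuracy": correct / len(pairs),
        }
    return results

def max_gap(results, metric):
    """Largest between-group difference for one metric."""
    vals = [r[metric] for r in results.values()]
    return max(vals) - min(vals)

# Toy usage: phenotype labels (1 = case) for two hypothetical groups.
y_true = [1, 0, 1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
res = stratified_metrics(y_true, y_pred, groups)
```

A gap of 0.03 to 0.30 in any of these metrics would correspond to the 3–30% variation the abstract reports, and flagging phenotypes whose gaps exceed a chosen threshold mirrors the "which algorithms, under what conditions" question the framework addresses.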