Fei-Fei Li | Stanford HAI

All Related

Representation Learning with Statistical Independence to Mitigate Bias
Ehsan Adeli, Qingyu Zhao, Adolf Pfefferbaum, Edith Sullivan, Fei-Fei Li, Juan Carlos Niebles, Kilian Pohl
Dec 03, 2020
Research

The presence of bias (in datasets or tasks) is inarguably one of the most critical challenges in machine learning applications and has given rise to pivotal debates in recent years. Such challenges range from spurious associations between variables in medical studies to racial bias in gender or face recognition systems. Controlling for all types of biases at the dataset curation stage is cumbersome and sometimes impossible. The alternative is to use the available data and build models that incorporate fair representation learning. In this paper, we propose such a model based on adversarial training with two competing objectives: to learn features that have (1) maximum discriminative power with respect to the task and (2) minimal statistical mean dependence on the protected (bias) variable(s). Our approach incorporates a new adversarial loss function that encourages a vanishing correlation between the bias and the learned features. We apply our method to synthetic data, medical images (containing task bias), and a dataset for gender classification (containing dataset bias). Our results show that the features learned by our method not only yield superior prediction performance but are also unbiased.
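
The core mechanism the abstract describes, adversarial training that keeps task accuracy high while driving the correlation between the learned features and a protected variable toward zero, can be sketched briefly. The following is a minimal, illustrative PyTorch sketch, not the authors' released code: the encoder, task_head, and bias_head modules, the squared_pearson penalty, and the toy data are all assumptions made for demonstration.

```python
# Illustrative sketch of correlation-based adversarial bias mitigation (not the paper's code).
import torch
import torch.nn as nn

def squared_pearson(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Squared Pearson correlation between two 1-D tensors (0 means no linear dependence)."""
    a = a - a.mean()
    b = b - b.mean()
    return ((a * b).sum() ** 2) / (a.square().sum() * b.square().sum() + eps)

# Hypothetical modules: a shared encoder, a task classifier, and a bias predictor.
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
task_head = nn.Linear(16, 2)     # task label, e.g., a diagnosis
bias_head = nn.Linear(16, 1)     # predicts the protected (bias) variable, e.g., age

opt_main = torch.optim.Adam(list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_bias = torch.optim.Adam(bias_head.parameters(), lr=1e-3)

x = torch.randn(128, 32)          # toy inputs
y = torch.randint(0, 2, (128,))   # toy task labels
b = torch.randn(128)              # toy protected variable
lam = 1.0                         # weight of the adversarial term

for _ in range(100):
    # 1) Train the bias predictor to correlate with the protected variable.
    feats = encoder(x).detach()
    bias_loss = -squared_pearson(bias_head(feats).squeeze(1), b)
    opt_bias.zero_grad(); bias_loss.backward(); opt_bias.step()

    # 2) Train encoder + task head: maximize task accuracy while minimizing the
    #    correlation the bias predictor can recover (the two competing objectives).
    feats = encoder(x)
    task_loss = nn.functional.cross_entropy(task_head(feats), y)
    adv_loss = squared_pearson(bias_head(feats).squeeze(1), b)
    opt_main.zero_grad()
    (task_loss + lam * adv_loss).backward()
    opt_main.step()
```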

Machine Learning
Research
Vision-based Estimation of MDS-UPDRS Gait Scores for Assessing Parkinson’s Disease Motor Severity
Mandy Lu, Kathleen Poston, Adolf Pfefferbaum, Edith Sullivan, Fei-Fei Li, Kilian M. Pohl, Juan Carlos Niebles, Ehsan Adeli
Nov 18, 2020
Research

Parkinson’s disease (PD) is a progressive neurological disorder primarily affecting motor function, resulting in tremor at rest, rigidity, bradykinesia, and postural instability. The physical severity of PD impairments can be quantified through the Movement Disorder Society Unified Parkinson’s Disease Rating Scale (MDS-UPDRS), a widely used clinical rating scale. Accurate and quantitative assessment of disease progression is critical to developing a treatment that slows or stops further advancement of the disease. Prior work has mainly focused on dopamine transport neuroimaging for diagnosis or on costly and intrusive wearables that evaluate motor impairments. For the first time, we propose a computer vision-based model that observes non-intrusive video recordings of individuals, extracts their 3D body skeletons, tracks them through time, and classifies the movements according to the MDS-UPDRS gait scores. Experimental results show that our proposed method performs significantly better than chance and competing methods, with an F1-score of 0.83 and a balanced accuracy of 81%. This is the first benchmark for classifying PD patients by MDS-UPDRS gait severity, and the predicted score could serve as an objective biomarker of disease severity. Our work demonstrates how computer-assisted technologies can be used to non-intrusively monitor patients and their motor impairments.
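
As a rough illustration of the pipeline the abstract describes (per-frame 3D skeletons summarized by a temporal model and classified into a gait-severity score), here is a minimal PyTorch sketch. It is not the authors' architecture: the joint count, number of severity classes, and GRU-based classifier are assumptions, and the upstream pose estimation and tracking steps are replaced by random placeholder tensors.

```python
# Illustrative video-to-gait-score pipeline (not the authors' model).
# Assumes per-frame 3D skeletons already come from an upstream pose estimator and tracker.
import torch
import torch.nn as nn

NUM_JOINTS = 17   # assumed COCO-style skeleton, for illustration only
NUM_SCORES = 4    # assumed number of MDS-UPDRS gait severity classes

class GaitScoreClassifier(nn.Module):
    """Temporal model over tracked 3D joint sequences -> MDS-UPDRS gait score logits."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.gru = nn.GRU(input_size=NUM_JOINTS * 3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_SCORES)

    def forward(self, skeletons: torch.Tensor) -> torch.Tensor:
        # skeletons: (batch, frames, joints, 3) -> flatten the joints of each frame
        b, t, j, c = skeletons.shape
        seq = skeletons.view(b, t, j * c)
        _, last_hidden = self.gru(seq)             # summarize the walking sequence
        return self.head(last_hidden.squeeze(0))   # per-class logits

# Toy usage with random tensors standing in for pose-estimator output.
model = GaitScoreClassifier()
clips = torch.randn(8, 120, NUM_JOINTS, 3)         # 8 clips, 120 tracked frames each
logits = model(clips)
predicted_scores = logits.argmax(dim=1)            # one gait-severity class per clip
```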

Healthcare
Research
Domain Shift and Emerging Questions in Facial Recognition Technology
Daniel E. Ho, Emily Black, Maneesh Agrawala, Fei-Fei Li
Quick Read
Nov 01, 2020
policy brief

This brief urges transparent, verifiable standards for facial-recognition systems and calls for a moratorium on government use until rigorous in-domain testing frameworks are established.

Privacy, Safety, Security
Regulation, Policy, Governance
policy brief
Evaluating Facial Recognition Technology: A Protocol for Performance Assessment in New Domains
Daniel E. Ho, Emily Black, Maneesh Agrawala, Fei-Fei Li
Deep Dive
Nov 01, 2020
whitepaper

This white paper provides research-based, scientifically grounded recommendations for contextualizing calls to test the operational accuracy of facial recognition technology.

Computer Vision
Regulation, Policy, Governance
whitepaper
National AI Research Resource: Ensuring the Continuation of American Innovation
John Etchemendy, Fei-Fei Li
Mar 28, 2020
announcement
Industry, Innovation
Government, Public Administration
Regulation, Policy, Governance
announcement
Ideas for the 2020s
Fei-Fei Li
Feb 03, 2019
news
Economy, Markets
Education, Skills
news
Fei-Fei Li's Quest to Make AI Better for Humanity
Fei-Fei Li
Jessi Hempel
Nov 10, 2018
news

Artificial intelligence has a problem: The biases of its creators are getting hard-coded into its future. Fei-Fei Li has a plan to fix that—by rebooting the field she helped invent.

Healthcare
news
How to Make A.I. That’s Good for People
Fei-Fei Li
Mar 06, 2018
media mention

Privacy, Safety, Security
Healthcare
media mention