Fei-Fei Li | Stanford HAI
people | Leadership, Faculty, Senior Fellow

Fei-Fei Li

Denning Co-Director, Stanford HAI | Sequoia Professor of Computer Science, Stanford University

Fei-Fei Li is the inaugural Sequoia Professor in the Computer Science Department at Stanford University and Co-Director of Stanford HAI. She served as the Director of Stanford’s AI Lab from 2013 to 2018, and during her sabbatical from Stanford from January 2017 to September 2018, she was a Vice President at Google, serving as Chief Scientist of AI/ML at Google Cloud. Li obtained her B.A. in physics from Princeton in 1999 with High Honors and her Ph.D. in electrical engineering from the California Institute of Technology (Caltech) in 2005. She joined Stanford in 2009 as an assistant professor; prior to that, she was on the faculty at Princeton University (2007-2009) and the University of Illinois Urbana-Champaign (2005-2006).

Li’s current research interests include cognitively inspired AI, machine learning, deep learning, computer vision, and AI in healthcare, particularly ambient intelligent systems for healthcare delivery. In the past she has also worked on cognitive and computational neuroscience. Li has published more than 200 scientific articles in top-tier journals and conferences, including Nature, PNAS, Journal of Neuroscience, CVPR, ICCV, NIPS, ECCV, ICRA, IROS, RSS, IJCV, IEEE TPAMI, the New England Journal of Medicine, and Nature Digital Medicine. Li is the inventor of ImageNet and the ImageNet Challenge, a critical large-scale dataset and benchmarking effort that has contributed to the latest developments in deep learning and AI. In addition to her technical contributions, she is a leading national voice advocating for diversity in STEM and AI, and is co-founder and chairperson of AI4ALL, a national nonprofit aimed at increasing inclusion and diversity in AI education.

Latest Related to Fei-Fei Li

news

Universities Must Reclaim AI Research for the Public Good

John Etchemendy, James Landay, Fei-Fei Li, Christopher Manning
Oct 30

With corporate AI labs turning inward, academia must carry forward the mantle of open science.

media mention

TIME100 AI 2025: Fei-Fei Li

TIME
Regulation, Policy, Governance | Aug 26

Stanford HAI Co-Director Fei-Fei Li is named to the TIME100 AI 2025 list of the most influential people in AI.

media mention

Firing Line with Margaret Hoover

PBS
Regulation, Policy, Governance | Ethics, Equity, Inclusion | May 23

In this video, HAI Co-Director Fei-Fei Li discusses ethical development of artificial intelligence and the challenge of establishing effective regulations. She addresses government funding of research, diversity in science, and ensuring child safety as AI advances.

All Related

Fei-Fei Li, ‘Godmother Of AI,’ Points To Risks Of Cuts To US Research Funds, Student Visas
Semafor
May 21, 2025
media mention

Fei-Fei Li, co-director of Stanford HAI, emphasized the risks of cutting research funding and international student visas to the US as it faces an increasingly competitive global tech race.

International Affairs, International Security, International Development
Regulation, Policy, Governance

AI Pioneer Fei-Fei Li Says AI Policy Must Be Based On ‘Science, Not Science Fiction’
TechCrunch
Feb 08, 2025
media mention

Fei-Fei Li, Co-Director of Stanford HAI, outlines “three fundamental principles for the future of AI policymaking” ahead of the AI Action Summit in Paris.

Regulation, Policy, Governance

Now More Than Ever, AI Needs A Governance Framework
Financial Times
Feb 07, 2025
media mention

Fei-Fei Li, Co-Director of Stanford HAI, stresses the importance of governance for AI technologies.

Regulation, Policy, Governance

Fei-Fei Li's Briefing to the United Nations Security Council
Fei-Fei Li
Quick Read | Dec 19, 2024
testimony

In this address, presented to the United Nations Security Council meeting on "Maintenance of International Peace and Security and Artificial Intelligence," Fei-Fei Li stresses the importance of public sector leadership, global collaboration, and evidence-based policymaking to unlock AI’s potential and ensure its responsible development.

International Affairs, International Security, International Development

Fei-Fei Li Says Understanding How The World Works Is The Next Step For AI
The Economist
Nov 20, 2024
media mention

Stanford HAI co-director Fei-Fei Li says the next frontier in AI lies in advancing spatial intelligence. In this op-ed, she explains how enabling machines to perceive and interact with the world in 3D can unlock human-centered AI applications for robotics, healthcare, education, and beyond.

Robotics
Healthcare
Education, Skills

Unlocking Public Sector AI Innovation: Next Steps for the National AI Research Resource
seminar | Oct 31, 2023 | 9:00 AM - 10:00 AM

On Monday, October 30, 2023, President Biden signed a landmark Executive Order to manage the opportunities and risks of artificial intelligence.

Fei-Fei Li's Testimony Before the Senate Committee on Homeland Security and Governmental Affairs
Fei-Fei Li
Quick Read | Sep 14, 2023
testimony

In this testimony, presented to the Senate Committee on Homeland Security and Governmental Affairs, Fei-Fei Li urges the need to demystify AI, safeguard its use with privacy and fairness measures, and lead through transparent procurement and strong public investment in AI research.

Government, Public Administration
International Affairs, International Security, International Development

Generative AI: Perspectives from Stanford HAI
Russ Altman, Erik Brynjolfsson, Michele Elam, Surya Ganguli, Daniel E. Ho, James Landay, Curtis Langlotz, Fei-Fei Li, Percy Liang, Christopher Manning, Peter Norvig, Rob Reich, Vanessa Parli
Deep Dive | Mar 01, 2023
Research

A diversity of perspectives from Stanford leaders in medicine, science, engineering, the humanities, and the social sciences on how generative AI might affect their fields and our world.

Generative AI

Assessing the accuracy of automatic speech recognition for psychotherapy
Adam Miner, Albert Haque, Jason Fries, Scott Fleming, Denise Wilfley, Terence Wilson, Arnold Milstein, Dan Jurafsky, Bruce Arnow, Stewart Agras, Fei-Fei Li, Nigam Shah
Dec 28, 2020
Research

Accurate transcription of audio recordings in psychotherapy would improve therapy effectiveness, clinician training, and safety monitoring. Although automatic speech recognition software is commercially available, its accuracy in mental health settings has not been well described. It is unclear which metrics and thresholds are appropriate for different clinical use cases, which may range from population-level descriptions to individual safety monitoring.

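The abstract leaves open which accuracy metrics suit which clinical use case; the conventional starting point in ASR evaluation is word error rate (WER). Below is a minimal, illustrative WER implementation (a word-level edit distance), not the paper's evaluation code; the example sentences are invented.

```python
# Minimal word error rate (WER): edit distance over words, normalized by
# reference length. Illustrative only; the study also asks which thresholds
# are clinically meaningful, which this metric alone cannot answer.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table: d[i][j] = edits to turn ref[:i] into hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Invented example: one deleted word out of five -> WER = 0.2.
print(wer("i have been feeling anxious", "i been feeling anxious"))
```
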
Representation Learning with Statistical Independence to Mitigate Bias
Ehsan Adeli, Qingyu Zhao, Adolf Pfefferbaum, Edith Sullivan, Fei-Fei Li, Juan Carlos Niebles, Kilian Pohl
Dec 03, 2020
Research

The presence of bias (in datasets or tasks) is inarguably one of the most critical challenges in machine learning applications, and it has fueled pivotal debates in recent years. Such challenges range from spurious associations between variables in medical studies to racial bias in gender classification and face recognition systems. Controlling for all types of bias at the dataset-curation stage is cumbersome and sometimes impossible. The alternative is to use the available data and build models that incorporate fair representation learning. In this paper, we propose such a model based on adversarial training with two competing objectives: learn features that have (1) maximum discriminative power with respect to the task and (2) minimal statistical mean dependence on the protected (bias) variable(s). Our approach incorporates a new adversarial loss function that encourages vanishing correlation between the bias and the learned features. We apply our method to synthetic data, medical images (containing task bias), and a dataset for gender classification (containing dataset bias). Our results show that the features learned by our method not only yield superior prediction performance but are also unbiased.

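The abstract's two competing objectives can be sketched compactly: maximize task accuracy while penalizing statistical dependence between the learned features and the protected variable. The toy sketch below is an illustration under stated assumptions, not the paper's implementation: it stands in for the paper's adversarial scheme with a direct Pearson-correlation penalty, and the network sizes and random data are invented.

```python
# Toy sketch of fair representation learning with two competing objectives:
# (1) predict the task label well, (2) keep features uncorrelated with a
# protected (bias) variable. A direct correlation penalty stands in for the
# paper's adversarial training; all sizes and data here are invented.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
classifier = nn.Linear(16, 2)  # task head on top of the learned features
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
task_loss_fn = nn.CrossEntropyLoss()

def correlation_penalty(features: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
    # Mean squared Pearson correlation between each feature dimension and
    # the protected variable; zero means no linear dependence remains.
    f = features - features.mean(dim=0)
    b = bias - bias.mean()
    cov = (f * b.unsqueeze(1)).sum(dim=0) / (f.shape[0] - 1)
    corr = cov / (f.std(dim=0) * b.std() + 1e-8)
    return (corr ** 2).mean()

for step in range(100):  # toy training loop on random data
    x = torch.randn(128, 32)         # inputs
    y = torch.randint(0, 2, (128,))  # task labels
    bias = torch.randn(128)          # protected variable, e.g. age
    features = encoder(x)
    loss = (task_loss_fn(classifier(features), y)
            + 1.0 * correlation_penalty(features, bias))  # trade-off weight
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```
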
Vision-based Estimation of MDS-UPDRS Gait Scores for Assessing Parkinson’s Disease Motor Severity
Mandy Lu, Kathleen Poston, Adolf Pfefferbaum, Edith Sullivan, Fei-Fei Li, Kilian M. Pohl, Juan Carlos Niebles, Ehsan Adeli
Nov 18, 2020
Research

Parkinson’s disease (PD) is a progressive neurological disorder primarily affecting motor function, resulting in tremor at rest, rigidity, bradykinesia, and postural instability. The physical severity of PD impairments can be quantified through the Movement Disorder Society Unified Parkinson’s Disease Rating Scale (MDS-UPDRS), a widely used clinical rating scale. Accurate and quantitative assessment of disease progression is critical to developing a treatment that slows or stops further advancement of the disease. Prior work has mainly focused on dopamine-transporter neuroimaging for diagnosis or on costly and intrusive wearables that evaluate motor impairments. For the first time, we propose a computer vision-based model that observes non-intrusive video recordings of individuals, extracts their 3D body skeletons, tracks them through time, and classifies the movements according to the MDS-UPDRS gait scores. Experimental results show that our proposed method performs significantly better than chance and competing methods, with an F1-score of 0.83 and a balanced accuracy of 81%. This is the first benchmark for classifying PD patients based on MDS-UPDRS gait severity and could serve as an objective biomarker for disease severity. Our work demonstrates how computer-assisted technologies can be used to non-intrusively monitor patients and their motor impairments.

Healthcare

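The pipeline the abstract describes (video, then 3D skeletons tracked over time, then an MDS-UPDRS gait class) can be illustrated with a generic temporal classifier. The sketch below assumes a 17-joint skeleton and four score classes and uses a plain GRU; it is a stand-in, not the paper's actual pose-extraction or classification model.

```python
# Generic sketch of the final stage of the described pipeline: classify a
# sequence of 3D body skeletons into an MDS-UPDRS gait score. The joint
# count, class count, and GRU architecture are illustrative assumptions.
import torch
import torch.nn as nn

NUM_JOINTS, NUM_CLASSES = 17, 4  # assumed skeleton format and score range

class GaitScorer(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(input_size=NUM_JOINTS * 3, hidden_size=hidden,
                          batch_first=True)
        self.head = nn.Linear(hidden, NUM_CLASSES)

    def forward(self, skeletons: torch.Tensor) -> torch.Tensor:
        # skeletons: (batch, frames, joints, xyz)
        b, t, j, c = skeletons.shape
        frames = skeletons.reshape(b, t, j * c)  # flatten joints per frame
        _, h = self.rnn(frames)                  # summarize the gait sequence
        return self.head(h[-1])                  # logits over gait scores

model = GaitScorer()
clips = torch.randn(2, 120, NUM_JOINTS, 3)  # two invented 120-frame walks
print(model(clips).shape)  # torch.Size([2, 4])
```
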
Domain Shift and Emerging Questions in Facial Recognition Technology
Daniel E. Ho, Emily Black, Maneesh Agrawala, Fei-Fei Li
Quick Read | Nov 01, 2020
policy brief

This brief urges transparent, verifiable standards for facial-recognition systems and calls for a moratorium on government use until rigorous in-domain testing frameworks are established.

Privacy, Safety, Security
Regulation, Policy, Governance