Computer Vision | Stanford HAI

Computer vision is enhancing machines’ ability to interpret and act on visual data, transforming sectors like healthcare, security, and manufacturing.

Fei-Fei Li Wins Queen Elizabeth Prize for Engineering
Shana Lynch
Nov 07, 2025
News
Topics: Computer Vision · Machine Learning

The Stanford HAI co-founder is recognized for breakthroughs that propelled computer vision and deep learning, and for championing human-centered AI and industry innovation.

Finding Monosemantic Subspaces and Human-Compatible Interpretations in Vision Transformers through Sparse Coding
Romeo Valentin, Vikas Sindhwani, Sumeet Singh, Vincent Vanhoucke, Mykel Kochenderfer
Jan 01, 2025
Research
Topics: Computer Vision

We present a new method of deconstructing class activation tokens of vision transformers into a new, overcomplete basis, where each basis vector is “monosemantic” and affiliated with a single, human-compatible conceptual description. We achieve this through a highly optimized and customized version of the K-SVD algorithm, which we call Double-Batch K-SVD (DBK-SVD). We demonstrate the efficacy of our approach on the sbucaptions dataset, using CLIP embeddings and comparing our results to a Sparse Autoencoder (SAE) baseline. Our method significantly outperforms SAE in terms of reconstruction loss, recovering approximately 2/3 of the original signal compared to 1/6 for SAE. We introduce novel metrics for evaluating explanation faithfulness and specificity, showing that DBK-SVD produces more diverse and specific concept descriptions. We therefore show empirically for the first time that disentangling concepts arising in Vision Transformers is possible, a claim previously questioned when an additional sparsity constraint is applied. Our research opens new avenues for model interpretability, failure mitigation, and downstream task domain transfer in vision transformer models. An interactive demo showcasing our results can be found at https://disentangling-sbucaptions.xyz, and we make our DBK-SVD implementation openly available at https://github.com/RomeoV/KSVD.jl.
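The authors' DBK-SVD implementation is the Julia package linked above. As a rough, unofficial illustration of the underlying idea (sparse coding over a learned overcomplete dictionary, alternating orthogonal matching pursuit with per-atom rank-1 SVD updates), here is a minimal vanilla K-SVD sketch in Python; the random data, dimensions, and sparsity level are toy stand-ins for the paper's CLIP embeddings, and none of DBK-SVD's batching optimizations are reproduced.

```python
# Minimal vanilla K-SVD sketch (NOT the authors' DBK-SVD; dimensions, data,
# and sparsity level below are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)

def omp(D, x, k):
    """Orthogonal matching pursuit: approximate x with k atoms from D."""
    residual, idx = x.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))  # most correlated atom
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

def ksvd(X, n_atoms, k, n_iter=3):
    """Alternate sparse coding (OMP) with per-atom rank-1 SVD dictionary updates."""
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                # unit-norm initial atoms
    for _ in range(n_iter):
        A = np.stack([omp(D, X[:, i], k) for i in range(X.shape[1])], axis=1)
        for j in range(n_atoms):
            users = np.nonzero(A[j])[0]           # samples that use atom j
            if users.size == 0:
                continue
            A[j, users] = 0.0
            E = X[:, users] - D @ A[:, users]     # residual with atom j removed
            U, S, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j], A[j, users] = U[:, 0], S[0] * Vt[0]  # best rank-1 refit
    return D, A

# Toy stand-in for CLIP embeddings: 64-dim vectors decomposed over a
# 2x-overcomplete basis with 5 active atoms per sample.
X = rng.standard_normal((64, 256))
D, A = ksvd(X, n_atoms=128, k=5)
recovered = 1 - np.linalg.norm(X - D @ A) ** 2 / np.linalg.norm(X) ** 2
print(f"fraction of signal variance recovered: {recovered:.2f}")
```

The per-atom SVD step is what distinguishes K-SVD from plain alternating least squares: each dictionary atom and its coefficients are refit jointly as the best rank-1 approximation of the residual.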

Using AI to Understand Residential Solar Power
Zhecheng Wang, Marie-Louise Arlt, Chad Zanocco, Arun Majumdar, Ram Rajagopal
Sep 28, 2023
Policy Brief (Quick Read)
Topics: Energy, Environment · Computer Vision

This brief introduces a computer-vision approach to analyzing solar panel adoption in U.S. households that can help policymakers tailor incentive mechanisms.

Saw, Sword, or Shovel: AI Spots Functional Similarities Between Disparate Objects
Andrew Myers
Oct 13, 2025
News
Topics: Robotics · Computer Vision

With a new computer vision model that recognizes the real-world utility of objects in images, researchers at Stanford look to push the boundaries of robotics and AI.

ReMix: Optimizing Data Mixtures for Large Scale Imitation Learning
Joey Hejna, Chethan Anand Bhateja, Yichen Jiang, Karl Pertsch, Dorsa Sadigh
Sep 05, 2024
Research
Topics: Computer Vision · Robotics · Natural Language Processing

Increasingly large robotics datasets are being collected to train larger foundation models in robotics. However, despite the fact that data selection has been of utmost importance to scaling in vision and natural language processing (NLP), little work in robotics has questioned what data such models should actually be trained on. In this work we investigate how to weigh different subsets or “domains” of robotics datasets during pre-training to maximize worst-case performance across all possible downstream domains using distributionally robust optimization (DRO). Unlike in NLP, we find that these methods are hard to apply out of the box due to varying action spaces and dynamics across robots. Our method, ReMix, employs early stopping and action normalization and discretization to counteract these issues. Through extensive experimentation on both the Bridge and OpenX datasets, we demonstrate that data curation can have an outsized impact on downstream performance. Specifically, domain weights learned by ReMix outperform uniform weights by over 40% on average and human-selected weights by over 20% on datasets used to train the RT-X models.
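ReMix's full recipe is specific to the paper, but its core DRO ingredient, learning mixture weights that hedge against the worst-performing domain, is commonly implemented as a Group-DRO-style exponentiated-gradient update on per-domain losses. The sketch below is an illustrative reconstruction under that assumption; the loss values, step size, and function name are invented for the example.

```python
# Illustrative Group-DRO-style mixture reweighting (NOT the ReMix codebase).
# Domains with persistently high loss receive exponentially more weight.
import numpy as np

def dro_mixture_weights(per_step_domain_losses, eta=0.1):
    """Exponentiated-gradient ascent on the mixture simplex."""
    n_domains = len(per_step_domain_losses[0])
    w = np.full(n_domains, 1.0 / n_domains)       # start from the uniform mixture
    for losses in per_step_domain_losses:
        w = w * np.exp(eta * np.asarray(losses))  # upweight high-loss domains
        w /= w.sum()                              # project back onto the simplex
    return w

# Toy example with 4 domains; domain 2 is consistently hardest and gains weight.
rng = np.random.default_rng(0)
steps = [rng.uniform(0.2, 0.6, size=4) + np.array([0.0, 0.0, 0.5, 0.0])
         for _ in range(100)]
print(dro_mixture_weights(steps).round(3))
```

In a real training loop the per-domain losses would come from periodically evaluating the current policy, and the learned weights would set how often each dataset is sampled during pre-training.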

Evaluating Facial Recognition Technology: A Protocol for Performance Assessment in New Domains
Daniel E. Ho, Emily Black, Maneesh Agrawala, Fei-Fei Li
Nov 01, 2020
White Paper (Deep Dive)
Topics: Computer Vision · Regulation, Policy, Governance

This white paper provides scientifically grounded recommendations for contextualizing calls to test the operational accuracy of facial recognition technology.

All Work Published on Computer Vision

Ambient Intelligence, Human Impact
May 07, 2025
News
Topics: Healthcare · Computer Vision

Health care providers struggle to catch early signals of cognitive decline. AI and computational neuroscientist Ehsan Adeli’s innovative computer vision tools may offer a solution.
From Brain to Machine: The Unexpected Journey of Neural Networks
Katharine Miller
Nov 18, 2024
News
Topics: Machine Learning · Computer Vision

How early cognitive research funded by the NSF paved the way for today’s AI breakthroughs, and how AI is now inspiring new understandings of the human mind.
On GPS: The Birth of Modern Artificial Intelligence
CNN
Sep 01, 2024
Media Mention
Topics: Computer Vision · Robotics · Machine Learning

Fareed Zakaria speaks with “Godmother of AI” Fei-Fei Li about her journey as a computer scientist and how it influenced the development of modern AI.
Beyond Algorithms: The Human Faces Driving Machine Learning Forward
Tech Times
Jul 25, 2024
Media Mention
Topics: Ethics, Equity, Inclusion · Computer Vision

HAI Co-Director Fei-Fei Li is recognized for her commitment to ethical AI and interdisciplinary research, continuing to shape the future of AI development and application.
Peering into the Black Box of AI Medical Programs
Adam Hadhazy
Feb 06, 2024
News
Topics: Computer Vision

To realize the benefits of AI in detecting diseases such as skin cancer, doctors need to trust the decisions rendered by AI. That requires a better understanding of its internal reasoning.
Meet 12 Moonshots in AI
Shana Lynch
Dec 11, 2023
News
Topics: Machine Learning · Computer Vision

Stanford scholars explore advances in foundation models, next-generation chips, and causal models at the recent Hoffman-Yee Symposium.