What is a Transformer? | Stanford HAI
What is a Transformer?

A Transformer is a neural network architecture that revolutionized AI by using a mechanism called "attention" to process and understand relationships between all parts of input data simultaneously, rather than sequentially. Introduced in 2017, Transformers excel at capturing context and long-range dependencies in data, making them particularly powerful for language tasks. This architecture powers modern large language models like GPT, BERT, and Claude, and has expanded beyond text to applications in computer vision, protein folding, and other domains.
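To make the attention idea concrete, here is a minimal sketch of scaled dot-product self-attention, the core operation of the architecture introduced in 2017. The single-head, NumPy-only setup and the tensor sizes are simplifications for illustration, not a full Transformer layer.

```python
# Minimal single-head scaled dot-product self-attention (NumPy only).
# Real Transformers use multiple heads, learned per-layer projections, residual
# connections, and feed-forward blocks; this shows only the core idea:
# every position attends to every other position at once.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) token embeddings; w_*: (d_model, d_k) projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v             # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (seq_len, seq_len) pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ v                               # weighted mix of all positions

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 16, 8                     # illustrative sizes
x = rng.normal(size=(seq_len, d_model))
out = self_attention(x,
                     rng.normal(size=(d_model, d_k)),
                     rng.normal(size=(d_model, d_k)),
                     rng.normal(size=(d_model, d_k)))
print(out.shape)  # (5, 8): each of the 5 positions now encodes context from all 5
```

Because every position is compared with every other position in a single matrix product, the model captures long-range dependencies without stepping through the sequence one token at a time.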


Transformer mentioned at Stanford HAI

Explore Similar Terms:

Generative pre-trained transformer (GPT) | Foundation Model

See Full List of Terms & Definitions

A Generative Model for Raw Audio using Transformer Architectures
Prateek Verma, Chris Chafe
Dec 20
Research
Finding Monosemantic Subspaces and Human-Compatible Interpretations in Vision Transformers through Sparse Coding
Romeo Valentin, Vikas Sindhwani, Sumeet Singh, Vincent Vanhoucke, Mykel Kochenderfer
Jan 01
Research
Computer Vision

We present a new method of deconstructing class activation tokens of vision transformers into a new, overcomplete basis, where each basis vector is “monosemantic” and affiliated with a single, human-compatible conceptual description. We achieve this through the use of a highly optimized and customized version of the K-SVD algorithm, which we call Double-Batch K-SVD (DBK-SVD). We demonstrate the efficacy of our approach on the sbucaptions dataset, using CLIP embeddings and comparing our results to a Sparse Autoencoder (SAE) baseline. Our method significantly outperforms SAE in terms of reconstruction loss, recovering approximately 2/3 of the original signal compared to 1/6 for SAE. We introduce novel metrics for evaluating explanation faithfulness and specificity, showing that DBK-SVD produces more diverse and specific concept descriptions. We therefore show empirically for the first time that disentangling of concepts arising in Vision Transformers is possible, a statement that has previously been questioned when applying an additional sparsity constraint. Our research opens new avenues for model interpretability, failure mitigation, and downstream task domain transfer in vision transformer models. An interactive demo showcasing our results can be found at https://disentangling-sbucaptions.xyz, and we make our DBK-SVD implementation openly available at https://github.com/RomeoV/KSVD.jl.
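The abstract above rests on sparse coding: expressing each embedding as a combination of a few atoms drawn from an overcomplete dictionary. Below is a minimal sketch of that general idea using scikit-learn's dictionary learning as a generic stand-in; it is not the authors' DBK-SVD (their released implementation is the Julia package KSVD.jl linked above), and the embedding dimension, dictionary size, and sparsity level are illustrative assumptions.

```python
# Generic sparse-coding sketch: fit an overcomplete dictionary to embedding
# vectors, then explain each vector with a few atoms. Stand-in for the paper's
# DBK-SVD; sizes below are invented for illustration.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 512))          # stand-in for 1000 CLIP-style embeddings (dim 512)

dico = MiniBatchDictionaryLearning(
    n_components=1024,                    # overcomplete: more atoms than dimensions
    transform_algorithm="omp",            # sparse codes via orthogonal matching pursuit
    transform_n_nonzero_coefs=8,          # each embedding explained by a few atoms
    random_state=0,
)
codes = dico.fit(X).transform(X)          # (1000, 1024) sparse coefficient matrix
atoms = dico.components_                  # (1024, 512) dictionary of candidate "concepts"

# Reconstruction quality, analogous in spirit to the reconstruction-loss
# comparison reported in the abstract.
recon = codes @ atoms
explained = 1 - np.linalg.norm(X - recon) ** 2 / np.linalg.norm(X) ** 2
print(f"fraction of signal recovered: {explained:.2f}")
```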
A Composer’s Helper: Using AI to Create New Harmonies
Shana Lynch
Dec 04
News
Arts, Humanities

The Anticipatory Music Transformer assists composers in a way most generative AI cannot.
LABOR-LLM: Language-Based Occupational Representations with Large Language Models
Susan Athey, Herman Brunborg, Tianyu Du, Ayush Kanodia, Keyon Vafa
Dec 11
Research
Foundation Models, Natural Language Processing

Vafa et al. (2024) introduced a transformer-based econometric model, CAREER, that predicts a worker’s next job as a function of career history (an “occupation model”). CAREER was initially estimated (“pre-trained”) using a large, unrepresentative resume dataset, which served as a “foundation model,” and parameter estimation was continued (“fine-tuned”) using data from a representative survey. CAREER had better predictive performance than benchmarks. This paper considers an alternative where the resume-based foundation model is replaced by a large language model (LLM). We convert tabular data from the survey into text files that resemble resumes and fine-tune the LLMs using these text files with the objective to predict the next token (word). The resulting fine-tuned LLM is used as an input to an occupation model. Its predictive performance surpasses all prior models. We demonstrate the value of fine-tuning and further show that by adding more career data from a different population, fine-tuning smaller LLMs surpasses the performance of fine-tuning larger models.
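The approach described above starts by serializing tabular survey records into resume-like text and training a causal language model on it with the standard next-token objective. The sketch below illustrates that first step; the field names, record format, and the placeholder "gpt2" model are invented for illustration and are not the paper's actual serialization or fine-tuning setup.

```python
# Hypothetical sketch: render one worker's tabular career history as a small
# "resume" and compute the next-token (causal LM) loss that fine-tuning would
# minimize. Field names and model choice are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

def career_to_text(person):
    """Serialize one worker's career history as resume-like text."""
    lines = [f"Born {person['birth_year']}. Education: {person['education']}."]
    for job in person["jobs"]:
        lines.append(f"{job['year']}: {job['occupation']}")
    return "\n".join(lines)

record = {
    "birth_year": 1985,
    "education": "bachelor's degree",
    "jobs": [
        {"year": 2010, "occupation": "retail salesperson"},
        {"year": 2014, "occupation": "sales manager"},
    ],
}
text = career_to_text(record)

tok = AutoTokenizer.from_pretrained("gpt2")          # placeholder model, not the paper's
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tok(text, return_tensors="pt")
loss = model(**inputs, labels=inputs["input_ids"]).loss  # fine-tuning minimizes this
print(float(loss))
```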
Stanford’s Multimodal AI Model Advances Personalized Cancer Care
Vignesh Ramachandran
Jan 27
News
Healthcare

The MUSK model combines clinical notes and images to predict prognosis and immunotherapy response.
Unlocking New Frontiers: AI and the Sciences
Shana Lynch
Nov 27
News
Machine Learning

At Stanford HAI’s recent fall conference, scholars showed how artificial intelligence is opening up new approaches to studying science.