What is Few-Shot Learning? | Stanford HAI

Few-shot learning is a machine learning approach in which a model learns to recognize new categories or perform new tasks from only a small number of training examples. This contrasts with traditional deep learning methods, which typically require thousands or millions of labeled examples, making few-shot learning particularly valuable when data is scarce, expensive to collect, or time-consuming to label.
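One common way to make this concrete is a nearest-class-mean ("prototype") classifier, in the spirit of prototypical networks: each class is represented by the mean of its few labeled examples, and queries are assigned to the nearest class mean. The sketch below is illustrative, not from the article; it assumes examples have already been embedded as vectors, and `prototype_classify` is a name chosen here.

```python
import numpy as np

def prototype_classify(support_x, support_y, query_x):
    """Few-shot classification via class prototypes (nearest class mean).

    support_x: (n, d) array of embedded support examples (the "shots")
    support_y: (n,) integer class labels for the support examples
    query_x:   (m, d) array of embedded query examples
    Returns an (m,) array of predicted labels.
    """
    classes = np.unique(support_y)
    # One prototype per class: the mean of its few support embeddings
    prototypes = np.stack(
        [support_x[support_y == c].mean(axis=0) for c in classes]
    )
    # Assign each query to the nearest prototype (Euclidean distance)
    dists = np.linalg.norm(query_x[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# 2-way, 2-shot toy example in a 2-D "embedding" space
support_x = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.2, 4.9]])
support_y = np.array([0, 0, 1, 1])
query_x = np.array([[0.1, 0.2], [4.8, 5.1]])
print(prototype_classify(support_x, support_y, query_x))  # → [0 1]
```

With only two examples per class, the classifier still separates the queries — the core promise of few-shot methods, though real systems rely on embeddings pretrained on large related datasets.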


Few-Shot Learning mentioned at Stanford HAI

Explore Similar Terms:

Zero-Shot Learning | Transfer Learning | Data Augmentation

General Few-shot Learners for Image Understanding and Generation
Abhishek Sinha, Jiaming Song, Chenlin Meng, Stefano Ermon
Dec 20
Research
WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia
Sina Semnani, Violet Yao, Monica Lam, Heidi Zhang
Dec 01
Research

This paper presents the first few-shot LLM-based chatbot that almost never hallucinates and has high conversationality and low latency. WikiChat is grounded on the English Wikipedia, the largest curated free-text corpus. WikiChat generates a response from an LLM, retains only the grounded facts, and combines them with additional information it retrieves from the corpus to form factual and engaging responses. We distill WikiChat based on GPT-4 into a 7B-parameter LLaMA model with minimal loss of quality, to significantly improve its latency, cost and privacy, and facilitate research and deployment. Using a novel hybrid human-and-LLM evaluation methodology, we show that our best system achieves 97.3% factual accuracy in simulated conversations. It significantly outperforms all retrieval-based and LLM-based baselines, and by 3.9%, 38.6% and 51.0% on head, tail and recent knowledge compared to GPT-4. Compared to previous state-of-the-art retrieval-based chatbots, WikiChat is also significantly more informative and engaging, just like an LLM. WikiChat achieves 97.9% factual accuracy in conversations with human users about recent topics, 55.0% better than GPT-4, while receiving significantly higher user ratings and more favorable comments.

Natural Language Processing
Foundation Models
Machine Learning
Generative AI
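The grounding loop the WikiChat abstract describes — draft a reply with the LLM, keep only the claims that can be verified against the corpus, then merge in retrieved passages — can be sketched as a toy pipeline. Everything below is an illustrative placeholder, not the paper's actual API: `grounded_reply` and the stub `llm`, `verify`, and `retrieve` functions are names invented here.

```python
def grounded_reply(user_turn, llm, retrieve, verify):
    """Toy sketch of a WikiChat-style grounding loop (hypothetical API).

    1. Draft candidate claims with the LLM.
    2. Retain only the claims that verify() confirms against the corpus.
    3. Combine the surviving facts with passages retrieve()d for the turn.
    """
    draft_claims = llm(user_turn)
    grounded = [c for c in draft_claims if verify(c)]
    evidence = retrieve(user_turn)
    return grounded + [p for p in evidence if p not in grounded]

# Toy stubs standing in for the real LLM, fact-checker, and retriever
llm = lambda q: ["Paris is the capital of France",
                 "Paris has 20 million people"]        # second claim is wrong
verify = lambda c: "capital" in c                      # pretend fact-checker
retrieve = lambda q: ["Paris population ~2.1 million (city proper)"]

print(grounded_reply("Tell me about Paris", llm, retrieve, verify))
```

The unverifiable draft claim is dropped and the retrieved passage is kept, mirroring the abstract's retain-then-combine step; the real system's claim extraction, retrieval, and verification are far more involved.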
Research
Generative AI: Perspectives from Stanford HAI
Russ Altman, Erik Brynjolfsson, Michele Elam, Surya Ganguli, Daniel E. Ho, James Landay, Curtis Langlotz, Fei-Fei Li, Percy Liang, Christopher Manning, Peter Norvig, Rob Reich, Vanessa Parli
Deep Dive | Mar 01
Research

A diversity of perspectives from Stanford leaders in medicine, science, engineering, humanities, and the social sciences on how generative AI might affect their fields and our world

Generative AI
Research
Video Pose Distillation for Few-Shot, Fine-Grained Sports Action Recognition
James Hong, Matthew Fisher, Michaël Gharbi, Kayvon Fatahalian
Jan 01
Research