What is RAG (Retrieval-Augmented Generation)? | Stanford HAI
What is RAG (Retrieval-Augmented Generation)?

Retrieval-Augmented Generation (RAG) is a technique that helps language models generate higher-quality outputs by allowing them to look up external, up-to-date information first. Before generating a response, the model retrieves relevant facts from a specific knowledge source, such as a company's internal documents or the live web. This retrieved information is then used to produce a more accurate and detailed answer, reducing the chances of the model providing incorrect or outdated information.
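The retrieve-then-generate loop described above can be sketched in a few lines of Python. The snippet below is a minimal illustration, not a production pipeline: it substitutes a toy bag-of-words similarity for the learned embeddings and vector database a real system would use, and it stops at prompt construction, since the final step would pass the augmented prompt to a language model. All documents and names in it are invented for the example.

```python
from collections import Counter
from math import sqrt

# Toy knowledge source. In a real RAG system this would be a vector
# database of embedded document chunks.
DOCUMENTS = [
    "HAI stands for the Stanford Institute for Human-Centered Artificial Intelligence.",
    "RAG retrieves relevant documents before the language model generates an answer.",
    "Vector databases store embeddings for fast similarity search.",
]

def embed(text):
    """Stand-in embedding: a bag-of-words term-frequency vector.
    Real systems use learned dense embeddings instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Rank documents by similarity to the query; return the top k."""
    q = embed(query)
    ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, k=1):
    """Augment the user's question with retrieved context. A real system
    would now send this prompt to a language model for generation."""
    context = "\n".join(retrieve(query, k))
    return (
        f"Context:\n{context}\n\n"
        f"Question: {query}\n"
        "Answer using only the context above."
    )

print(build_prompt("What does RAG do before generating?"))
```

Because the answer is grounded in the retrieved context rather than only in the model's training data, the generation step can cite current or private information the model never saw during training.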

RAG (Retrieval-Augmented Generation) mentioned at Stanford HAI

Explore Similar Terms:

Vector Database | Hallucination (in AI) | Large Language Model (LLM)

See Full List of Terms & Definitions

AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries
Faiz Surani, Daniel E. Ho
May 23
news

A new study reveals the need for benchmarking and public evaluations of AI tools in law.

Generating Medical Errors: GenAI and Erroneous Medical References
Kevin Wu, Eric Wu, Daniel E. Ho, James Zou
Feb 12
Healthcare
news

A new study finds that large language models used widely for medical assessments cannot back up claims.
Language Models in the Classroom: Bridging the Gap Between Technology and Teaching
Instructors and students of CS293
Apr 09
Education, Skills
Generative AI
Natural Language Processing
news

Instructors and students from Stanford class CS293/EDUC473 address the failures of current educational technologies and outline how to empower both teachers and learners through collaborative innovation.