What is a Context Window? | Stanford HAI
What is a Context Window?

A context window is the amount of input (such as text) that an AI system can process at one time during a task. Think of it as a model's short-term memory: a model can only consider so much text when generating its next response, and once the conversation exceeds its context window, the model starts "forgetting" earlier parts of the exchange.
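The "forgetting" described above can be sketched in a few lines of code. This is an illustrative simplification, not how any particular model works: real systems count tokens with a tokenizer, while here whitespace-split words stand in as a rough proxy, and `trim_to_context_window` is a hypothetical helper name.

```python
def trim_to_context_window(messages, max_tokens):
    """Keep the most recent messages whose combined length fits the budget.

    Older messages are dropped first -- this is the "forgetting" a user
    sees once a long conversation exceeds the model's context window.
    """
    kept = []
    used = 0
    for msg in reversed(messages):   # walk from newest to oldest
        cost = len(msg.split())      # crude stand-in for a token count
        if used + cost > max_tokens:
            break                    # budget exhausted; older messages fall out
        kept.append(msg)
        used += cost
    return list(reversed(kept))      # restore chronological order

history = [
    "Hi, my name is Ada.",
    "Nice to meet you, Ada! How can I help?",
    "Summarize the report I sent earlier.",
]

# With a 15-word budget, the oldest message (which mentions the name "Ada")
# no longer fits and is dropped from what the model can "see".
print(trim_to_context_window(history, max_tokens=15))
```

With the 15-word budget in this example, only the two most recent messages survive, so a model fed the trimmed history would no longer know the user's name.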

Context Window mentioned at Stanford HAI

Explore Similar Terms:

Large Language Model (LLM) | Transformer | Inference

See Full List of Terms & Definitions

When AI Imagines a Tree: How Your Chatbot's Worldview Shapes Your Thinking
Katie Gray Garrison | Jul 28 | news
A new study on generative AI argues that addressing biases requires a deeper exploration of ontological assumptions, challenging the way we define fundamental concepts like humanity and connection.
Ethics, Equity, Inclusion | Generative AI
Exploring the Dangers of AI in Mental Health Care
Sarah Wells | Jun 11 | news
A new Stanford study reveals that AI therapy chatbots may not only lack effectiveness compared to human therapists but could also contribute to harmful stigma and dangerous responses.
Healthcare | Generative AI
An AI Social Coach Is Teaching Empathy to People with Autism
Sarah Wells | Aug 13 | news
A specialized chatbot named Noora is helping individuals with autism spectrum disorder practice their social skills on demand.
Healthcare | Natural Language Processing | Generative AI

Enroll in a Human-Centered AI Course

This HAI program covers technical fundamentals, business implications, and societal considerations.