Context Window
The context window is the maximum number of tokens a model can attend to at once, counting both the prompt and previously generated tokens. Llama 3.1 8B has a 131,072-token (128K) window, Llama 4 Scout advertises 10 million, and older models like the original GPT-3 had just 2,048.
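To make the limit concrete, here is a minimal sketch of a pre-flight budget check: the prompt's token count plus the requested generation budget must stay inside the window. The `count_tokens` heuristic is a hypothetical stand-in for a real tokenizer, and the dictionary simply restates the published sizes above.

```python
# Published context windows mentioned above (tokens).
CONTEXT_WINDOWS = {
    "llama-3.1-8b": 131_072,
    "llama-4-scout": 10_000_000,
    "gpt-3": 2_048,
}

def count_tokens(text: str) -> int:
    """Hypothetical stand-in for a real tokenizer: roughly 4 chars/token."""
    return max(1, len(text) // 4)

def fits_in_window(model: str, prompt: str, max_new_tokens: int) -> bool:
    """True if prompt tokens plus the generation budget fit the window."""
    return count_tokens(prompt) + max_new_tokens <= CONTEXT_WINDOWS[model]

print(fits_in_window("llama-3.1-8b", "Summarize this report: ...", 4_096))  # True
```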
Bigger context windows aren't free. Memory grows linearly with context (the KV cache scales with sequence length), and attention compute grows quadratically without optimizations like Flash Attention or sparse attention. A model that "supports 128K" may run out of VRAM well before reaching that ceiling on consumer hardware.
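To see the linear growth in numbers, the sketch below estimates KV cache size from a model's shape. The Llama 3.1 8B configuration used here (32 transformer layers, 8 KV heads via grouped-query attention, head dimension 128) matches its published architecture; the formula is a generic fp16 estimate, not any particular runtime's exact accounting.

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   context_len: int, bytes_per_elem: int = 2) -> int:
    """Estimate KV cache size: 2 tensors (K and V) per layer, each of
    shape [n_kv_heads, context_len, head_dim], at fp16 (2 bytes)."""
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem

# Llama 3.1 8B: 32 layers, 8 KV heads (GQA), head_dim 128, full 128K window.
full_window = kv_cache_bytes(32, 8, 128, 131_072)
print(f"{full_window / 2**30:.0f} GiB")  # 16 GiB at fp16, before weights
```

At the full 128K window the cache alone is about 16 GiB, on top of the model weights, which is why the ceiling is rarely reachable on a single consumer GPU.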
For local inference, the practical question is rarely "does this model support long context?" but rather "does my hardware have enough VRAM to actually use it?" Use /will-it-run to compute the maximum context that fits on your specific hardware.
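A rough version of that calculation inverts the KV cache formula: subtract the weights from free VRAM and divide by the per-token cache cost. This is a simplified sketch under fp16 assumptions, not how /will-it-run actually works; it ignores activation memory, quantized KV caches, and runtime overhead.

```python
def max_context_for_vram(free_vram_bytes: int, weight_bytes: int,
                         n_layers: int, n_kv_heads: int, head_dim: int,
                         bytes_per_elem: int = 2) -> int:
    """Tokens of KV cache that fit after loading the weights.
    Ignores activations and runtime overhead, so treat the result
    as an optimistic upper bound rather than a guarantee."""
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
    return max(0, (free_vram_bytes - weight_bytes) // per_token)

# 24 GB card, Llama 3.1 8B at fp16 (~16 GB of weights):
print(max_context_for_vram(24 * 2**30, 16 * 2**30, 32, 8, 128))  # 65536
```

On this estimate, a 24 GB card running unquantized Llama 3.1 8B tops out around 64K tokens, half the model's advertised window.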