Running Llama 3.3 70B Instruct on NVIDIA GeForce RTX 4090

Llama 3.3 70B Instruct requires more memory than the NVIDIA GeForce RTX 4090's 24 GB of VRAM.

By Fredoline Eruo · Last verified May 6, 2026

Model size: 70B parameters
Memory available: 24 GB
Recommended quant: none fits
Highest quality that fits: none

Variants and what fits

Quantization | File size | VRAM required | Fits on NVIDIA GeForce RTX 4090?
Q4_K_M       | 40.0 GB   | 48 GB         | No
Q5_K_M       | 47.0 GB   | 56 GB         | No
Q8_0         | 70.0 GB   | 80 GB         | No
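The table's fit verdicts can be reproduced with a simple rule of thumb. This is a sketch, not the site's actual method: the ~1.2x overhead factor (KV cache, CUDA buffers, activations) is an assumption, and real usage varies with context length.

```python
# Rule-of-thumb check: does a quantized model fit in a GPU's VRAM?
# The 1.2x overhead factor is an illustrative assumption, not a vendor figure.

def fits_in_vram(file_size_gb: float, vram_gb: float, overhead: float = 1.2) -> bool:
    """Return True if the quant's estimated runtime footprint fits in VRAM."""
    return file_size_gb * overhead <= vram_gb

quants = {"Q4_K_M": 40.0, "Q5_K_M": 47.0, "Q8_0": 70.0}
for name, size_gb in quants.items():
    verdict = "fits" if fits_in_vram(size_gb, 24.0) else "does not fit"
    print(f"{name}: {verdict} on 24 GB")
```

Even the smallest listed quant (40 GB of weights) is well past 24 GB before any overhead, so every row comes out "No".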

Real benchmarks

Tool   | Quant  | Context | tok/s       | VRAM used | Source
Ollama | Q4_K_M | 8,192   | 14.8 tok/s  | 23.4 GB   | community

Frequently asked

Can NVIDIA GeForce RTX 4090 run Llama 3.3 70B Instruct?

No. Llama 3.3 70B Instruct requires more memory than the NVIDIA GeForce RTX 4090's 24 GB of VRAM.

What quantization should I use?

No quantization of Llama 3.3 70B Instruct fits entirely in the NVIDIA GeForce RTX 4090's 24 GB of VRAM. Pick a smaller model, or accept reduced speed by offloading part of the model to system RAM.

How fast will it be?

A community benchmark measured 14.8 tok/s with Ollama at Q4_K_M (8,192-token context, 23.4 GB of VRAM used). Since the Q4_K_M weights alone exceed 24 GB, that run likely offloaded part of the model to system RAM, which caps throughput.
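To translate that decode rate into wall-clock expectations, divide the desired response length by tokens per second. The 14.8 tok/s figure is the community benchmark above; the response lengths below are illustrative.

```python
# Back-of-envelope generation latency from a measured decode rate.
# 14.8 tok/s is the community-sourced benchmark for this GPU/model pair.

def seconds_for(tokens: int, tok_per_s: float = 14.8) -> float:
    """Approximate seconds to generate `tokens` at a steady decode rate."""
    return tokens / tok_per_s

for n in (100, 500, 1000):
    print(f"{n} tokens: ~{seconds_for(n):.0f} s")
```

At this rate a 500-token reply takes roughly half a minute, ignoring prompt-processing time, which adds to the total.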

See also: Llama 3.3 70B Instruct, NVIDIA GeForce RTX 4090, all benchmarks.

Reviewed by RunLocalAI Editorial. See our editorial policy.