Running Mistral 7B Instruct v0.3 on NVIDIA GeForce RTX 4090

NVIDIA GeForce RTX 4090 runs Mistral 7B Instruct v0.3 comfortably at Q5_K_M with 17 GB of headroom for context.

By Fredoline Eruo · Last verified May 6, 2026

Model size: 7B parameters
Memory available: 24 GB
Recommended quant: Q5_K_M (highest quality that fits)

Quick start with Ollama

1. Pull the model: ollama pull mistral:7b
2. Run it: ollama run mistral:7b

Default quant in Ollama is Q4_K_M. To use a different quant, append it: mistral:7b-q5_K_M.
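Besides the CLI, a running Ollama server also exposes a local HTTP API (default port 11434) that you can call from scripts. A minimal sketch using only the standard library, assuming Ollama is running and mistral:7b has already been pulled:

```python
import json
import urllib.request

# Ollama's default local generate endpoint
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    # stream=False asks for one complete JSON response instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a non-streaming generate request and return the response text."""
    payload = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Usage would be generate("mistral:7b", "Summarize Q5_K_M in one sentence."); swap in mistral:7b-q5_K_M to hit the other quant.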

Variants and what fits

| Quantization | File size | VRAM required | Fits on NVIDIA GeForce RTX 4090? |
|---|---|---|---|
| Q4_K_M | 4.4 GB | 6 GB | Yes |
| Q5_K_M | 5.1 GB | 7 GB | Yes |
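The fit column is simple arithmetic: the quant's VRAM requirement against the card's 24 GB. A sketch of the same check, with sizes taken from the table above:

```python
# Total VRAM on the NVIDIA GeForce RTX 4090
GPU_VRAM_GB = 24.0

# VRAM required per quant, from the variants table above
VARIANTS = {"Q4_K_M": 6.0, "Q5_K_M": 7.0}

def fits(required_gb: float, vram_gb: float = GPU_VRAM_GB) -> bool:
    """True if the quant's working set fits in the card's VRAM."""
    return required_gb <= vram_gb

def headroom(required_gb: float, vram_gb: float = GPU_VRAM_GB) -> float:
    """Leftover VRAM (GB) available for KV cache / context."""
    return vram_gb - required_gb

for quant, required in VARIANTS.items():
    print(f"{quant}: fits={fits(required)}, headroom={headroom(required):.0f} GB")
```

For Q5_K_M this yields 17 GB of headroom, which is where the verdict figure above comes from.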

Real benchmarks

| Tool | Quant | Context | tok/s | VRAM used | Source |
|---|---|---|---|---|---|
| Ollama | Q4_K_M | 4,096 | 112.3 | 5.1 GB | owner |

Frequently asked

Can NVIDIA GeForce RTX 4090 run Mistral 7B Instruct v0.3?

NVIDIA GeForce RTX 4090 runs Mistral 7B Instruct v0.3 comfortably at Q5_K_M with 17 GB of headroom for context.

What quantization should I use?

Q5_K_M is the highest-quality variant of Mistral 7B Instruct v0.3 that fits in 24 GB VRAM. Lower-bit quants will be smaller but lose some quality.

How fast will it be?

Measured at 112.3 tok/s with Ollama at Q4_K_M and a 4,096-token context in our testing.
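At a fixed throughput, generation time scales linearly with output length. A rough back-of-the-envelope sketch using the measured figure:

```python
# Measured Q4_K_M throughput from the benchmark above
TOKENS_PER_SECOND = 112.3

def generation_seconds(num_tokens: int,
                       tok_per_s: float = TOKENS_PER_SECOND) -> float:
    """Estimated wall-clock seconds to generate num_tokens of output."""
    return num_tokens / tok_per_s

# A 500-token answer takes roughly 4.5 seconds at this rate
print(f"{generation_seconds(500):.1f} s")
```

This ignores prompt-processing time, which adds a small fixed cost before the first token.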

See also: Mistral 7B Instruct v0.3, NVIDIA GeForce RTX 4090, all benchmarks.

Reviewed by RunLocalAI Editorial. See our editorial policy.