# Running Mistral 7B Instruct v0.3 on NVIDIA GeForce RTX 4090

**Verdict: fits comfortably.** The NVIDIA GeForce RTX 4090 runs Mistral 7B Instruct v0.3 comfortably at Q5_K_M, with roughly 17 GB of headroom left for context.
- **Model size:** 7B params
- **Memory available:** 24 GB VRAM
- **Recommended quant:** Q5_K_M (highest quality that fits)
## Quick start with Ollama

1. Install the model:

   ```shell
   ollama pull mistral:7b
   ```

2. Run it:

   ```shell
   ollama run mistral:7b
   ```

The default quant in Ollama is Q4_K_M. To use a different quant, append its tag: `mistral:7b-q5_K_M`.
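Beyond the interactive CLI, a running Ollama instance can be called programmatically over its local REST API. A minimal sketch, assuming Ollama is listening on its default port 11434 and using the non-streaming `/api/generate` endpoint:

```python
# Sketch: querying a locally running Ollama server over its REST API.
# Assumes the default port 11434 and the /api/generate endpoint with
# stream=False, which returns a single JSON object.
import json
import urllib.request


def build_generate_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for a non-streaming /api/generate request."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(prompt: str, model: str = "mistral:7b") -> str:
    payload = json.dumps(build_generate_payload(model, prompt)).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The completed text is returned in the "response" field.
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    print(generate("Why is the sky blue? Answer in one sentence."))
```

The same payload works for any pulled model tag, including `mistral:7b-q5_K_M`.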
## Variants and what fits
| Quantization | File size | VRAM required | Fits on NVIDIA GeForce RTX 4090? |
|---|---|---|---|
| Q4_K_M | 4.4 GB | 6 GB | Yes |
| Q5_K_M | 5.1 GB | 7 GB | Yes |
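The "VRAM required" column is roughly the model file size plus the KV cache for the context window plus runtime overhead. A back-of-the-envelope sketch of that estimate; the overhead figure is an assumption, and the KV-cache constant is derived from Mistral 7B's architecture (32 layers, 8 KV heads, head dim 128, fp16 keys and values):

```python
# Rough VRAM estimate for a GGUF quant: weights (file size) + KV cache
# for the context window + a fixed runtime overhead. The overhead value
# is an illustrative assumption, not a measured number.

def estimate_vram_gb(
    file_size_gb: float,
    context_tokens: int = 4096,
    kv_bytes_per_token: int = 32 * 8 * 128 * 2 * 2,  # layers * kv_heads * head_dim * (K+V) * fp16
    overhead_gb: float = 0.8,  # CUDA context + scratch buffers (assumed)
) -> float:
    kv_cache_gb = context_tokens * kv_bytes_per_token / 1024**3
    return round(file_size_gb + kv_cache_gb + overhead_gb, 1)


# Ballpark check against the table above:
print(estimate_vram_gb(4.4))  # Q4_K_M
print(estimate_vram_gb(5.1))  # Q5_K_M
```

The estimates land slightly under the table's figures, which include extra headroom; the point is that file size alone understates real VRAM use.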
## Real benchmarks
| Tool | Quant | Context | tok/s | VRAM used | Source |
|---|---|---|---|---|---|
| Ollama | Q4_K_M | 4,096 | 112.3 tok/s | 5.1 GB | owner |
## Frequently asked
### Can the NVIDIA GeForce RTX 4090 run Mistral 7B Instruct v0.3?

Yes. It runs Mistral 7B Instruct v0.3 comfortably at Q5_K_M, with roughly 17 GB of headroom left for context.
### What quantization should I use?

Q5_K_M is the highest-quality variant of Mistral 7B Instruct v0.3 that fits in 24 GB of VRAM. Lower-bit quants are smaller but lose some quality.
### How fast will it be?

We measured 112.3 tok/s on this combination (Ollama, Q4_K_M, 4,096-token context).
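To translate that rate into wall-clock time, simple arithmetic suffices. This sketch counts only decode time and ignores prompt processing (prefill), which is a simplifying assumption:

```python
# Wall-clock time to generate a response at a measured decode rate.
# Ignores prompt-processing (prefill) time -- a simplifying assumption.

def generation_time_s(tokens: int, tok_per_s: float = 112.3) -> float:
    return tokens / tok_per_s


# A typical ~500-token answer at the benchmarked rate:
print(round(generation_time_s(500), 1))
```

At 112.3 tok/s, a 500-token answer finishes in under five seconds.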
See also: Mistral 7B Instruct v0.3, NVIDIA GeForce RTX 4090, all benchmarks.
Reviewed by RunLocalAI Editorial. See our editorial policy.