Running Phi-4 14B on NVIDIA GeForce RTX 4060 Ti 16GB
Verdict: fits comfortably
NVIDIA GeForce RTX 4060 Ti 16GB runs Phi-4 14B comfortably at Q4_K_M with 5 GB of headroom for context.
- Model size: 14B parameters
- Memory available: 16 GB VRAM
- Recommended quant: Q4_K_M (highest quality that fits)
Quick start with Ollama
1. Install: `ollama pull phi4:14b`
2. Run: `ollama run phi4:14b`

The default quant in Ollama is Q4_K_M. To use a different quant, append it to the tag, e.g. `phi4:14b-q5_K_M`.
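Once the model is pulled, you can also call it programmatically through Ollama's local REST API. The sketch below is a minimal example assuming the server is running on its default port (11434); the prompt and the 8K context setting are placeholders.

```python
import json
import urllib.request

# Minimal call to Ollama's local REST API (default port 11434).
# Assumes `ollama pull phi4:14b` has already been run.
payload = {
    "model": "phi4:14b",
    "prompt": "Summarize what Q4_K_M quantization means.",  # placeholder prompt
    "stream": False,
    "options": {"num_ctx": 8192},  # context window; larger values need more VRAM
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())

print(result["response"])
```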
Variants and what fits
| Quantization | File size | VRAM required | Fits on NVIDIA GeForce RTX 4060 Ti 16GB? |
|---|---|---|---|
| Q4_K_M | 8.4 GB | 11 GB | Yes |
| Q8_0 | 15.0 GB | 18 GB | No |
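The VRAM column is roughly the model file plus the KV cache and runtime overhead. The sketch below reproduces that estimate; the layer and head counts are assumptions based on Phi-4's published architecture, and the 1.5 GB overhead term is a rough placeholder rather than a measured value.

```python
# Back-of-envelope VRAM estimate: weights + KV cache + runtime overhead.
# Architecture numbers are assumptions (Phi-4: ~40 layers, 10 KV heads,
# head dim 128); adjust them if the published config differs.
def estimate_vram_gb(
    file_size_gb: float,
    context_len: int,
    n_layers: int = 40,
    n_kv_heads: int = 10,
    head_dim: int = 128,
    kv_bytes_per_elem: int = 2,   # fp16 KV cache
    overhead_gb: float = 1.5,     # CUDA context, scratch buffers (placeholder)
) -> float:
    # K and V caches: 2 tensors per layer, each n_kv_heads * head_dim wide,
    # one entry per token of context.
    kv_cache_bytes = 2 * n_layers * n_kv_heads * head_dim * context_len * kv_bytes_per_elem
    return file_size_gb + kv_cache_bytes / 1024**3 + overhead_gb

# Q4_K_M at an 8K context: about 8.4 + 1.6 + 1.5 ≈ 11.5 GB
print(f"{estimate_vram_gb(8.4, 8192):.1f} GB")
```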
Real benchmarks
| Tool | Quant | Context | tok/s | VRAM used | Source |
|---|---|---|---|---|---|
| Ollama | Q4_K_M | 8,192 | 28.3 tok/s | 9.4 GB | community |
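You can reproduce the throughput figure on your own card: Ollama's non-streaming response includes `eval_count` (generated tokens) and `eval_duration` (decode time in nanoseconds). The sketch below assumes the server is on its default port and uses a placeholder prompt.

```python
import json
import urllib.request

def measure_tok_s(model: str = "phi4:14b", prompt: str = "Write 300 words on GPUs.") -> float:
    """Ask Ollama for a completion and compute decode throughput."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        r = json.loads(resp.read())
    # eval_duration is reported in nanoseconds
    return r["eval_count"] / (r["eval_duration"] / 1e9)

print(f"{measure_tok_s():.1f} tok/s")
```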
Frequently asked questions
Can NVIDIA GeForce RTX 4060 Ti 16GB run Phi-4 14B?
Yes. NVIDIA GeForce RTX 4060 Ti 16GB runs Phi-4 14B comfortably at Q4_K_M, with about 5 GB of headroom left for context.
What quantization should I use?
Q4_K_M is the highest-quality variant of Phi-4 14B that fits in 16 GB VRAM. Lower-bit quants will be smaller but lose some quality.
How fast will it be?
Community benchmarks report about 28.3 tok/s with Q4_K_M at an 8,192-token context on this combination.
See also: Phi-4 14B, NVIDIA GeForce RTX 4060 Ti 16GB, all benchmarks.