Running Phi-4 14B on NVIDIA GeForce RTX 4060 Ti 16GB

NVIDIA GeForce RTX 4060 Ti 16GB runs Phi-4 14B comfortably at Q4_K_M with 5 GB of headroom for context.

By Fredoline Eruo · Last verified May 6, 2026

Model size: 14B parameters (Phi-4 14B)
Recommended quant: Q4_K_M (highest quality that fits in 16 GB)

Quick start with Ollama

1. Install the model:
   ollama pull phi4:14b
2. Run it:
   ollama run phi4:14b

Ollama's default quant for this model is Q4_K_M. To use a different quant, append its tag to the model name, e.g. phi4:14b-q5_K_M.
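The tag convention above (model:size-quant) can be sketched as a small helper — a hedged example only; the `ollama_tag` function is hypothetical, and whether a given quant tag exists depends on the Ollama library listing for phi4.

```shell
# Build an Ollama tag for a given quantization.
# Convention (per the note above): model:size for the default quant,
# model:size-quant for an explicit one.
ollama_tag() {
  local model="$1" size="$2" quant="$3"
  if [ -z "$quant" ]; then
    echo "${model}:${size}"           # default tag (Q4_K_M for phi4:14b)
  else
    echo "${model}:${size}-${quant}"  # explicit quant, e.g. q8_0
  fi
}

ollama_tag phi4 14b         # phi4:14b
ollama_tag phi4 14b q8_0    # phi4:14b-q8_0
```

You would then pass the result to `ollama pull` or `ollama run` as usual.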

Variants and what fits

Quantization | File size | VRAM required | Fits on RTX 4060 Ti 16GB?
Q4_K_M       | 8.4 GB    | 11 GB         | Yes
Q8_0         | 15.0 GB   | 18 GB         | No
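A rough way to read the table: VRAM required is the GGUF file size plus runtime overhead (KV cache, CUDA buffers). The sketch below back-calculates that overhead (~2.6 GB) from the table's Q4_K_M row — treat it as an estimate that grows with context length, not an exact rule; the `fits_in_vram` helper is hypothetical.

```shell
# Rough fit check: file size + estimated runtime overhead vs. available VRAM.
# Overhead of ~2.6 GB is inferred from the table (11 GB required for an
# 8.4 GB file); real usage varies with context length and backend.
fits_in_vram() {
  local file_mb="$1" vram_mb="$2" overhead_mb=2600
  if [ $((file_mb + overhead_mb)) -le "$vram_mb" ]; then
    echo "fits"
  else
    echo "does not fit"
  fi
}

fits_in_vram 8400 16384     # Q4_K_M on a 16 GB card: fits
fits_in_vram 15000 16384    # Q8_0 on a 16 GB card: does not fit
```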

Real benchmarks

Tool   | Quant  | Context | tok/s | VRAM used | Source
Ollama | Q4_K_M | 8,192   | 28.3  | 9.4 GB    | community

Frequently asked

Can NVIDIA GeForce RTX 4060 Ti 16GB run Phi-4 14B?

NVIDIA GeForce RTX 4060 Ti 16GB runs Phi-4 14B comfortably at Q4_K_M with 5 GB of headroom for context.

What quantization should I use?

Q4_K_M is the highest-quality variant of Phi-4 14B that fits in 16 GB VRAM. Lower-bit quants will be smaller but lose some quality.

How fast will it be?

Community-sourced benchmarks measure 28.3 tok/s at Q4_K_M with an 8,192-token context on this combination. Your numbers will vary with context length and quant.
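You can check your own generation speed with Ollama's verbose mode, which prints timing stats (including an "eval rate" line) after each response. The `eval_rate` helper below, which extracts that number for logging, is a hypothetical sketch.

```shell
# Measure locally (interactive; assumes Ollama is installed):
#   ollama run phi4:14b --verbose
# After each response, a stats block is printed, including a line like:
#   eval rate:            28.30 tokens/s

# Helper to pull the tokens/s figure out of that stats block on stdin:
eval_rate() {
  grep '^eval rate' | grep -o '[0-9][0-9.]*'
}

printf 'eval rate:            28.30 tokens/s\n' | eval_rate   # 28.30
```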

See also: Phi-4 14B, NVIDIA GeForce RTX 4060 Ti 16GB, all benchmarks.

Reviewed by RunLocalAI Editorial. See our editorial policy.