DeepSeek V4
Overview
DeepSeek's spring 2026 frontier mixture-of-experts (MoE) model: 745B total parameters, 38B active per token. The current open-weight benchmark leader on coding + math; it closes the gap with closed-source flagships on reasoning.
Family & lineage
How this model relates to others in its lineage. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tune relationships.
Strengths
- Open-weight benchmark leader (May 2026)
- 38B active params keep inference practical (see the compute sketch after this list)
- Strong on coding + math
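Why 38B active matters: decode compute scales with the parameters the router actually touches per token, not with the 745B resident in memory. A minimal sketch of that arithmetic, assuming the common ~2 FLOPs-per-parameter-per-token rule of thumb:

```python
# Back-of-envelope decode compute for DeepSeek V4.
# Assumes the common ~2 FLOPs per (active) parameter per token rule of thumb.
TOTAL_PARAMS = 745e9   # all experts, resident in memory
ACTIVE_PARAMS = 38e9   # parameters the MoE router activates per token

flops_dense = 2 * TOTAL_PARAMS  # cost if every parameter ran per token
flops_moe = 2 * ACTIVE_PARAMS   # cost of what actually runs

print(f"dense-equivalent: {flops_dense / 1e12:.2f} TFLOPs/token")
print(f"MoE actual:       {flops_moe / 1e12:.2f} TFLOPs/token")
print(f"compute saving:   {flops_dense / flops_moe:.0f}x")
```

Roughly a 20x compute saving per token over a dense 745B model; the practical bottleneck is fitting the weights, not the math.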
Weaknesses
- 745B total parameters require a multi-node cluster at any quantization
- Not deployable on a single machine
Quantization variants
Each quantization trades model quality for file size and VRAM. Q4_K_M is usually the most popular starting point for GGUF builds, but only an AWQ-INT4 build is listed here.
| Quantization | File size | VRAM required |
|---|---|---|
| AWQ-INT4 | 425.0 GB | 480 GB |
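The file size follows almost directly from bits per weight. A quick estimator, assuming AWQ-INT4 lands at roughly 4.56 effective bits per weight once per-group scales and zero points are counted (that figure is our assumption, not a published spec):

```python
# Rough quantized-footprint estimator for a 745B-parameter model.
TOTAL_PARAMS = 745e9

def file_size_gb(params: float, bits_per_weight: float) -> float:
    """Approximate on-disk size in decimal GB."""
    return params * bits_per_weight / 8 / 1e9

# ~4.56 effective bits/weight for AWQ-INT4 is an assumption
# (4-bit values plus per-group scales and zero points).
print(f"AWQ-INT4 estimate: {file_size_gb(TOTAL_PARAMS, 4.56):.0f} GB")  # ~425 GB
```

The 480 GB VRAM figure in the table adds headroom over the 425 GB of weights for KV cache and activations.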
Get the model
HuggingFace
Original weights
The source repository ships unquantized weights; you quantize them yourself.
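A minimal download sketch using the huggingface_hub client, pointed at the source repository listed at the bottom of this page. Expect the unquantized checkpoint to be very large (around 1.5 TB if the weights ship as BF16 at 2 bytes per parameter, which is our assumption); the local_dir path is a placeholder:

```python
# Pull the original (unquantized) DeepSeek V4 weights from HuggingFace.
# Requires: pip install huggingface_hub
from huggingface_hub import snapshot_download

path = snapshot_download(
    repo_id="deepseek-ai/DeepSeek-V4",  # repo from the Source link below
    local_dir="./deepseek-v4",          # placeholder path; pick your own
)
print(f"weights at {path}")
```

From there you quantize the checkpoint yourself before serving; the AWQ-INT4 build listed above is one such result.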
Hardware that runs this
Cards with enough VRAM for at least one quantization of DeepSeek V4.
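As a back-of-envelope check against common datacenter cards, here is how many of each you would need just to cover the 480 GB AWQ-INT4 requirement from the table above (card sizes are standard SKUs; tensor-parallel overhead is ignored):

```python
# How many GPUs cover the 480 GB VRAM requirement from the table above?
# Ignores tensor-parallel overhead; treats VRAM as simply additive.
import math

REQUIRED_GB = 480
CARDS = [("A100 80GB", 80), ("H100 80GB", 80),
         ("H200 141GB", 141), ("MI300X 192GB", 192)]

for name, vram_gb in CARDS:
    print(f"{name}: at least {math.ceil(REQUIRED_GB / vram_gb)} cards")
```

Six 80 GB cards, four H200s, or three MI300Xs at minimum; in every case well beyond a single card, per the Weaknesses note above.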
Models worth comparing
Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.
Frequently asked
What's the minimum VRAM to run DeepSeek V4?
Per the table above, the only listed build (AWQ-INT4) needs about 480 GB of VRAM across your GPUs.
Can I use DeepSeek V4 commercially?
What's the context length of DeepSeek V4?
Source: huggingface.co/deepseek-ai/DeepSeek-V4
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.