RUNLOCALAI · v38
deepseek · 236B parameters · Commercial OK · Reviewed May 2026

DeepSeek V2.5 236B

DeepSeek V2.5 — merged V2 chat + Coder. Pre-V3 baseline; 21B active MoE.

License: DeepSeek License · Released Sep 5, 2024 · Context: 131,072 tokens

Overview

DeepSeek V2.5 merges DeepSeek-V2-Chat and DeepSeek-Coder-V2 into a single 236B-parameter mixture-of-experts model, of which roughly 21B parameters are active per token. It is the last major release before DeepSeek V3 and serves as the pre-V3 baseline of the family.

Family & lineage

How this model relates to others in its lineage. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tune relationships.

Parent / base model
  • DeepSeek V3 (671B MoE) · 671B · Frontier
Family siblings (deepseek-v)
  • DeepSeek V3 Lite (16B MoE) · 16B · Consumer
  • DeepSeek V2.5 236B · 236B · You are here
  • DeepSeek V4 Flash (284B MoE) · 284B · Datacenter
  • DeepSeek V3 (671B MoE) · 671B · Frontier
  • DeepSeek V4 · 745B · Frontier
  • DeepSeek V4 Pro (1.6T MoE) · 1600B · Frontier
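If you want to script over this family (say, to pick the largest member that fits a VRAM budget), the listing above reduces to a small dictionary. This is an illustration of the page's data, not a site API:

```python
# Family members as shown in the lineage listing (names and sizes copied verbatim).
FAMILY = {
    "DeepSeek V3 Lite (16B MoE)": {"params_b": 16, "tier": "Consumer"},
    "DeepSeek V2.5 236B": {"params_b": 236, "tier": "You are here"},
    "DeepSeek V4 Flash (284B MoE)": {"params_b": 284, "tier": "Datacenter"},
    "DeepSeek V3 (671B MoE)": {"params_b": 671, "tier": "Frontier"},
    "DeepSeek V4": {"params_b": 745, "tier": "Frontier"},
    "DeepSeek V4 Pro (1.6T MoE)": {"params_b": 1600, "tier": "Frontier"},
}

def smaller_siblings(name: str) -> list[str]:
    """Family members with fewer total parameters than `name`."""
    cutoff = FAMILY[name]["params_b"]
    return [n for n, meta in FAMILY.items() if meta["params_b"] < cutoff]
```

Total parameter count is only a first-pass filter; active parameters and quantization decide what actually fits.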

Strengths

  • Multi-head latent attention (compresses the KV cache, keeping long-context memory low)
  • Mixture-of-experts: only 21B of 236B parameters active per token

Weaknesses

  • Superseded by V3 and V4 for new deployments

Quantization variants

Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.

Quantization | File size | VRAM required
Q4_K_M | 134.0 GB | 160 GB
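The 160 GB figure is roughly the quantized file size plus headroom for the KV cache and activations. A back-of-the-envelope estimator, where the ~20% overhead is an assumption rather than the site's exact formula:

```python
def estimate_vram_gb(file_size_gb: float, overhead: float = 0.20) -> float:
    """Rule-of-thumb VRAM estimate: weights plus headroom for KV cache
    and activations. The 20% overhead is an assumption; long contexts
    and large batch sizes need more."""
    return round(file_size_gb * (1 + overhead), 1)

# Q4_K_M weighs 134.0 GB on disk; the page lists 160 GB of VRAM required,
# consistent with this kind of margin.
print(estimate_vram_gb(134.0))  # 160.8
```

Treat the result as a floor, not a guarantee: runtimes differ in how much scratch memory they allocate.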

Get the model

HuggingFace

Original weights

huggingface.co/deepseek-ai/DeepSeek-V2.5

Source repository with original weights only: no prebuilt quantizations, so you must convert and quantize the weights yourself.

Hardware that runs this

Cards with enough VRAM for at least one quantization of DeepSeek V2.5 236B.

  • NVIDIA GB200 NVL72 · 13824 GB · nvidia
  • AMD Instinct MI355X · 288 GB · amd
  • AMD Instinct MI325X · 256 GB · amd
  • NVIDIA B200 · 192 GB · nvidia
  • AMD Instinct MI300X · 192 GB · amd
  • NVIDIA H100 NVL · 188 GB · nvidia
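Given the 160 GB requirement for Q4_K_M, a quick filter over a card list shows what qualifies. Card data is copied from the listing above (plus one deliberately undersized example); this is a sketch, not the site's selection logic:

```python
# VRAM in GB per card, taken from the hardware list above.
CARDS = {
    "NVIDIA GB200 NVL72": 13824,
    "AMD Instinct MI355X": 288,
    "AMD Instinct MI325X": 256,
    "NVIDIA B200": 192,
    "AMD Instinct MI300X": 192,
    "NVIDIA H100 NVL": 188,
    "NVIDIA RTX 5090": 32,  # example of a card that does NOT qualify
}

def cards_that_fit(required_gb: int) -> list[str]:
    """Cards with at least `required_gb` of VRAM, largest first."""
    fits = [(vram, name) for name, vram in CARDS.items() if vram >= required_gb]
    return [name for vram, name in sorted(fits, reverse=True)]

print(cards_that_fit(160))
```

Multi-GPU setups can also reach 160 GB in aggregate, but that depends on the runtime's tensor-split support, which this sketch ignores.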

Frequently asked

What's the minimum VRAM to run DeepSeek V2.5 236B?

160GB of VRAM is enough to run DeepSeek V2.5 236B at the Q4_K_M quantization (file size 134.0 GB). Higher-quality quantizations need more.

Can I use DeepSeek V2.5 236B commercially?

Yes — DeepSeek V2.5 236B ships under the DeepSeek License, which permits commercial use. Always read the license text before deployment.

What's the context length of DeepSeek V2.5 236B?

DeepSeek V2.5 236B supports a context window of 131,072 tokens (128K).
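A full 131,072-token context has a memory cost of its own on top of the weights. DeepSeek V2's multi-head latent attention caches one compressed latent plus one shared RoPE key per token per layer rather than full K/V heads. A sketch of the arithmetic, assuming the DeepSeek-V2 paper's dimensions (60 layers, 512-dim KV latent, 64-dim RoPE key; verify against the model config before relying on it):

```python
def mla_kv_cache_gb(tokens: int, layers: int = 60,
                    kv_latent_dim: int = 512, rope_dim: int = 64,
                    bytes_per_value: int = 2) -> float:
    """Approximate MLA KV-cache size in GiB: one compressed KV latent plus
    one shared RoPE key per token per layer, at fp16 (2 bytes per value).
    Dimension defaults are assumptions taken from the DeepSeek-V2 config."""
    per_token_bytes = layers * (kv_latent_dim + rope_dim) * bytes_per_value
    return round(tokens * per_token_bytes / 1024**3, 2)

print(mla_kv_cache_gb(131_072))  # 8.44
```

Under these assumptions a maxed-out context adds only ~8-9 GB, which is why MLA models stay practical at long context compared with standard multi-head attention.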

Source: huggingface.co/deepseek-ai/DeepSeek-V2.5

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.

Compare alternatives

Models worth comparing

Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.

Same tier
Models in the same parameter band as this one
  • DeepSeek V4 Pro (1.6T MoE) · deepseek · 1600B · unrated
  • Qwen 3.5 235B-A17B (MoE) · qwen · 397B · unrated
  • Qwen 3 235B-A22B · qwen · 235B · unrated
  • DeepSeek V4 Flash (284B MoE) · deepseek · 284B · unrated
Step up
More capable — bigger memory footprint
No verdicted models in the next tier up yet.
Step down
Smaller — faster, runs on weaker hardware
  • Llama 3.3 70B Instruct · llama · 70B · 9.1/10
  • DeepSeek R1 Distill Llama 70B · deepseek · 70B · 9.0/10