
DeepSeek Coder V2 236B

Full DeepSeek Coder V2. 236B total / 21B active MoE coder.

License: DeepSeek License · Released: Jun 17, 2024 · Context: 131,072 tokens

Overview

DeepSeek Coder V2 236B is the full-size DeepSeek Coder V2: a Mixture-of-Experts (MoE) code model with 236B total parameters, of which about 21B are active per token. Built on DeepSeek-V2, it targets code generation, completion, and reasoning; sparse routing is what keeps per-token compute at a small fraction of the total parameter count (see the sketch below).
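
To make the total-versus-active split concrete, here is a minimal top-k routing sketch in NumPy. Every name and size in it (the 8 toy experts, the router matrix, `moe_forward`) is invented for illustration; DeepSeek's production routing also uses shared experts and other refinements not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts, top_k, d_model = 8, 2, 16   # toy sizes; the real model is far larger
tokens = rng.standard_normal((4, d_model))              # a batch of 4 token vectors
router_w = rng.standard_normal((d_model, n_experts))    # router projection
experts = rng.standard_normal((n_experts, d_model, d_model))  # toy expert FFNs

def moe_forward(x):
    scores = x @ router_w                           # (tokens, experts)
    top = np.argsort(scores, axis=-1)[:, -top_k:]   # indices of the top-k experts
    gates = np.take_along_axis(scores, top, axis=-1)
    gates = np.exp(gates) / np.exp(gates).sum(-1, keepdims=True)  # softmax over chosen experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        for slot in range(top_k):
            e = top[t, slot]
            out[t] += gates[t, slot] * (x[t] @ experts[e])  # only k of n experts ever run
    return out

print(moe_forward(tokens).shape)  # (4, 16)
```

The point of the inner loop is that each token touches only `top_k` of the `n_experts` weight matrices, which is why a 236B-parameter model can run with roughly 21B parameters' worth of compute per token.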

Family & lineage

How this model relates to others in its lineage: family members share architecture and training-data roots, while parent/child edges record direct distillation or fine-tune relationships.

Family siblings (deepseek-coder)

  • DeepSeek Coder V2 Lite: 16B total / 2.4B active MoE, the smaller member of this family suited to single-GPU use

Strengths

  • MoE coding at frontier scale: strong code generation while only 21B of 236B parameters are active per token

Weaknesses

  • Cluster-only deployment: even the smallest listed quantization needs 160 GB of VRAM, far beyond any consumer GPU

Quantization variants

Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.

Quantization    File size    VRAM required
Q4_K_M          134.0 GB     160 GB
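
As a sanity check on the table, quantized file size is roughly parameter count times bits per weight. The figures below are rough averages (Q8_0 and F16 are not listed above and appear only for scale):

```python
# Back-of-the-envelope file-size estimates from average bits per weight.
TOTAL_PARAMS = 236e9
BITS_PER_WEIGHT = {"Q4_K_M": 4.5, "Q8_0": 8.5, "F16": 16.0}  # rough averages

for name, bits in BITS_PER_WEIGHT.items():
    size_gb = TOTAL_PARAMS * bits / 8 / 1e9
    print(f"{name:>7}: ~{size_gb:,.0f} GB on disk")
```

Q4_K_M comes out near 133 GB, consistent with the listed 134.0 GB; the 160 GB VRAM figure adds headroom for the KV cache and activations.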

Get the model

HuggingFace

Original weights

huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Instruct

Source repository with the original weights; no pre-quantized files ship here, so you quantize them yourself for local inference (see the sketch below).
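
A minimal download sketch using `huggingface_hub` (the `snapshot_download` call is a real API); the llama.cpp script and binary names in the comments are typical of recent versions but should be checked against the llama.cpp repository before use:

```python
from huggingface_hub import snapshot_download

# Original weights are roughly 472 GB (236B params at 2 bytes each in BF16);
# make sure the target disk has room before starting.
local_dir = snapshot_download(
    repo_id="deepseek-ai/DeepSeek-Coder-V2-Instruct",
    local_dir="DeepSeek-Coder-V2-Instruct",
)
print("weights downloaded to", local_dir)

# Then convert and quantize with llama.cpp (shell commands; names may vary
# by llama.cpp version):
#   python convert_hf_to_gguf.py DeepSeek-Coder-V2-Instruct --outfile dscv2.gguf
#   ./llama-quantize dscv2.gguf dscv2-Q4_K_M.gguf Q4_K_M
```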

Hardware that runs this

Cards with enough VRAM for at least one quantization of DeepSeek Coder V2 236B. With Q4_K_M needing 160 GB, that means multi-GPU servers (for example, 2× 80 GB or 4× 40 GB data-center cards) rather than single consumer cards; the check below sums VRAM across GPUs.
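
A quick local check, assuming PyTorch with CUDA is installed, that total VRAM across visible GPUs clears the 160 GB bar from the table above:

```python
import torch

REQUIRED_GB = 160  # Q4_K_M requirement from the quantization table

n_gpus = torch.cuda.device_count()
total_gb = sum(
    torch.cuda.get_device_properties(i).total_memory for i in range(n_gpus)
) / 1e9

print(f"{total_gb:.0f} GB of VRAM across {n_gpus} GPU(s)")
print("Enough for Q4_K_M" if total_gb >= REQUIRED_GB
      else "Not enough for any listed quantization")
```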

Compare alternatives

Models worth comparing

Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.

Step up
More capable — bigger memory footprint
No reviewed models in the next tier up yet.

Frequently asked

What's the minimum VRAM to run DeepSeek Coder V2 236B?

160 GB of VRAM is enough to run DeepSeek Coder V2 236B at the Q4_K_M quantization (134.0 GB file size, plus headroom for the KV cache and activations). Higher-quality quantizations need more.

Can I use DeepSeek Coder V2 236B commercially?

Yes. DeepSeek Coder V2 236B ships under the DeepSeek License, which permits commercial use subject to its use-based restrictions. Always read the license text before deployment.

What's the context length of DeepSeek Coder V2 236B?

DeepSeek Coder V2 236B supports a context window of 131,072 tokens (128K).
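
To see how much of that window a given input consumes, a short sketch using the `transformers` tokenizer for this repository (`my_module.py` is a hypothetical input file, not from this page):

```python
from transformers import AutoTokenizer

CONTEXT = 131_072
tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Instruct")

source = open("my_module.py").read()  # hypothetical input file
n_tokens = len(tok.encode(source))
print(f"{n_tokens:,} tokens used of {CONTEXT:,} ({n_tokens / CONTEXT:.1%})")
```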

Source: huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Instruct

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.