qwen · 3B parameters · Commercial OK

Qwen 2.5 Coder 3B

Compact Qwen 2.5 Coder. Sweet spot for laptop autocomplete and small refactor agents.

License: Apache 2.0 · Released Nov 12, 2024 · Context: 32,768 tokens

Overview

Qwen 2.5 Coder 3B is the compact member of the Qwen 2.5 Coder family. It hits a sweet spot for laptop autocomplete and small refactor agents: Apache 2.0 licensed, a 32,768-token context window, and a Q4_K_M build that runs in about 4 GB of VRAM.

Family & lineage

How this model relates to others in its lineage. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tune relationships.

Strengths

  • Apache 2.0 license, so commercial use is permitted
  • Laptop-friendly: the Q4_K_M build runs in about 4 GB of VRAM

Weaknesses

  • Limited reasoning depth vs 7B+

Quantization variants

Each quantization trades some model quality for a smaller file and lower VRAM use. Q4_K_M is the most popular starting point.

Quantization    File size    VRAM required
Q4_K_M          1.9 GB       4 GB
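
As a quick check that the Q4_K_M build behaves on your machine, here is a minimal sketch using llama-cpp-python. The GGUF filename is a placeholder (an assumption, not an official artifact), since this page only lists the original-weight repository.

    # Minimal sketch: run a local Q4_K_M GGUF with llama-cpp-python
    # (pip install llama-cpp-python). The filename below is a placeholder
    # for whatever quantized file you produce or download.
    from llama_cpp import Llama

    llm = Llama(
        model_path="qwen2.5-coder-3b-instruct-q4_k_m.gguf",  # placeholder path
        n_ctx=32768,      # full advertised context; lower it to trim memory use
        n_gpu_layers=-1,  # offload all layers; the ~1.9 GB file fits in ~4 GB of VRAM
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Write a Python function that reverses a linked list."}],
        max_tokens=256,
    )
    print(out["choices"][0]["message"]["content"])

Note that a 32,768-token context adds KV-cache memory on top of the weight file, so on a 4 GB card you may prefer a smaller n_ctx.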

Get the model

HuggingFace

Original weights

huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct

Source repository for the original weights; quantize from these directly to produce the smaller builds listed above.
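
If you want the original weights rather than a quantized build, a minimal sketch with Hugging Face transformers looks like the following; it assumes transformers, torch, and accelerate are installed, and full-precision tensors need noticeably more memory than the GGUF figures above.

    # Minimal sketch: load the original Qwen2.5-Coder-3B-Instruct weights with transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen2.5-Coder-3B-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

    messages = [{"role": "user", "content": "Add type hints to: def add(a, b): return a + b"}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(input_ids, max_new_tokens=256)
    print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))

From these weights you can then quantize to GGUF (for example with llama.cpp's conversion and quantization tools) to reproduce the smaller builds in the table above.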

Hardware that runs this

Cards with enough VRAM for at least one quantization of Qwen 2.5 Coder 3B.

Compare alternatives

Models worth comparing

Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.

Step down
Smaller — faster, runs on weaker hardware

Frequently asked

What's the minimum VRAM to run Qwen 2.5 Coder 3B?

4 GB of VRAM is enough to run Qwen 2.5 Coder 3B at the Q4_K_M quantization (file size 1.9 GB). Higher-quality quantizations need more.

Can I use Qwen 2.5 Coder 3B commercially?

Yes — Qwen 2.5 Coder 3B ships under the Apache 2.0 license, which permits commercial use. Always read the license text before deployment.

What's the context length of Qwen 2.5 Coder 3B?

Qwen 2.5 Coder 3B supports a context window of 32,768 tokens (32K).

Source: huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.