Qwen 2.5 Coder 32B Instruct on NVIDIA GeForce RTX 4090
Measured this month.
Measurement
- tok/s: 38.2
- TTFT: 142 ms
- VRAM used: 21.8 GB
- RAM used: 4.8 GB
- Power: 372 W
- Quant: AWQ-INT4
- Context: 32K
- Run date: 2026-05-03
- Source: community
The AWQ-INT4 quant fits the 32B model plus a 32K context in 24 GB with roughly 2 GB of headroom. Served with vLLM 0.17.1 using --enable-chunked-prefill and --gpu-memory-utilization 0.9. A prefix-cache hit lowers TTFT to ~30 ms on a warm cache. The /stacks/local-coding-agent stack uses this exact configuration. Community-reported throughput is 35-42 tok/s depending on prompt shape; we cite a typical mid-prompt value. This is higher than llama.cpp Q4_K_M on the same hardware (32 tok/s) because vLLM's continuous batching handles concurrent agent tool calls without queueing.
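For readers who want to replicate the setup, here is a minimal sketch of the configuration above using vLLM's offline Python API. The Hugging Face repo ID is an assumption (pick whichever AWQ build you actually use), and enable_prefix_caching is our addition to reflect the warm-cache TTFT note; the other arguments mirror the CLI flags cited above.

```python
# Minimal sketch of the serving configuration described above.
# The model repo ID is illustrative; substitute your own AWQ build.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-Coder-32B-Instruct-AWQ",  # assumed HF repo ID
    quantization="awq",                 # AWQ-INT4 weights
    max_model_len=32768,                # 32K context window
    gpu_memory_utilization=0.9,         # matches --gpu-memory-utilization 0.9
    enable_chunked_prefill=True,        # matches --enable-chunked-prefill
    enable_prefix_caching=True,         # our addition: warm-cache TTFT ~30 ms
)

outputs = llm.generate(
    ["Write a Python function that parses a CSV header."],
    SamplingParams(max_tokens=256, temperature=0.2),
)
print(outputs[0].outputs[0].text)
```

Continuous batching is what separates this from the llama.cpp number: concurrent agent tool calls are interleaved into the running batch instead of waiting in a queue.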
Why this confidence tier?
Confidence is rule-based. Every factor below contributed to the tier. We never expose a single numeric score; the tier label is auditable through this explanation alone.
- Source: community submission
- Reproduce this benchmark → An independent reproduction with matching numbers lifts the tier and reduces single-source risk.
- Read the confidence methodology → Full editorial standards for tiering.
- Why we don't use percentages → Tier labels are auditable, with no opaque score.
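To make the rule-based mapping concrete, here is a hypothetical sketch of how factors could map straight to a tier label with no intermediate numeric score. The factor and tier names are illustrative only; the actual editorial rules live in the methodology page linked above.

```python
# Hypothetical sketch of rule-based tiering: factors map directly to a tier
# label, never to a numeric score. Factor and tier names are illustrative,
# not the site's actual rules.
def confidence_tier(source: str, reproduced: bool) -> str:
    if source == "editorial":
        return "verified" if reproduced else "measured"
    if source == "community":
        # An independent reproduction lifts a single-source community row.
        return "measured" if reproduced else "reported"
    return "unverified"

print(confidence_tier(source="community", reproduced=False))  # "reported"
```

Because every branch is a named rule, the tier can be audited by reading the factor list alone.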
Cohort intelligence
How this measurement compares to the rest of the corpus. Only comparable rows (same model + hardware first, with relaxations labelled) are used. We never average across runtimes or quant formats unless explicitly told to.
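As a hypothetical sketch of that comparability rule, the filters below select the two cohorts shown next. The Row fields and function names are illustrative, not the site's schema; note that the code only filters and labels, it never averages across runtimes or quant formats.

```python
# Hypothetical sketch of the cohort comparability rule described above.
# Field names are illustrative, not the site's schema.
from dataclasses import dataclass

@dataclass
class Row:
    model: str
    hardware: str
    quant: str
    runtime: str
    tok_s: float

def same_model_same_hardware(corpus: list[Row], ref: Row) -> list[Row]:
    """Strictest cohort: variance here is pure runtime / version drift."""
    return [r for r in corpus
            if r.model == ref.model and r.hardware == ref.hardware]

def same_hardware_other_models(corpus: list[Row], ref: Row) -> list[Row]:
    """Relaxed cohort: same rig, other models; quant is labelled per row."""
    return [r for r in corpus
            if r.hardware == ref.hardware and r.model != ref.model]
```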
Same model + hardware, different runtime
1 matching row. Variance here is pure runtime / version drift; a wide spread suggests a runtime regression candidate worth investigating.
- 38.2 tok/s · rtx-4090 · AWQ-INT4 · Editorial
Same hardware, different model
5 matching rows. What else this rig can run, with the quant bucket labelled per row.
- 36.5 tok/s · rtx-4090 · AWQ-INT4 · Editorial
- 32.5 tok/s · rtx-4090 · AWQ-INT4 · Editorial
- 14.8 tok/s · rtx-4090 · Q4_K_M · Editorial
- 8.0 tok/s · rtx-4090 · Q4_K_M · Editorial
- 150.0 tok/s · rtx-4090 · Q4_K_M · Editorial
Reproduce this benchmark
Got the same model + hardware combo? Run the same measurement and submit your numbers. We'll pre-fill model, hardware, quant, and context; you just add your tok/s, VRAM, and runtime version. If your numbers match within ±15%, this benchmark gets a confidence lift and a reproduction badge.
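A minimal sketch of the ±15% rule, assuming it means a relative tolerance against the cited value; the function name is ours, not the site's.

```python
# Hypothetical sketch of the +/-15% reproduction match rule described above,
# read as a relative tolerance against the cited value.
def matches_within_tolerance(cited: float, reproduced: float,
                             tol: float = 0.15) -> bool:
    """True if the reproduced value is within tol (fractional) of the cited one."""
    return abs(reproduced - cited) <= tol * cited

# Example: a 35.1 tok/s reproduction of the cited 38.2 tok/s
print(matches_within_tolerance(38.2, 35.1))  # True: |35.1 - 38.2| = 3.1 <= 5.73
```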
Related
Drill into the entity pages for this measurement.
Cite or export
Reference this benchmark in your work or paste it into a README. Copy-to-clipboard, multiple formats; license is CC-BY-4.0 (attribution to RunLocalAI required).
```html
<a href="https://runlocalai.co/benchmarks/328" rel="noopener">RunLocalAI: Qwen 2.5 Coder 32B Instruct on NVIDIA GeForce RTX 4090 — 38.2 tok/s</a>
```
Next recommended step
Got the same model + hardware? Run it and submit your numbers — successful reproductions lift this benchmark's confidence tier.