RUNLOCALAI · v38
exaone · 32B parameters · Restricted · Reviewed May 2026

EXAONE 3.5 32B

LG AI Research's flagship Korean-ecosystem model. Strong on Korean/Japanese language tasks; competitive on English. License blocks commercial use without LG agreement.

License: EXAONE License · Released Nov 10, 2025 · Context: 32,768 tokens

How to run it

EXAONE 3.5 32B is LG AI Research's 32B dense model, part of LG's model family optimized for Korean + English bilingual performance with competitive general reasoning. Run it at Q4_K_M via Ollama (ollama pull exaone:32b) or llama.cpp with -ngl 999 -fa -c 8192; the Q4_K_M file is ~18 GB on disk. Minimum VRAM is 16 GB: an RTX 4080 (16GB) handles Q4_K_M with KV offload, and the recommended card is an RTX 4090 (24GB), which runs Q4_K_M comfortably at 16K context at ~35-55 tok/s. The architecture is compatible with standard inference stacks (llama.cpp support verified).

EXAONE 3.5 is LG's most recent release: strong Korean performance and competitive English, with less benchmark coverage than same-tier Qwen/Mistral models but solid quality. Use it for Korean language tasks, bilingual (KO+EN) applications, general reasoning, and coding; it is weaker on non-Korean/English languages and niche domains. Context is 32K advertised, with 16-32K practical at Q4 on 24 GB. Note the license: EXAONE ships under LG's own EXAONE License, which restricts commercial use without an LG agreement. It is not Apache 2.0.
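If you script against Ollama rather than typing commands, a minimal request to its /api/generate endpoint can be sketched like this. Assumptions: a local Ollama daemon on the default port 11434, and that the exaone:32b tag exists in your install (adjust to whatever ollama pull actually gave you). The num_ctx option plays the same role as llama.cpp's -c flag.

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str, num_ctx: int = 8192) -> dict:
    """Build a request body for Ollama's /api/generate endpoint.

    num_ctx mirrors llama.cpp's -c flag: the context window the server
    allocates KV cache for. Larger values cost VRAM (see hardware guidance).
    """
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"num_ctx": num_ctx},
    }

payload = build_generate_request("exaone:32b", "한국어로 자기소개를 해줘.")

# Uncomment when an Ollama daemon is actually running locally:
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(json.load(urllib.request.urlopen(req))["response"])
```

The same payload shape works for any model tag, so one helper covers the whole quantization ladder.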

Hardware guidance

Minimum: RTX 3060 12GB at Q3_K_M with KV offload. Recommended: RTX 4090 24GB at Q4_K_M (16K context). Optimal: RTX 5090 32GB at Q4_K_M (32K+ context).

VRAM math: 32B dense at Q4_K_M ≈ 18 GB of weights; KV cache at 16K adds ~8 GB, for ~26 GB total.

  • RTX 4090 24GB: Q4 plus 8-12K context fits on-GPU; at 16K, offload the KV cache.
  • RTX 3090 24GB: same profile as the 4090.
  • RTX 4080 16GB: Q4 plus ~2K context on-GPU.
  • MacBook Pro M4 Pro 24GB+: Q4 at 10-20 tok/s.
  • Cloud: A10 24GB at Q4_K_M. AWQ-INT4 drops to ~16 GB.

At 32B, EXAONE is hardware-efficient: one of the best Korean-capable models at consumer GPU size.
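The VRAM math above can be reproduced with a back-of-envelope estimator. The constants here are assumptions, not measured values: ~4.8 bits per weight for Q4_K_M, and a GQA configuration (64 layers, 8 KV heads, head dim 128) typical for this size class. Read the real values from the model's config.json; note a GQA config gives a smaller KV cache than the conservative ~8 GB figure quoted above.

```python
def weights_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate size of quantized weights in GB (decimal)."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                ctx: int, bytes_per_elem: int = 2) -> float:
    """KV cache size: 2 tensors (K and V) * layers * kv_heads * head_dim * ctx."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx * bytes_per_elem / 1e9

w = weights_gb(32, 4.8)               # ~19 GB, near the ~18 GB quoted above
kv = kv_cache_gb(64, 8, 128, 16_384)  # ~4 GB under the assumed GQA config
print(f"weights ≈ {w:.1f} GB, KV@16K ≈ {kv:.1f} GB, total ≈ {w + kv:.1f} GB")
```

Doubling the context doubles the KV term, which is why 16K fits on a 24 GB card only with offload.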

What breaks first

  1. Korean-centric tokenizer. EXAONE's tokenizer is optimized for Korean + English. Other languages (Japanese, Chinese, European) carry higher token counts, reducing effective context. Test your language's token efficiency before committing.
  2. English quality vs Korean. English performance is competitive but below Korean. For English-only tasks, same-tier models like Qwen 3 32B may outperform it.
  3. LG licensing updates. LG may change EXAONE's license between versions. Verify the license terms on the specific 3.5 release before any commercial use.
  4. Ecosystem maturity. EXAONE has less community quant coverage than Qwen or Mistral. Pre-converted GGUFs may be harder to find; check bartowski/TheBloke for availability.
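Token efficiency (point 1) is easy to measure empirically. The sketch below defines the metric with a stand-in whitespace tokenizer so it runs without downloading anything; for a real check, load the actual tokenizer with Hugging Face transformers and pass its encode method in (the repo name is the one linked later on this page).

```python
from typing import Callable, List

def tokens_per_char(encode: Callable[[str], List[int]], text: str) -> float:
    """Tokens emitted per character: lower means more context-efficient."""
    if not text:
        return 0.0
    return len(encode(text)) / len(text)

# Stand-in tokenizer: hashes whitespace-separated words to fake ids.
toy_encode = lambda s: [hash(w) & 0xFFFF for w in s.split()]

en = "The quick brown fox jumps over the lazy dog."
print(f"EN tokens/char (toy): {tokens_per_char(toy_encode, en):.3f}")

# Real check (assumes transformers is installed and the repo is accessible):
# from transformers import AutoTokenizer
# tok = AutoTokenizer.from_pretrained("LGAI-EXAONE/EXAONE-3.5-32B-Instruct")
# print(tokens_per_char(tok.encode, "안녕하세요, 반갑습니다."))
```

Run the same text in your target language through the real tokenizer; a noticeably higher ratio than English or Korean means your effective context shrinks accordingly.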

Runtime recommendation

llama.cpp for local use (verify EXAONE architecture support). Ollama if EXAONE tag exists. vLLM for serving. EXAONE uses standard transformer architecture — broad compatibility expected. For Korean-specific optimizations, LG may provide reference inference code.

Common beginner mistakes

Mistake: Assuming EXAONE matches Qwen/Mistral in English-only benchmarks.
Fix: EXAONE is Korean-optimized. English quality is good but not best-in-class; benchmark your English task against same-tier models before committing.

Mistake: Using EXAONE for languages other than Korean and English.
Fix: The tokenizer and training distribution are KO+EN heavy, so other languages underperform. Test your specific language.

Mistake: Expecting broad GGUF availability.
Fix: EXAONE has less community coverage; you may need to convert from the Hugging Face weights yourself. Check bartowski's repo first.

Mistake: Using the Llama chat template.
Fix: EXAONE uses LG's own chat template; verify it in tokenizer_config.json on Hugging Face. The Llama template produces garbled Korean and awkward English.
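The chat-template mistake is worth seeing concretely. The two renderers below are illustrative stand-ins, not LG's or Meta's real templates (those live in each model's tokenizer_config.json); the point is that the same messages render to different prompt strings, and a model only recognizes the format it was trained on.

```python
def render_llama_style(messages):
    """Toy Llama-3-style rendering (illustrative, not the real template)."""
    out = "<|begin_of_text|>"
    for m in messages:
        out += f"<|start_header_id|>{m['role']}<|end_header_id|>\n{m['content']}<|eot_id|>"
    return out

def render_chatml_style(messages):
    """Toy ChatML-style rendering (illustrative, not the real template)."""
    return "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )

msgs = [{"role": "user", "content": "안녕하세요"}]
print(render_llama_style(msgs))
print(render_chatml_style(msgs))

# In practice, let the tokenizer apply its own template
# (assumes transformers is installed and `tok` is loaded):
# prompt = tok.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)
```

Feeding a model the wrong framing tokens is exactly what produces the garbled output described above.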

Family & lineage

How this model relates to others in its lineage. Family members share architecture and training-data roots; parent / children edges record direct distillation or fine-tune relationships.

Family siblings (exaone-3.5)
  • EXAONE 3.5 2.4B (2.4B, Edge)
  • EXAONE 3.5 8B (7.8B, Consumer)
  • EXAONE 3.5 32B (32B, you are here)

Distilled / fine-tuned from this

  • EXAONE 3.5 8B (7.8B, Consumer)

Strengths

  • Best open Korean-language model as of May 2026
  • Strong CJK multilingual

Weaknesses

  • License blocks unrestricted commercial use

Quantization variants

Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.

Quantization: AWQ-INT4 · File size: 19.0 GB · VRAM required: 22 GB
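To extend this table to other quantizations, you can estimate file sizes from approximate bits per weight. The bpw figures below are rough community rules of thumb for llama.cpp k-quants, not measured numbers for this model's GGUFs; actual files vary by a GB or so.

```python
APPROX_BPW = {  # rough rules of thumb, not exact for any given GGUF
    "Q3_K_M": 3.9,
    "Q4_K_M": 4.8,
    "Q5_K_M": 5.7,
    "Q6_K":   6.6,
    "Q8_0":   8.5,
}

def est_file_gb(params_b: float, bpw: float) -> float:
    """File size estimate: billions of params * bits / 8 = GB (decimal)."""
    return params_b * bpw / 8

for quant, bpw in APPROX_BPW.items():
    print(f"{quant:7s} ≈ {est_file_gb(32, bpw):5.1f} GB")
```

Add a few GB on top of the file size for KV cache and runtime overhead when budgeting VRAM.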

Get the model

HuggingFace

Original weights

huggingface.co/LGAI-EXAONE/EXAONE-3.5-32B-Instruct

Source repository — direct quantization required.

Hardware that runs this

Cards with enough VRAM for at least one quantization of EXAONE 3.5 32B.

  • NVIDIA GB200 NVL72 · 13,824 GB
  • AMD Instinct MI355X · 288 GB
  • AMD Instinct MI325X · 256 GB
  • AMD Instinct MI300X · 192 GB
  • NVIDIA B200 · 192 GB
  • NVIDIA H100 NVL · 188 GB
  • NVIDIA H200 · 141 GB
  • Intel Gaudi 3 · 128 GB

Frequently asked

What's the minimum VRAM to run EXAONE 3.5 32B?

22GB of VRAM is enough to run EXAONE 3.5 32B at the AWQ-INT4 quantization (file size 19.0 GB). Higher-quality quantizations need more.

Can I use EXAONE 3.5 32B commercially?

EXAONE 3.5 32B is released under the EXAONE License, which has restrictions for commercial use. Review the license terms before using it in a product.

What's the context length of EXAONE 3.5 32B?

EXAONE 3.5 32B supports a context window of 32,768 tokens (32K).

Source: huggingface.co/LGAI-EXAONE/EXAONE-3.5-32B-Instruct

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.

Related — keep moving

Compare hardware
  • RTX 3090 vs RTX 5080 (24 vs 16 GB) →
  • Used 3090 vs 4090 →
Buyer guides
  • Best GPU for local AI — 32B-class models →
  • Best laptop for local AI →
  • Best Mac for local AI →
  • Best used GPU for local AI →
  • Will it run on my hardware? →
When it doesn't work
  • CUDA out of memory →
  • Ollama running slowly →
  • ROCm not detected →
  • Model keeps crashing →
Recommended hardware
  • NVIDIA GB200 NVL72 →
  • AMD Instinct MI355X →
  • AMD Instinct MI325X →
  • AMD Instinct MI300X →
  • NVIDIA B200 →
Alternatives
  • EXAONE 3.5 8B
  • EXAONE 3.5 2.4B
Before you buy

Verify EXAONE 3.5 32B runs on your specific hardware before committing money.

  • Will it run on my hardware? →
  • Custom hardware comparison →
  • GPU recommender (4 questions) →
RUNLOCALAI

Operator-grade instrument for local-AI hardware intelligence. Hand-written verdicts. Real benchmarks. Reproducible commands.

OP·Fredoline Eruo
DISCLOSURE

Some links on this site are affiliate links (Amazon Associates and other first-class retailers). When you buy through them, we earn a small commission at no extra cost to you. Affiliate links do not influence our verdicts — there are cards we rate highly that we don't have affiliate relationships with, and cards that sell well that we refuse to recommend. Read more →

Compare alternatives

Models worth comparing

Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.

Same tier
Models in the same parameter band as this one
  • Qwen 3 30B-A3B
    qwen · 30B
    unrated
  • Gemma 4 31B Dense
    gemma · 31B
    unrated
  • Nemotron 3 Nano (30B-A3B)
    other · 30B
    unrated
  • DeepSeek Coder V3
    deepseek · 33B
    unrated
Step up
More capable — bigger memory footprint
  • Llama 3.3 70B Instruct
    llama · 70B
    9.1/10
  • DeepSeek R1 Distill Llama 70B
    deepseek · 70B
    9.0/10
Step down
Smaller — faster, runs on weaker hardware
  • DeepSeek V3 Lite (16B MoE)
    deepseek · 16B
    unrated
  • Mistral Small 3 24B
    mistral · 24B
    8.4/10