Gemma 3 27B
Gemma 3 27B is Google's flagship open-weights model of 2025: natively multimodal, 128K context, distilled from Gemini-class data. It's the right pick when you want Google's training distribution plus multimodality in a single model that fits in 24 GB of VRAM.
Strengths
- Native vision-language: a single model, no separate vision adapter.
- 128K context with reasonable recall — better than Llama 3.1 8B's nominal 128K.
- Distillation from Gemini shows in writing quality and instruction polish.
Weaknesses
- The Gemma license is restrictive: its terms are more limiting than Apache-2.0 or the Llama license; review them before commercial use.
- Slightly weaker on hard reasoning than Qwen 3 32B at similar VRAM.
- No thinking-mode equivalent — single dense mode.
Performance (RTX 4090)
- Q4_K_M (16.5 GB): 60–75 tok/s decode, TTFT ~130 ms, fully on GPU
- Q5_K_M (19.4 GB): 50–62 tok/s
- Q8_0 (29 GB): partial offload, 18–26 tok/s
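To sanity-check these numbers on your own card, ollama run with the --verbose flag prints timing stats after each reply; the "eval rate" line is the decode speed (exact output format may vary across Ollama versions):

ollama run gemma3:27b-it-q4_K_M --verbose "Summarize Hamlet in two sentences."
# after the reply, look for the "eval rate: N tokens/s" line (decode throughput)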
Should you run it?
Yes, if you want native multimodality, Google's training distribution, and a single-card 24 GB runtime. No, if you're sensitive to license terms (Apache-licensed alternatives exist) or prioritize raw reasoning ceiling (Qwen 3 32B).
How it compares
- vs Qwen 3 32B → Qwen wins on reasoning and license; Gemma wins on multimodality and writing polish. Pick by job.
- vs Mistral Small 3 24B → Mistral wins on license simplicity; Gemma wins on multimodality.
- vs Gemma 3 12B → 27B is materially smarter; pick 27B if VRAM allows.
- vs Llama 3.3 70B → Llama 3.3 70B is smarter but ~3× slower on a 4090; Gemma 3 27B is the productivity pick at this VRAM.
Quick start
ollama pull gemma3:27b-it-q4_K_M
ollama run gemma3:27b-it-q4_K_M
Settings: Q4_K_M GGUF, 16384 ctx, full GPU on RTX 4090
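Ollama's default context window is smaller than 16384 tokens, so to reproduce the settings above, one option is a small Modelfile; the gemma3-27b-16k tag name here is just an example:

# pin the 16K context in a derived model tag
cat > Modelfile <<'EOF'
FROM gemma3:27b-it-q4_K_M
PARAMETER num_ctx 16384
EOF
ollama create gemma3-27b-16k -f Modelfile
ollama run gemma3-27b-16k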
Why this rating
8.2/10 — Google's 27B is a credible alternative in the dense mid-tier with native multimodal and a 128K context. Loses points to Qwen 3 32B (slightly smaller, slightly stronger) and Mistral Small 3 24B (cleaner license).
Overview
Pre-Gemma-4 flagship. Multimodal (4B+ variants), 128K context, 140 languages. Strong daily driver on 24GB cards.
Strengths
- Multimodal
- Multilingual
- 128K context
Weaknesses
- Superseded by Gemma 4
Quantization variants
Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 16.0 GB | 20 GB |
| Q8_0 | 29.0 GB | 34 GB |
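If you want a quantization the Ollama library doesn't tag, Ollama can also pull GGUF files directly from Hugging Face repos; the repo path below is a placeholder, so substitute a real GGUF repo for Gemma 3 27B:

ollama pull hf.co/<user>/<gguf-repo>:Q8_0
# the hf.co/{user}/{repo}:{quant} form works for any public GGUF repo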
Get the model
Ollama
One-line install
ollama run gemma3:27b
HuggingFace
Original weights
Source repository with the original weights; you'll need to convert and quantize them yourself (e.g., to GGUF) before running locally.
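A minimal sketch of that workflow with llama.cpp's conversion and quantization tools (tool names as of current llama.cpp; they've been renamed across versions, so check your checkout):

# gated repo: accept the Gemma license on Hugging Face and run `huggingface-cli login` first
huggingface-cli download google/gemma-3-27b-it --local-dir gemma-3-27b-it
# convert the safetensors weights to a 16-bit GGUF, then quantize to Q4_K_M
python convert_hf_to_gguf.py gemma-3-27b-it --outtype f16 --outfile gemma-3-27b-it-f16.gguf
./llama-quantize gemma-3-27b-it-f16.gguf gemma-3-27b-it-Q4_K_M.gguf Q4_K_M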
Hardware that runs this
Cards with enough VRAM for at least one quantization of Gemma 3 27B: per the table above, roughly 20 GB for Q4_K_M and 34 GB for Q8_0.
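On NVIDIA cards you can check what you have with nvidia-smi (ships with the driver):

nvidia-smi --query-gpu=name,memory.total --format=csv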
Models worth comparing
Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.
Frequently asked
What's the minimum VRAM to run Gemma 3 27B?
About 20 GB for the Q4_K_M quantization (see the table above); a 24 GB card runs it fully on-GPU.
Can I use Gemma 3 27B commercially?
The Gemma license permits commercial use but is more restrictive than Apache-2.0 or the Llama license; review Google's terms before shipping.
What's the context length of Gemma 3 27B?
128K tokens.
How do I install Gemma 3 27B with Ollama?
ollama pull gemma3:27b-it-q4_K_M, then ollama run gemma3:27b-it-q4_K_M (see Quick start above).
Does Gemma 3 27B support images?
Yes. It's natively multimodal (vision-language); no separate adapter is required.
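A minimal sketch of image input through Ollama's HTTP API, assuming a server on the default port 11434 and a local photo.png; /api/generate accepts base64-encoded images in an images array:

IMG=$(base64 -w0 photo.png)   # on macOS: base64 -i photo.png
curl -s http://localhost:11434/api/generate -d "{
  \"model\": \"gemma3:27b-it-q4_K_M\",
  \"prompt\": \"Describe this image in one sentence.\",
  \"images\": [\"$IMG\"],
  \"stream\": false
}"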
Source: huggingface.co/google/gemma-3-27b-it
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.