Gemma · 27B parameters · Commercial OK · Multimodal

Gemma 3 27B

Pre-Gemma-4 flagship. Multimodal (4B and larger variants), 128K context, 140 languages. Strong daily driver on 24 GB cards.

License: Gemma Terms of Use · Released Mar 12, 2025 · Context: 131,072 tokens
Our verdict
By Fredoline Eruo · Last verified May 6, 2026
8.2/10
Positioning

Gemma 3 27B is Google's flagship open-weight model of 2025 — natively multimodal, 128K context, distilled from Gemini-class data. It's the right pick when you want Google's training distribution + multimodal in a single model that fits in 24 GB of VRAM.

Strengths
  • Native vision-language — single model, no separate adapter.
  • 128K context with reasonable recall — better than Llama 3.1 8B's nominal 128K.
  • Distillation from Gemini shows in writing quality and instruction polish.
Limitations
  • Gemma license is restrictive — terms more limiting than Apache or Llama; review for commercial use.
  • Slightly weaker on hard reasoning than Qwen 3 32B at similar VRAM.
  • No thinking-mode equivalent — a single standard instruct mode, with no reasoning toggle.
Real-world performance on RTX 4090
  • Q4_K_M (16.5 GB): 60–75 tok/s decode, TTFT ~130 ms — full GPU
  • Q5_K_M (19.4 GB): 50–62 tok/s
  • Q8_0 (29 GB): partial offload, 18–26 tok/s
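
If you want to sanity-check these numbers on your own card, Ollama's generate endpoint reports token counts and durations you can turn into throughput. A minimal sketch, assuming a local Ollama server on the default port 11434 and the q4_K_M tag pulled in the "Run this yourself" section below:

import requests

# Rough throughput check against a local Ollama server (default port 11434).
# Figures will vary with quantization, context size, and GPU.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma3:27b-it-q4_K_M",
        "prompt": "Summarize the plot of Hamlet in three sentences.",
        "stream": False,
    },
    timeout=600,
)
resp.raise_for_status()
data = resp.json()

# Ollama reports durations in nanoseconds.
decode_tps = data["eval_count"] / (data["eval_duration"] / 1e9)
prefill_ms = data.get("prompt_eval_duration", 0) / 1e6
print(f"decode: {decode_tps:.1f} tok/s, prompt eval: {prefill_ms:.0f} ms")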
Should you run this locally?

Yes, for users who want native multimodal + Google's training distribution + 24 GB single-card runtime. No, for users sensitive to license terms (Apache options exist) or who prioritize raw reasoning ceiling (Qwen 3 32B).

How it compares
  • vs Qwen 3 32B → Qwen wins on reasoning + license; Gemma wins on multimodality + writing polish. Pick by job.
  • vs Mistral Small 3 24B → Mistral wins on license simplicity; Gemma wins on multimodality.
  • vs Gemma 3 12B → 27B is materially smarter; pick 27B if VRAM allows.
  • vs Llama 3.3 70B → Llama 3.3 70B is smarter but ~3× slower on a 4090; Gemma 3 27B is the productivity pick at this VRAM.
Run this yourself
ollama pull gemma3:27b-it-q4_K_M
ollama run gemma3:27b-it-q4_K_M
Settings: Q4_K_M GGUF, 16384 ctx, full GPU on RTX 4090
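
Note that `ollama run` uses a smaller default context window than the 16384 listed above, so to actually get it you need to set num_ctx explicitly (in a Modelfile or per request). A minimal sketch of the per-request route over Ollama's HTTP API, assuming the same local setup:

import requests

# Request the 16384-token context from the settings above; without this,
# Ollama falls back to its smaller default num_ctx.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "gemma3:27b-it-q4_K_M",
        "messages": [
            # Placeholder prompt; swap in your own long document.
            {"role": "user", "content": "Summarize this document: <paste text here>"}
        ],
        "options": {"num_ctx": 16384},
        "stream": False,
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])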
Why this rating

8.2/10 — Google's 27B is a credible alternative in the dense mid-tier with native multimodal and a 128K context. Loses points to Qwen 3 32B (slightly larger, slightly stronger) and Mistral Small 3 24B (cleaner license).

Overview

Pre-Gemma-4 flagship. Multimodal (4B and larger variants), 128K context, 140 languages. Strong daily driver on 24 GB cards.

Strengths

  • Multimodal
  • Multilingual
  • 128K context

Weaknesses

  • Superseded by Gemma 4

Quantization variants

Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.

Quantization    File size    VRAM required
Q4_K_M          16.0 GB      20 GB
Q8_0            29.0 GB      34 GB
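
As a rough cross-check of those VRAM figures, you can treat required memory as the weights file plus a few GB of headroom for KV cache and runtime overhead; the table above implies roughly 4–5 GB. A back-of-the-envelope sketch (the 4 GB headroom is an assumption, not a measurement, and long contexts need more):

QUANTS_GB = {"Q4_K_M": 16.0, "Q5_K_M": 19.4, "Q8_0": 29.0}  # file sizes from this page

def fits(vram_gb, file_gb, headroom_gb=4.0):
    # Headroom covers KV cache and runtime overhead; 4 GB is an assumption
    # at the optimistic end of the table above.
    return file_gb + headroom_gb <= vram_gb

for name, size in QUANTS_GB.items():
    verdict = "fits" if fits(24.0, size) else "needs partial offload"
    print(f"{name}: {verdict} on a 24 GB card")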

Get the model

Ollama

One-line install

ollama run gemma3:27b

HuggingFace

Original weights

huggingface.co/google/gemma-3-27b-it

Source repository with the original weights — you'll need to quantize them yourself (e.g., convert to GGUF) to run on typical local setups.

Hardware that runs this

Cards with enough VRAM for at least one quantization of Gemma 3 27B.

Compare alternatives

Models worth comparing

Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.

Frequently asked

What's the minimum VRAM to run Gemma 3 27B?

20GB of VRAM is enough to run Gemma 3 27B at the Q4_K_M quantization (file size 16.0 GB). Higher-quality quantizations need more.

Can I use Gemma 3 27B commercially?

Yes — Gemma 3 27B ships under the Gemma Terms of Use, which permits commercial use but attaches more conditions than Apache-2.0 (including Google's prohibited-use policy). Always read the license text before deployment.

What's the context length of Gemma 3 27B?

Gemma 3 27B supports a context window of 131,072 tokens (128K).

How do I install Gemma 3 27B with Ollama?

Run `ollama pull gemma3:27b` to download, then `ollama run gemma3:27b` to start a chat session. The default quantization is Q4_K_M.

Does Gemma 3 27B support images?

Yes — Gemma 3 27B is multimodal and accepts text + vision inputs. Vision support requires a runner that handles its image-conditioning architecture.
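
If you run it through Ollama, the chat endpoint accepts base64-encoded images alongside the text of a message. A minimal sketch, assuming the default local server and the q4_K_M tag from above (the file path is a placeholder):

import base64
import requests

# Placeholder image path for illustration.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "gemma3:27b-it-q4_K_M",
        "messages": [
            {
                "role": "user",
                "content": "Describe what is in this picture.",
                "images": [image_b64],
            }
        ],
        "stream": False,
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])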

Source: huggingface.co/google/gemma-3-27b-it

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.