26B parameters · Commercial OK · Multimodal
Gemma 4 26B MoE
MoE variant of Gemma 4. Faster per token than the 31B dense model at similar quality on most tasks.
License: Gemma Terms of Use · Released Apr 2, 2026 · Context: 131,072 tokens
Overview
The mixture-of-experts (MoE) variant of Gemma 4. Only a fraction of the weights is active for each token, so it generates faster than the 31B dense model while matching its quality on most tasks.
Strengths
- MoE speed advantage
- Multilingual
- Multimodal
Weaknesses
- MoE routing variance: output quality can be less consistent across prompts than with a dense model
Quantization variants
Each quantization trades model quality against file size and VRAM use. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 16.0 GB | 20 GB |
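As a rough guide, required VRAM is the quantized file size plus working memory for the KV cache and activations. The sketch below estimates fit with an assumed 1.25× overhead factor; that factor is a rule of thumb, not a measured value.

```python
# Rough VRAM-fit check: quantized file size plus a working-memory margin
# for the KV cache and activations. The 1.25x factor is an assumed rule
# of thumb, not a measured value.

OVERHEAD = 1.25  # assumed multiplier over file size

def fits_in_vram(file_size_gb: float, vram_gb: float) -> bool:
    """Return True if a quantization plausibly fits in the given VRAM."""
    return file_size_gb * OVERHEAD <= vram_gb

# Q4_K_M from the table above: 16.0 GB file against a 20 GB budget.
print(fits_in_vram(16.0, 20.0))  # True (16.0 * 1.25 == 20.0)
```

For the Q4_K_M row above, 16.0 GB × 1.25 lands exactly on the 20 GB figure in the table.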
Get the model
Ollama
One-line install
ollama run gemma4:26b-moe
HuggingFace
Original weights
huggingface.co/google/gemma-4-26b-moe-it
Source repository — you'll need to quantize the weights yourself (for example, to GGUF with llama.cpp) before running them locally.
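Once pulled, the model is also reachable through Ollama's local REST API (port 11434 by default). A minimal sketch using only the standard library, assuming the `gemma4:26b-moe` tag above has been pulled and the Ollama server is running:

```python
# Minimal non-streaming request to a locally running Ollama server.
# Assumes `ollama pull gemma4:26b-moe` has already completed.
import json
import urllib.request

payload = {
    "model": "gemma4:26b-moe",
    "prompt": "Explain mixture-of-experts routing in one sentence.",
    "stream": False,  # one JSON object instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```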
Hardware that runs this
Cards with enough VRAM for at least one quantization of Gemma 4 26B MoE.
Compare alternatives
Models worth comparing
Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.
Same tier
Models in the same parameter band as this one
Step up
More capable — bigger memory footprint
Step down
Smaller — faster, runs on weaker hardware
Frequently asked
What's the minimum VRAM to run Gemma 4 26B MoE?
20 GB of VRAM is enough to run Gemma 4 26B MoE at the Q4_K_M quantization (file size 16.0 GB). Higher-quality quantizations need more.
Can I use Gemma 4 26B MoE commercially?
Yes — Gemma 4 26B MoE ships under the Gemma Terms of Use, which permits commercial use. Always read the license text before deployment.
What's the context length of Gemma 4 26B MoE?
Gemma 4 26B MoE supports a context window of 131,072 tokens (128K).
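Ollama serves models with a smaller default context than the model's maximum, so to use the full window you pass `num_ctx` in the request options. A minimal sketch (`num_ctx` is a standard Ollama option; note that a 131K-token KV cache needs considerably more memory than the 20 GB minimum quoted above for the weights alone):

```python
# Ask Ollama to allocate the model's full 131,072-token context window.
# Large num_ctx values grow the KV cache, so memory use rises well above
# the 20 GB minimum quoted for the weights alone.
import json
import urllib.request

payload = {
    "model": "gemma4:26b-moe",
    "prompt": "Summarize the document below. ...",  # long-context prompt
    "stream": False,
    "options": {"num_ctx": 131072},  # context window size in tokens
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```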
How do I install Gemma 4 26B MoE with Ollama?
Run `ollama pull gemma4:26b-moe` to download, then `ollama run gemma4:26b-moe` to start a chat session. The default quantization is Q4_K_M.
Does Gemma 4 26B MoE support images?
Yes — Gemma 4 26B MoE is multimodal and accepts text + vision inputs. Vision support requires a runner that handles its image-conditioning architecture.
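If your runner does support it, images are sent as base64 strings. A minimal sketch against Ollama's generate endpoint (`photo.jpg` is a placeholder path; whether vision works locally depends on the runner, as noted above):

```python
# Send an image to the model through Ollama's generate endpoint.
# `photo.jpg` is a placeholder; vision only works if the local runner
# supports this model's image-conditioning architecture.
import base64
import json
import urllib.request

with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

payload = {
    "model": "gemma4:26b-moe",
    "prompt": "Describe this image in one sentence.",
    "images": [image_b64],  # base64-encoded images, per Ollama's API
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```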
Source: huggingface.co/google/gemma-4-26b-moe-it
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.