Gemma · 27B parameters · Restricted · Multimodal

MedGemma 27B

A medical-specialist fine-tune of Gemma, trained on de-identified medical literature and imaging. Intended for research use under the Health AI Developer Foundations (HAI-DEF) terms.

License: Gemma Terms of Use (Health AI Developer Foundations) · Released May 20, 2025 · Context: 131,072 tokens

Overview

MedGemma 27B is a medical-specialist fine-tune of Gemma, trained on de-identified medical literature and imaging. It is multimodal (text + vision) and supports a 131,072-token context window. It is released for research use under the Health AI Developer Foundations (HAI-DEF) terms and is not cleared for clinical decision-making.

Strengths

  • Medical-domain accuracy

Weaknesses

  • Not for clinical decisions
  • License restrictions

Quantization variants

Each quantization trades some model quality for a smaller file size and lower VRAM use. Q4_K_M is the most popular starting point.

Quantization   File size   VRAM required
Q4_K_M         16.0 GB     20 GB
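As a rough fit check, total memory is roughly the weight file size plus runtime overhead for the KV cache and buffers. The sketch below encodes that rule of thumb; the 4 GB overhead figure is an assumption for illustration, not a measured value.

```python
# Rough VRAM fit check for a quantized model (a sketch, not a benchmark).
# The overhead figure is an assumption: KV cache, activations, and runtime
# buffers typically add a few GB on top of the weight file itself.

def fits_in_vram(file_size_gb: float, vram_gb: float, overhead_gb: float = 4.0) -> bool:
    """Return True if the weights plus assumed overhead fit in the card's VRAM."""
    return file_size_gb + overhead_gb <= vram_gb

# MedGemma 27B at Q4_K_M: 16.0 GB file, 20 GB VRAM recommended.
print(fits_in_vram(16.0, 20.0))   # weights + assumed overhead fit in 20 GB
print(fits_in_vram(16.0, 16.0))   # weights alone fill the card, no headroom
```

With these assumptions, the 20 GB figure in the table is the floor rather than a comfortable target; longer contexts or larger batch sizes push the overhead up.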

Get the model

HuggingFace

Original weights

huggingface.co/google/medgemma-27b-it

Source repository with the original weights; you will need to quantize them yourself.

Hardware that runs this

Cards with enough VRAM for at least one quantization of MedGemma 27B.

Compare alternatives

Models worth comparing

Models in the same parameter band, plus one tier above and one below, so you can decide what actually fits your hardware.

Frequently asked

What's the minimum VRAM to run MedGemma 27B?

20 GB of VRAM is enough to run MedGemma 27B at the Q4_K_M quantization (file size 16.0 GB). Higher-quality quantizations need more.

Can I use MedGemma 27B commercially?

MedGemma 27B is released under the Gemma Terms of Use (Health AI Developer Foundations), which has restrictions for commercial use. Review the license terms before using it in a product.

What's the context length of MedGemma 27B?

MedGemma 27B supports a context window of 131,072 tokens (128K, i.e. 128 × 1,024).
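Filling that window is not free: the KV cache grows linearly with sequence length. The back-of-envelope sketch below shows the worst case for a dense fp16 cache; the layer and head counts are assumed illustrative Gemma-style values (read the real ones from the model's config), and real runtimes reduce this substantially with sliding-window attention and quantized KV caches.

```python
# Back-of-envelope KV-cache size at full context (a sketch; the dimensions
# below are assumptions -- take the real values from the model's config file).

def kv_cache_gib(seq_len: int, n_layers: int, n_kv_heads: int,
                 head_dim: int, bytes_per_elem: int = 2) -> float:
    """Bytes for dense K and V caches across all layers, in GiB (fp16 default)."""
    total = 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem
    return total / 1024**3

# Full 131,072-token window with assumed Gemma-style dimensions:
print(round(kv_cache_gib(131_072, n_layers=62, n_kv_heads=16, head_dim=128), 1))  # -> 62.0
```

Even if the assumed dimensions are off, the linear scaling is the point: halving the context halves the cache, which is why runners expose a context-size flag.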

Does MedGemma 27B support images?

Yes — MedGemma 27B is multimodal and accepts text + vision inputs. Vision support requires a runner that handles its image-conditioning architecture.
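For runners built on the Hugging Face stack, multimodal prompts are typically passed as interleaved image and text parts in the chat-message format sketched below. The message layout is an assumption about your serving stack, and the image URL is hypothetical; loading the gated 27B weights is out of scope here, so only the payload is built.

```python
# Sketch of a multimodal chat payload in the Hugging Face "image-text-to-text"
# message format (an assumption about the serving stack; adapt to your runner).
# With the gated weights available, it would be consumed roughly like this:
#
#   from transformers import pipeline
#   pipe = pipeline("image-text-to-text", model="google/medgemma-27b-it")
#   out = pipe(text=messages)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/scan.png"},  # hypothetical URL
            {"type": "text", "text": "Describe the findings in this image."},
        ],
    }
]

# The runner interleaves image and text parts in the order given.
print(messages[0]["content"][1]["text"])
```

Runners that do not handle the image-conditioning architecture will either reject the image part or silently ignore it, so verify vision support before relying on it.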

Source: huggingface.co/google/medgemma-27b-it

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.