MedGemma 27B
Overview
MedGemma 27B is Google's medical-specialist fine-tune of Gemma, trained on de-identified medical literature and imaging data. It is released for research and development use under the Health AI Developer Foundations (HAI-DEF) terms and is not intended to drive clinical decisions on its own.
Strengths
- Strong accuracy on medical-domain tasks such as clinical text and medical question answering
Weaknesses
- Not validated for clinical decision-making
- License restrictions: research and development use under the HAI-DEF terms
Quantization variants
Each quantization trades model quality for a smaller file size and lower VRAM use. Q4_K_M is the most popular starting point; a runnable sketch follows the table.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 16.0 GB | 20 GB |
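To make the table concrete, here is a minimal sketch of loading a Q4_K_M build locally with the llama-cpp-python bindings. The GGUF file name is an assumption (you produce it yourself, as described under "Get the model" below) and the prompt is a placeholder.

```python
# Minimal sketch: load a hypothetical Q4_K_M GGUF of MedGemma 27B with
# llama-cpp-python and run a research-style prompt. The file path is an
# assumption; produce your own GGUF as described under "Get the model".
from llama_cpp import Llama

llm = Llama(
    model_path="medgemma-27b-it-Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=-1,  # offload all layers to the GPU (roughly the 20 GB figure above)
    n_ctx=4096,       # modest context window keeps the KV cache small
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the key findings in this radiology report: ..."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

Lowering n_gpu_layers lets a smaller card run the model with partial CPU offload, trading speed for VRAM.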
Get the model
HuggingFace
Original weights
Source repository with the full-precision checkpoint; you quantize it yourself to reach the footprint listed in the table above.
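As a rough sketch of that workflow, the snippet below pulls the original weights and notes the usual llama.cpp conversion steps. It assumes you have accepted the HAI-DEF terms on the repository and are logged in with a Hugging Face token; directory and file names are illustrative.

```python
# Sketch: download the full-precision weights, then convert and quantize
# them with llama.cpp's tooling. Paths and output names are assumptions.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="google/medgemma-27b-it",  # gated repo: accept the HAI-DEF terms first
    local_dir="medgemma-27b-it",
)
print(f"Weights downloaded to {local_dir}")

# Conversion and quantization happen outside Python, typically with llama.cpp:
#   python convert_hf_to_gguf.py medgemma-27b-it --outfile medgemma-27b-it-f16.gguf
#   ./llama-quantize medgemma-27b-it-f16.gguf medgemma-27b-it-Q4_K_M.gguf Q4_K_M
```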
Hardware that runs this
GPUs with enough VRAM to run at least one quantization of MedGemma 27B; with only Q4_K_M listed, that means roughly 20 GB or more.
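As a quick sanity check, a common rule of thumb is that a quantization needs roughly its file size in VRAM plus a few gigabytes of headroom for the KV cache and runtime buffers. The sketch below encodes that heuristic; the 4 GB overhead is an assumption that happens to match the Q4_K_M row above, and real usage varies with context length and batch size.

```python
# Rough heuristic only: file size + fixed overhead for KV cache, activations
# and runtime buffers. The 4 GB constant is an assumption that matches the
# Q4_K_M row above (16.0 GB file -> ~20 GB VRAM); it is not a measurement.
def estimate_vram_gb(file_size_gb: float, overhead_gb: float = 4.0) -> float:
    return file_size_gb + overhead_gb

def fits(card_vram_gb: float, file_size_gb: float) -> bool:
    return card_vram_gb >= estimate_vram_gb(file_size_gb)

print(fits(24.0, 16.0))  # 24 GB card vs. Q4_K_M -> True
print(fits(16.0, 16.0))  # 16 GB card -> False, no headroom for the cache
```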
Models worth comparing
Models in the same parameter band, plus one tier above and one below, so you can decide what actually fits your hardware.
Frequently asked
What's the minimum VRAM to run MedGemma 27B?
About 20 GB, which covers the Q4_K_M quantization listed above (a 16.0 GB file plus runtime overhead for the KV cache and buffers).
Can I use MedGemma 27B commercially?
The model is released for research and development use under the HAI-DEF terms; review those terms before any commercial or clinical deployment.
What's the context length of MedGemma 27B?
MedGemma 27B is built on Gemma 3 and inherits its 128K-token context window.
Does MedGemma 27B support images?
Yes. The model was trained on de-identified medical imaging as well as literature, and the instruction-tuned checkpoint linked below accepts images alongside text.
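For readers who want to try image input with the original weights rather than a local quantization, here is a hedged sketch using the transformers image-text-to-text pipeline. The image URL and prompt are placeholders, and loading the bfloat16 checkpoint needs far more GPU memory than the quantized figures above.

```python
# Sketch: multimodal inference with the original instruction-tuned checkpoint
# via the transformers image-text-to-text pipeline. The image URL and prompt
# are placeholders; this loads the full bfloat16 model, not a quantization.
import torch
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="google/medgemma-27b-it",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/chest_xray.png"},  # placeholder
            {"type": "text", "text": "Describe the notable findings in this chest X-ray."},
        ],
    },
]

result = pipe(text=messages, max_new_tokens=200)
print(result[0]["generated_text"])  # conversation including the model's reply
```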
Source: huggingface.co/google/medgemma-27b-it
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.