# PaliGemma 2 3B

## Overview

PaliGemma 2 pairs a Gemma 2 language model with a SigLIP vision encoder. It is shipped as a pretrained (pt) base checkpoint, designed to be fine-tuned on specific vision-language tasks rather than used out of the box.
## Family & lineage

How this model relates to others in its line. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tuning relationships.
## Strengths

- Designed for fine-tuning: the pretrained checkpoints are meant as a starting point for task-specific training
- Multiple resolutions: variants exist at several input image resolutions (this checkpoint is the 224 px variant)
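The resolution variants differ mainly in how many image tokens the vision encoder produces. Assuming the 14 px patch size of the SigLIP-So400m/14 encoder used by PaliGemma models (an assumption; the patch size is not stated on this page), a square input yields a (resolution / 14)² token grid:

```python
def image_tokens(resolution: int, patch_size: int = 14) -> int:
    """Number of vision tokens for a square input: (resolution // patch_size)^2."""
    side = resolution // patch_size
    return side * side

# 224 px -> a 16x16 grid of 256 image tokens
print(image_tokens(224))  # 256
```

Higher-resolution variants pay for their extra detail with quadratically more image tokens (448 px yields 1024), which is why they need more compute per image.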
## Weaknesses

- Base model: needs task-specific fine-tuning before it is useful for most applications
## Quantization variants

Each quantization trades model quality for smaller file size and lower VRAM use. Only the full-precision BF16 release is listed below; where GGUF quantizations exist, Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| BF16 | 6.0 GB | 8 GB |
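As a rule of thumb, a quantization's footprint is roughly parameters × bits-per-weight / 8, plus overhead for the KV cache and activations at runtime. A minimal sketch; the ~3e9 parameter count and the 20% overhead factor are assumptions for illustration, not figures from this page:

```python
def estimate_vram_gb(params: float, bits_per_weight: float, overhead: float = 0.2) -> float:
    """Rough VRAM estimate: weight bytes at the given bit width, plus a
    fixed fractional overhead for KV cache and activations (assumed 20%)."""
    weight_bytes = params * bits_per_weight / 8
    return weight_bytes * (1 + overhead) / 1e9

# BF16 is 16 bits per weight; for a ~3B-parameter model:
print(round(estimate_vram_gb(3e9, 16), 1))  # 7.2
```

That estimate lines up with the table above: a 6.0 GB BF16 file that wants about 8 GB of VRAM once runtime overhead is included.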
## Get the model

- HuggingFace: original weights. This is the source repository; no prebuilt quantizations are linked here, so quantize the weights yourself if you need a smaller variant.
## Hardware that runs this

Cards with enough VRAM for at least one quantization of PaliGemma 2 3B.
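To check whether a local NVIDIA card clears the bar, you can query total VRAM with `nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits` and compare it against the table above. A small sketch; the helper function and the sample output string are illustrative, not from this page:

```python
def total_vram_gb(nvidia_smi_output: str) -> float:
    """Parse `nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits`
    output (one MiB value per GPU line) and return the largest card's VRAM in GB."""
    mib_values = [int(line) for line in nvidia_smi_output.splitlines() if line.strip()]
    return max(mib_values) * 1024**2 / 1e9

# Illustrative output for a single 8 GiB card
sample = "8192\n"
print(total_vram_gb(sample) >= 8.0)  # True: enough for the BF16 build
```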
## Frequently asked

**What's the minimum VRAM to run PaliGemma 2 3B?**
Per the table above, the BF16 release needs about 8 GB of VRAM.

**Can I use PaliGemma 2 3B commercially?**
PaliGemma 2 is released under Google's Gemma Terms of Use, which permit commercial use subject to the license's use restrictions; review the terms before shipping a product on it.

**What's the context length of PaliGemma 2 3B?**
Not listed here; check the source model card on Hugging Face.

**Does PaliGemma 2 3B support images?**
Yes. It is a vision-language model: a SigLIP vision encoder feeds image tokens into the Gemma 2 language model.
Source: huggingface.co/google/paligemma2-3b-pt-224
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.
Before spending money, verify that PaliGemma 2 3B runs on your specific hardware.