GLM · 13.9B parameters · Restricted · Multimodal

GLM-4V 9B

GLM-4 with vision encoder. Strong on Chinese document Q&A; restricted commercial license.

License: GLM License · Released Jun 4, 2024 · Context: 8,192 tokens

Overview

GLM-4V 9B pairs the GLM-4 9B language model with a vision encoder, making it a multimodal model that accepts both text and image inputs. It is particularly strong on Chinese document Q&A. Released June 4, 2024 under the GLM License, which restricts commercial use, it supports an 8,192-token context window.

Family & lineage

How this model relates to others in its lineage. Family members share architecture and training-data roots; parent / children edges record direct distillation or fine-tune relationships.

Parent / base model
GLM-4 9B · 9B parameters
Consumer

Strengths

  • Chinese document Q&A
  • Vision-capable GLM

Weaknesses

  • Restricted license

Quantization variants

Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.

Quantization    File size    VRAM required
Q4_K_M          8.5 GB       12 GB
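The gap between file size (8.5 GB) and VRAM required (12 GB) comes from everything loaded besides the weights: the KV cache, activation buffers, and runtime overhead. A minimal sketch of that relationship, using an illustrative overhead factor (an assumption for this example, not a published formula):

```python
def estimate_vram_gb(file_size_gb: float, overhead_factor: float = 1.4) -> float:
    """Rough VRAM estimate for running a quantized model.

    overhead_factor (~1.3-1.5x) is an illustrative assumption covering the
    KV cache, activation buffers, and runtime overhead on top of the weights.
    """
    return round(file_size_gb * overhead_factor, 1)

# Q4_K_M weights are 8.5 GB; the table lists 12 GB of VRAM required.
print(estimate_vram_gb(8.5))  # 11.9, in line with the 12 GB figure
```

Actual usage varies with context length and runner, so treat this as a ballpark, not a guarantee.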

Get the model

HuggingFace

Original weights

huggingface.co/THUDM/glm-4v-9b

Source repository with original weights only; no prebuilt quantizations are provided, so you'll need to quantize the weights yourself.
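One way to fetch the original weights before quantizing is `huggingface_hub`'s `snapshot_download`. A minimal sketch, assuming `huggingface_hub` is installed; the repo id comes from the page above, while the local directory name is an arbitrary example:

```python
# Repo id taken from the source link above; local_dir is an example path.
REPO_ID = "THUDM/glm-4v-9b"

def fetch_original_weights(local_dir: str = "./glm-4v-9b") -> str:
    """Download the full original weights (tens of GB) for local quantization."""
    # Imported lazily so the helper can be defined without the package present.
    from huggingface_hub import snapshot_download  # pip install huggingface_hub
    return snapshot_download(repo_id=REPO_ID, local_dir=local_dir)

# fetch_original_weights()  # uncomment to download (large; needs disk space)
```

From there, conversion and quantization depend on your runner's tooling and on it supporting GLM-4V's vision architecture.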

Hardware that runs this

Cards with enough VRAM for at least one quantization of GLM-4V 9B.

Compare alternatives

Models worth comparing

Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.

Frequently asked

What's the minimum VRAM to run GLM-4V 9B?

12 GB of VRAM is enough to run GLM-4V 9B at the Q4_K_M quantization (file size 8.5 GB). Higher-quality quantizations need more.

Can I use GLM-4V 9B commercially?

GLM-4V 9B is released under the GLM License, which has restrictions for commercial use. Review the license terms before using it in a product.

What's the context length of GLM-4V 9B?

GLM-4V 9B supports a context window of 8,192 tokens (about 8K).
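That window is shared between your prompt and the model's output, so a generation request only fits if the prompt tokens plus the requested new tokens stay under the limit. A minimal budget check (the token counts below are illustrative; real counts come from the model's tokenizer):

```python
CONTEXT_WINDOW = 8192  # GLM-4V 9B's context length, per the spec above

def fits_in_context(prompt_tokens: int, max_new_tokens: int) -> bool:
    """Check whether a prompt plus the requested generation fits the window."""
    return prompt_tokens + max_new_tokens <= CONTEXT_WINDOW

print(fits_in_context(7000, 1000))  # True:  7000 + 1000 = 8000 <= 8192
print(fits_in_context(7500, 1000))  # False: 7500 + 1000 = 8500 >  8192
```

Since GLM-4V is multimodal, image inputs also consume context tokens; the exact count depends on the vision encoder, so leave headroom when budgeting.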

Does GLM-4V 9B support images?

Yes — GLM-4V 9B is multimodal and accepts text + vision inputs. Vision support requires a runner that handles its image-conditioning architecture.

Source: huggingface.co/THUDM/glm-4v-9b

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.