internlm
7B parameters
Commercial OK
Reviewed May 2026
InternLM 2.5 7B Chat
InternLM 2.5 mid-size chat. Apache 2.0; strong on math and Chinese.
License: Apache 2.0 · Released Jul 3, 2024 · Context: 1,048,576 tokens
Overview
InternLM 2.5 7B Chat is the mid-size chat model in the InternLM 2.5 family. It ships under the Apache 2.0 license, supports a 1,048,576-token context window, and is notably strong at math and Chinese-language tasks.
Strengths
- Apache 2.0
- Math + Chinese
Weaknesses
- Smaller ecosystem and community than alternatives such as Qwen 2.5 7B
Quantization variants
Each quantization level trades model quality for a smaller file and lower VRAM use. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 4.4 GB | 6 GB |
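The VRAM column above can be sanity-checked with a simple rule of thumb: the whole quantized file is loaded into VRAM, plus an allowance for the KV cache and runtime buffers. A minimal sketch, assuming a flat 1.5 GB overhead (my own rough figure; real usage varies with context length and inference backend):

```python
# Rough VRAM estimator for a quantized model file. The fixed
# overhead_gb figure is an assumption standing in for KV cache and
# runtime buffers, not a measured value.
def estimate_vram_gb(file_size_gb: float, overhead_gb: float = 1.5) -> float:
    # The whole quantized file is loaded into VRAM, plus overhead.
    return round(file_size_gb + overhead_gb, 1)

# Q4_K_M from the table above: 4.4 GB file, so about 6 GB of VRAM.
print(estimate_vram_gb(4.4))  # → 5.9
```

The result lands just under the 6 GB the table lists, which is why 6 GB cards are the practical floor for this quantization.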
Get the model
HuggingFace
Original weights
huggingface.co/internlm/internlm2_5-7b-chat
Source repository with the original weights; you will need to quantize the model yourself.
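Since only the original weights are published, a typical workflow is to pull the full repo locally before converting and quantizing it. A minimal sketch using the `huggingface_hub` package (the helper name `fetch_weights` and the disk estimate are my own, not from the model card):

```python
from huggingface_hub import snapshot_download  # pip install huggingface_hub

# Download the full internlm2_5-7b-chat repo into the local Hugging Face
# cache and return the directory path. Expect roughly 15-16 GB of disk
# for the unquantized 7B weights (an estimate, not a published figure).
def fetch_weights(repo_id: str = "internlm/internlm2_5-7b-chat") -> str:
    return snapshot_download(repo_id=repo_id)

if __name__ == "__main__":
    print(fetch_weights())
```

From the downloaded directory you can then run your quantization tool of choice to produce a GGUF file like the Q4_K_M variant listed above.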
Hardware that runs this
Cards with enough VRAM for at least one quantization of InternLM 2.5 7B Chat.
Frequently asked
What's the minimum VRAM to run InternLM 2.5 7B Chat?
6 GB of VRAM is enough to run InternLM 2.5 7B Chat at the Q4_K_M quantization (file size 4.4 GB). Higher-quality quantizations need more.
Can I use InternLM 2.5 7B Chat commercially?
Yes. InternLM 2.5 7B Chat ships under the Apache 2.0 license, which permits commercial use. Always read the license text before deployment.
What's the context length of InternLM 2.5 7B Chat?
InternLM 2.5 7B Chat supports a context window of 1,048,576 tokens (1,024K, commonly rounded to 1M).
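The unusual-looking token count is simply a power of two; a quick check:

```python
context_tokens = 1_048_576

# 1,048,576 is exactly 2**20, i.e. 1,024 * 1,024 tokens.
assert context_tokens == 2**20 == 1024 * 1024
print(f"{context_tokens // 1024}K")  # → 1024K, commonly rounded to "1M"
```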
Source: huggingface.co/internlm/internlm2_5-7b-chat
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.
Before you buy
Verify InternLM 2.5 7B Chat runs on your specific hardware before committing money.