RWKV 7 'Goose' 1.5B
License: Apache 2.0 · Released Feb 15, 2025 · Context: 1,048,576 tokens
Overview
RWKV 7 'Goose' at 1.5B parameters. The RWKV architecture replaces attention with a recurrent formulation, giving linear-time inference and constant memory use regardless of context length. Released under the Apache 2.0 license.
Strengths
- Linear inference cost
- Constant memory at any context length
- Apache 2.0
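The "linear inference cost" and "constant memory" claims above can be illustrated with a back-of-the-envelope comparison against a transformer's KV cache. The layer and dimension numbers below are hypothetical placeholders, not the real Goose 1.5B configuration, and the RWKV state is modeled as a single vector per layer (the real per-layer state is larger); the point is only the shape of the two curves.

```python
# Back-of-the-envelope inference-memory comparison (hypothetical dims).
# A transformer must cache one key and one value vector per layer for
# every past token, so its cache grows linearly with context length.
# An RNN-style model like RWKV carries a fixed-size recurrent state
# instead, so context length never appears in its memory formula.

def transformer_kv_cache_bytes(ctx_len, n_layers=24, d_model=2048, bytes_per_val=2):
    # 2x for keys and values: one d_model-sized vector each, per layer, per token.
    return 2 * n_layers * d_model * bytes_per_val * ctx_len

def rwkv_state_bytes(ctx_len, n_layers=24, d_model=2048, bytes_per_val=2):
    # Fixed-size state per layer; ctx_len is accepted but unused.
    return n_layers * d_model * bytes_per_val

for ctx in (1_024, 32_768, 1_048_576):
    kv = transformer_kv_cache_bytes(ctx) / 2**30
    st = rwkv_state_bytes(ctx) / 2**20
    print(f"ctx={ctx:>9,}  transformer KV cache ~ {kv:7.2f} GiB  rwkv state ~ {st:.2f} MiB")
```

At the full 1,048,576-token context, the toy transformer's cache runs to hundreds of GiB while the toy RWKV state stays fixed, which is what makes million-token contexts practical on small hardware.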
Weaknesses
- Lower output quality than transformer models in the same size class
- Smaller ecosystem
Quantization variants
Each quantization trades model quality for file size and VRAM. Only one variant is currently listed for this model.
| Quantization | File size | VRAM required |
|---|---|---|
| Q5_K_M | 1.1 GB | 2 GB |
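As a rough rule of thumb (an assumption of this sketch, not an official sizing formula), a quantized model needs about its file size in VRAM plus some working headroom for activations and runtime buffers. A minimal fit-check:

```python
def fits_in_vram(file_size_gb, vram_gb, overhead_gb=0.8):
    """Rough fit check: the weights must load into VRAM with headroom
    for activations and runtime buffers. overhead_gb is a guess used
    for illustration, not a measured or documented value."""
    return file_size_gb + overhead_gb <= vram_gb

# Q5_K_M from the table above: a 1.1 GB file on a 2 GB card.
print(fits_in_vram(1.1, 2.0))  # fits
print(fits_in_vram(1.1, 1.5))  # does not fit
```

Real overhead varies with context length in use, batch size, and runtime, so treat the threshold as a starting point rather than a guarantee.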
Get the model
HuggingFace
Original weights
huggingface.co/BlinkDL/rwkv-7-world
Source repository with the original weights; you will need to quantize them yourself.
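A hedged sketch of what quantizing the weights yourself can look like with llama.cpp, assuming your llama.cpp build supports the RWKV v7 architecture and you have a HuggingFace-format checkpoint directory (the paths and output filenames below are placeholders):

```shell
# Clone llama.cpp, which provides the conversion script and quantizer.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp

# 1. Convert the HF-format checkpoint to a full-precision GGUF.
#    ./rwkv7-goose-1.5b is a placeholder path to the downloaded weights.
python convert_hf_to_gguf.py ./rwkv7-goose-1.5b \
    --outfile rwkv7-1.5b-f16.gguf --outtype f16

# 2. Quantize to Q5_K_M, the variant listed in the table above.
#    (Build llama.cpp first; the binary may live under build/bin/.)
./llama-quantize rwkv7-1.5b-f16.gguf rwkv7-1.5b-Q5_K_M.gguf Q5_K_M
```

Note that the BlinkDL repository linked above hosts raw .pth checkpoints; the conversion script expects a HuggingFace-format directory, so an intermediate conversion step may be required depending on which copy of the weights you download.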
Hardware that runs this
Cards with enough VRAM for at least one quantization of RWKV 7 'Goose' 1.5B.
Compare alternatives
Models worth comparing
Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.
Same tier
Models in the same parameter band as this one
Step up
More capable — bigger memory footprint
Step down
Smaller — faster, runs on weaker hardware
No reviewed models in the next tier down yet.
Frequently asked
What's the minimum VRAM to run RWKV 7 'Goose' 1.5B?
2 GB of VRAM is enough to run RWKV 7 'Goose' 1.5B at the Q5_K_M quantization (file size 1.1 GB). Higher-quality quantizations need more.
Can I use RWKV 7 'Goose' 1.5B commercially?
Yes. RWKV 7 'Goose' 1.5B ships under the Apache 2.0 license, which permits commercial use. Always read the license text before deployment.
What's the context length of RWKV 7 'Goose' 1.5B?
RWKV 7 'Goose' 1.5B supports a context window of 1,048,576 tokens (1,024 × 1,024, roughly one million).
Source: huggingface.co/BlinkDL/rwkv-7-world
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.