Aya 23 35B
Aya 23 at 35B. Built on Cohere's Command-R lineage. Non-commercial.
Positioning
Cohere For AI's Aya 23 35B is the direct precursor to Aya Expanse 32B and one of the foundational open-weight multilingual research models: 35 billion parameters, dense, instruction-tuned across 23 languages, released under CC-BY-NC-4.0. Aya 23 was built on the Command R 35B base with the Aya multilingual pretraining + instruction-tuning recipe, establishing the Cohere "balance over breadth" multilingual approach that Aya Expanse later refined.
Strengths
- Strong multilingual at the 35B tier. Best-in-class for the parameter count on 23 languages including Arabic, Korean, Japanese, Vietnamese, Turkish, Hebrew.
- Same memory + inference profile as Command R 35B: fits 24 GB at Q4-Q5 (RTX 4090, used 3090) or 48 GB at 8-bit (RTX 6000 Ada, L40S). FP16 weights alone run ~70 GB, beyond a single 48 GB card.
- Conservative instruction-tuning — predictable behavior for production translation + multilingual chat.
- Unusual data transparency: Cohere documented the training data composition more openly than most frontier labs.
Limitations
- Same CC-BY-NC-4.0 license constraint as Command R. Production commercial deployments require a license from Cohere.
- Surpassed by Aya Expanse 32B. Aya Expanse is the architectural successor with refined instruction-tuning + slightly better multilingual depth.
- Reasoning trails Llama 3.1 70B / Qwen 3 32B. The multilingual focus trades general capability for cross-language consistency.
- English-only quality is below that of similar-size general-purpose models.
- Short context: an 8K window, with no long-context capability.
Real-world performance
- vs Aya Expanse 32B: Aya Expanse is the strict generational upgrade. Pick Aya Expanse for new deployments; Aya 23 35B only when you specifically need to match an existing Aya 23 deployment.
- vs Command R 35B: Same 35B base; Command R is RAG-citation-tuned, Aya 23 is multilingual-tuned.
- vs Llama 3.1 70B: Llama wins for English-only general tasks at larger param count + permissive license.
- vs Aya 23 8B: 8B sibling at lower capability tier for cheaper inference.
Should you run this locally?
Yes if you have an existing Aya 23 deployment and need to match it for reproducibility, if you specifically need 35B-class multilingual chat for research or other non-commercial use, or if you're philosophically aligned with Cohere For AI's open multilingual research mission.
No if you're starting fresh: pick Aya Expanse 32B (architectural successor) or Command R 35B (RAG-citation focus). Both are newer + sit in the same memory tier.
How it compares
- vs Aya Expanse 32B: Strict upgrade.
- vs Aya 23 8B: Smaller sibling.
- vs Command R 35B: Same base, different specialization (RAG vs multilingual).
- vs Qwen 3 32B: Qwen 3 stronger overall + permissive license; Aya 23 stronger multilingual.
Run this yourself
- Single 24 GB GPU at Q4-Q5: RTX 4090, used 3090.
- Single 48 GB GPU at 8-bit: RTX 6000 Ada, L40S.
- Apple Silicon at FP16 (~70 GB of weights): Mac Studio M3 Ultra, or a high-memory MacBook Pro M4 Max.
- Vendor: CohereForAI/aya-23-35B on Hugging Face.
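If you're loading the source weights directly, here's a minimal sketch with Hugging Face transformers + bitsandbytes, assuming a single ~24 GB card; the 4-bit NF4 config and the sample prompt are illustrative choices, not anything the model card mandates:

```python
# Minimal sketch: Aya 23 35B in 4-bit NF4 so the weights fit one ~24 GB GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "CohereForAI/aya-23-35B"  # source repository on Hugging Face

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# The tokenizer ships a chat template; apply_chat_template inserts the
# Cohere turn tokens so you don't hardcode them.
messages = [{"role": "user", "content": "Translate to Turkish: The weather is nice today."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```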
Family & lineage
How this model relates to others in its lineage. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tune relationships.
Strengths
- 23 languages at workstation tier
Weaknesses
- Non-commercial license
Quantization variants
Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 21.0 GB | 24 GB |
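For the GGUF route, a hedged llama-cpp-python sketch; the file name below is an assumption, so point `model_path` at whichever Q4_K_M conversion you actually download:

```python
# Sketch: run a Q4_K_M GGUF of Aya 23 35B with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="aya-23-35B-Q4_K_M.gguf",  # assumed local file name
    n_gpu_layers=-1,  # offload every layer; needs the full 24 GB budget
    n_ctx=8192,       # Aya 23's full 8K context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Translate to Korean: good morning."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```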
Get the model
HuggingFace
Original weights
Source repository with the original FP16 weights; quantize them yourself (or pick up a community GGUF conversion).
Hardware that runs this
Cards with enough VRAM for at least one quantization of Aya 23 35B.
Frequently asked
What's the minimum VRAM to run Aya 23 35B?
24 GB, using the Q4_K_M quantization (21.0 GB file) on a card like an RTX 4090 or a used 3090.
Can I use Aya 23 35B commercially?
Not under the default CC-BY-NC-4.0 license; commercial deployment requires a separate license from Cohere.
What's the context length of Aya 23 35B?
8K tokens (8,192), with no extended-context variant.
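The VRAM answers above are easy to sanity-check with weights-only arithmetic; this sketch ignores KV cache and runtime overhead, and treats ~4.8 bits/weight as a rough Q4_K_M average rather than an exact spec:

```python
# Back-of-the-envelope weight memory for a 35B dense model.
def weight_gib(params_billion: float, bits_per_weight: float) -> float:
    """GiB of weight memory: params * bits / 8 bytes, no KV cache or overhead."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1024**3

print(f"Q4_K_M (~4.8 bpw): {weight_gib(35, 4.8):.1f} GiB")   # ~19.6 -> 21 GB file, 24 GB card
print(f"8-bit   (8.0 bpw): {weight_gib(35, 8.0):.1f} GiB")   # ~32.6 -> fits a 48 GB card
print(f"FP16   (16.0 bpw): {weight_gib(35, 16.0):.1f} GiB")  # ~65.2 -> Apple unified-memory territory
```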
Source: huggingface.co/CohereForAI/aya-23-35B
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.