35B parameters · Restricted · Reviewed May 2026

Aya 23 35B

Aya 23 at 35B. Built on Cohere's Command-R lineage. Non-commercial.

License: CC-BY-NC 4.0 · Released May 23, 2024 · Context: 8,192 tokens

Our verdict

By Fredoline Eruo · Verified May 8, 2026 · Unrated

Positioning

Cohere For AI's Aya 23 35B is the direct precursor to Aya Expanse 32B and one of the foundational open-weight multilingual research models: a dense 35-billion-parameter model, instruction-tuned across 23 languages and released under CC-BY-NC 4.0. Aya 23 was built on the Command R 35B pretrained base with the Aya multilingual instruction-tuning recipe, establishing the Cohere "depth over breadth" multilingual approach that Aya Expanse later refined.

Strengths

  • Strong multilingual performance at the 35B tier. Best-in-class for the parameter count across its 23 languages, including Arabic, Korean, Japanese, Vietnamese, Turkish, and Hebrew.
  • Same memory + inference profile as Command R 35B: fits a 24 GB card at Q4-Q5 (RTX 4090, used RTX 3090) or a 48 GB card at 8-bit (RTX 6000 Ada, L40S); full FP16 weights run to roughly 70 GB.
  • Conservative instruction-tuning — predictable behavior for production translation + multilingual chat.
  • Unusual data transparency. The Aya training collections were released under Apache 2.0, and Cohere documented the data composition more openly than most frontier labs.

Limitations

  • CC-BY-NC 4.0 license, the same constraint as the rest of the Aya line. Production commercial deployments require a separate license from Cohere.
  • Surpassed by Aya Expanse 32B. Aya Expanse is the architectural successor with refined instruction-tuning + slightly better multilingual depth.
  • Reasoning trails Llama 3.1 70B / Qwen 3 32B. The multilingual focus trades general capability for cross-language consistency.
  • English-only quality trails similar-size general-purpose models.
  • Short context. The window tops out at 8,192 tokens, well behind the 128K now common at this tier.

Real-world performance

  • vs Aya Expanse 32B: Aya Expanse is the strict generational upgrade. Pick Aya Expanse for new deployments; Aya 23 35B only when you specifically need to match an existing Aya 23 deployment.
  • vs Command R 35B: Same 35B base; Command R is RAG-citation-tuned, Aya 23 is multilingual-tuned.
  • vs Llama 3.1 70B: Llama wins for English-only general tasks at larger param count + permissive license.
  • vs Aya 23 8B: 8B sibling at lower capability tier for cheaper inference.

Should you run this locally?

Yes, if you have an existing Aya 23 deployment and need to match it for reproducibility, if you specifically need 35B-class multilingual chat for research or other non-commercial work, or if you're philosophically aligned with Cohere For AI's open multilingual research mission.

No, if you're starting fresh: pick Aya Expanse 32B (the architectural successor) or Command R 35B (RAG-citation focus). Both are newer and sit in the same memory tier.

How it compares

  • vs Aya Expanse 32B: Strict upgrade.
  • vs Aya 23 8B: Smaller sibling.
  • vs Command R 35B: Same base, different specialization (RAG vs multilingual).
  • vs Qwen 3 32B: Qwen 3 stronger overall + permissive license; Aya 23 stronger multilingual.

Run this yourself

  • Single 24 GB GPU at Q4-Q5: RTX 4090, used 3090.
  • Single 48 GB GPU at 8-bit: RTX 6000 Ada, L40S.
  • Apple Silicon at FP16 (roughly 70 GB of weights): Mac Studio M3 Ultra or a high-memory MacBook Pro M4 Max.
  • Vendor: CohereForAI/aya-23-35B on Hugging Face (see the loading sketch below).
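
If you're loading the original weights rather than a GGUF, a 4-bit load through transformers + bitsandbytes is the usual way to fit the model on a single 24 GB card. A minimal sketch, assuming a CUDA box, an accepted license on the gated Hugging Face repo, and recent transformers/accelerate/bitsandbytes installs; exact memory behavior depends on your versions:

```python
# Minimal 4-bit loading sketch for CohereForAI/aya-23-35B.
# Assumes: transformers, accelerate, bitsandbytes installed, a CUDA GPU,
# and that you've accepted the model license on Hugging Face (gated repo).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "CohereForAI/aya-23-35B"

# NF4 with bf16 compute keeps the 35B weights near the ~20 GB mark,
# leaving headroom for KV cache on a 24 GB card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",  # let accelerate place layers on the GPU
)

# Aya 23 ships a chat template; apply it rather than formatting raw strings.
messages = [{"role": "user", "content": "Translate to Turkish: The weather is nice today."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```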


Family & lineage

How this model relates to others in its lineage. Family members share architecture and training-data roots; parent / children edges record direct distillation or fine-tune relationships.

Parent / base model
  • Aya 23 8B · 8B · Consumer
Family siblings (aya)
  • Aya 23 8B · 8B · Consumer
  • Aya Expanse 32B · 32B · Workstation
  • Aya 23 35B · 35B · You are here

Strengths

  • 23 languages at workstation tier

Weaknesses

  • Non-commercial license

Quantization variants

Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point; the sketch after the table shows where the numbers come from.

Quantization   File size   VRAM required
Q4_K_M         21.0 GB     24 GB
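
The file sizes follow directly from bits per weight. A back-of-envelope sketch; the bits-per-weight figures are approximate averages for common llama.cpp quant formats, not exact values, and KV cache and activations add overhead on top:

```python
# Back-of-envelope weight-memory estimate for a 35B dense model.
# Bits-per-weight values are approximate averages for common formats;
# real GGUF files vary slightly, and KV cache / activations add overhead.
PARAMS = 35e9

FORMATS = {
    "FP16":   16.0,  # ~70 GB: multi-GPU or Apple unified memory
    "Q8_0":    8.5,  # ~37 GB: 48 GB cards
    "Q5_K_M":  5.7,  # ~25 GB: borderline on 24 GB without offload
    "Q4_K_M":  4.8,  # ~21 GB: the 24 GB sweet spot
}

for name, bpw in FORMATS.items():
    gb = PARAMS * bpw / 8 / 1e9
    print(f"{name:8s} ~{gb:5.1f} GB of weights")
```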

Get the model

HuggingFace

Original weights

huggingface.co/CohereForAI/aya-23-35B

Source repository with the original weights. No prebuilt GGUF quantizations are published, so you quantize directly; a sketch of that flow follows.
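
Quantizing yourself means the standard llama.cpp flow: convert the Hugging Face weights to a full-precision GGUF, then quantize that file. A hedged sketch of the two steps via subprocess; the script and binary names match recent llama.cpp releases but can drift, and every path here is a placeholder:

```python
# Sketch of the llama.cpp convert-then-quantize flow for Aya 23 35B.
# Assumes a llama.cpp checkout built with its tools, and the HF weights
# already downloaded locally (e.g. via `huggingface-cli download`).
# Script/binary names are correct for recent llama.cpp releases but may change.
import subprocess

HF_DIR = "models/aya-23-35B"          # local snapshot of CohereForAI/aya-23-35B
F16_GGUF = "aya-23-35B-f16.gguf"      # intermediate full-precision file (~70 GB)
Q4_GGUF = "aya-23-35B-Q4_K_M.gguf"    # final ~21 GB quantized file

# Step 1: HF safetensors -> GGUF at f16.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", HF_DIR,
     "--outfile", F16_GGUF, "--outtype", "f16"],
    check=True,
)

# Step 2: f16 GGUF -> Q4_K_M.
subprocess.run(
    ["./llama-quantize", F16_GGUF, Q4_GGUF, "Q4_K_M"],
    check=True,
)
```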

Hardware that runs this

Cards with enough VRAM for at least one quantization of Aya 23 35B. A quick script to check your own card follows the list.

  • NVIDIA GB200 NVL72 · 13,824 GB
  • AMD Instinct MI355X · 288 GB
  • AMD Instinct MI325X · 256 GB
  • AMD Instinct MI300X · 192 GB
  • NVIDIA B200 · 192 GB
  • NVIDIA H100 NVL · 188 GB
  • NVIDIA H200 · 141 GB
  • Intel Gaudi 3 · 128 GB
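
To check your own machine against the 24 GB Q4_K_M bar, torch's CUDA APIs are enough. A minimal sketch; it only covers NVIDIA GPUs, and on Apple Silicon you'd check unified memory instead:

```python
# Quick local check: does this machine's GPU clear the 24 GB bar for
# Q4_K_M? CUDA-only; Apple Silicon reports memory differently under MPS.
import torch

REQUIRED_GB = 24  # Q4_K_M requirement from the table above

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_gb = props.total_memory / 1e9
    verdict = "yes" if total_gb >= REQUIRED_GB else "no"
    print(f"{props.name}: {total_gb:.0f} GB VRAM -> runs Q4_K_M: {verdict}")
else:
    print("No CUDA GPU detected.")
```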

Frequently asked

What's the minimum VRAM to run Aya 23 35B?

24 GB of VRAM is enough to run Aya 23 35B at the Q4_K_M quantization (file size 21.0 GB). Higher-quality quantizations need more.

Can I use Aya 23 35B commercially?

Aya 23 35B is released under the CC-BY-NC 4.0 license, which restricts commercial use. Review the license terms before using it in a product.

What's the context length of Aya 23 35B?

Aya 23 35B supports a context window of 8,192 tokens (8K).
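
If you're unsure whether a prompt fits, count tokens with the model's own tokenizer rather than guessing from character counts. A quick sketch; the repo is gated, so this assumes you've accepted the license and authenticated with `huggingface-cli login`:

```python
# Count tokens against Aya 23 35B's 8,192-token context window.
# Assumes the gated CohereForAI/aya-23-35B repo is accessible
# (license accepted on Hugging Face, CLI login done).
from transformers import AutoTokenizer

CONTEXT_WINDOW = 8192
tokenizer = AutoTokenizer.from_pretrained("CohereForAI/aya-23-35B")

def fits(prompt: str, max_new_tokens: int = 512) -> bool:
    """True if the prompt plus the planned generation fits the window."""
    n = len(tokenizer(prompt)["input_ids"])
    return n + max_new_tokens <= CONTEXT_WINDOW

print(fits("Merhaba! Bugün hava çok güzel. " * 100))
```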

Source: huggingface.co/CohereForAI/aya-23-35B

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.

Compare alternatives

Models worth comparing

Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.

Same tier
Models in the same parameter band as this one
  • Qwen 3 30B-A3B · qwen · 30B · unrated
  • Gemma 4 31B Dense · gemma · 31B · unrated
  • Nemotron 3 Nano (30B-A3B) · other · 30B · unrated
  • DeepSeek Coder V3 · deepseek · 33B · unrated
Step up
More capable, bigger memory footprint
  • Llama 3.3 70B Instruct · llama · 70B · 9.1/10
  • DeepSeek R1 Distill Llama 70B · deepseek · 70B · 9.0/10
Step down
Smaller: faster, runs on weaker hardware
  • DeepSeek V3 Lite (16B MoE) · deepseek · 16B · unrated
  • Mistral Small 3 24B · mistral · 24B · 8.4/10