Mistral

by Mistral AI

Text & Reasoning · Mixed (open + closed variants) · Apache 2.0 (open) + Mistral Commercial (closed)

Mistral's mixed open + closed family. Mistral 7B + Mixtral 8x22B are the open-weight standards; Mistral Large + Codestral are commercial. Codestral Mamba 7B introduced state-space models to production code workflows.

Best entry point for local use

Start with Mistral Small 22B at Q4_K_M via Ollama — it fits on a single RTX 4090 24 GB, delivers strong European-language token efficiency (32K SentencePiece BPE vocab), and posts competitive reasoning scores (MMLU 80%). The 22B sits at Mistral's best density-to-performance point: stronger than Llama 3.1 8B on multilingual tasks and easier to deploy than Mixtral's MoE variants. If your VRAM budget is under 12 GB, use Mistral 7B v0.3 at Q4 (~5 GB) — it runs on a MacBook Pro M4 Max at 30+ tok/s and remains competitive for general assistant workloads. Skip Mixtral 8x22B for a first deployment — the MoE adds serving complexity without proportional quality gains over the dense 22B. Skip Codestral Mamba unless you specifically need O(1) per-token long-context inference — the Mamba architecture has narrower runtime support.
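To sanity-check those VRAM numbers, here is a back-of-envelope estimate — a minimal sketch assuming ~4.85 effective bits per weight for Q4_K_M, a 56-layer / 8-KV-head GQA layout for the 22B, and a flat 1.5 GB runtime overhead. The architecture figures and overhead are illustrative assumptions, not measured values from our benchmarks.

```python
# Back-of-envelope VRAM check: does a Q4_K_M quant fit in 24 GB?
# Architecture numbers below are assumptions for illustration only.

def weight_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate in-VRAM size of quantized weights, in GiB."""
    return params_b * 1e9 * bits_per_weight / 8 / 1024**3

def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                context_tokens: int, bytes_per_elem: int = 2) -> float:
    """FP16 KV cache: 2 (K and V) * layers * kv_heads * head_dim per token."""
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
    return per_token * context_tokens / 1024**3

# Mistral Small 22B at Q4_K_M (~4.85 effective bits/weight -- assumption)
weights = weight_gb(22.2, 4.85)                      # ~12.5 GB
kv = kv_cache_gb(n_layers=56, n_kv_heads=8,          # assumed GQA config
                 head_dim=128, context_tokens=8192)  # ~1.8 GB
overhead = 1.5                                       # CUDA context, activations, buffers
print(f"22B Q4_K_M: {weights:.1f} + {kv:.1f} + {overhead:.1f} "
      f"= {weights + kv + overhead:.1f} GB of 24 GB")

# Mistral 7B v0.3 at Q4 for sub-12 GB cards
print(f"7B Q4: ~{weight_gb(7.25, 4.85):.1f} GB weights")
```

On those assumptions the 22B lands around 16 GB total, which is why a 24 GB card keeps comfortable headroom for longer contexts.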

Deployment guidance

  • Single-user local: Ollama + mistral:22b Q4_K_M on an RTX 4090 24 GB.
  • Multi-user serving: vLLM with AWQ 4-bit on 2× L40S — Mistral's sliding window attention (SWA) keeps the KV cache manageable at high concurrency. A minimal serving sketch follows this list.
  • Mixtral 8x7B MoE: vLLM 0.5.4+ on 2× RTX 4090 with expert parallelism. Only 12.9B parameters are active per token, but every expert must stay resident, so the full model still needs ~30 GB of VRAM at Q4.
  • Mobile/edge: llama.cpp with Mistral 7B Q4_0 on Snapdragon X Elite — ~20 tok/s.
  • Codestral Mamba: requires mamba.c CUDA kernels — standard transformer engines do not support the Mamba architecture.
  • Tokenizer caveat: Mistral's European-optimized tokenizer means English throughput is ~12% lower token-for-token vs Llama. See the GPU buyer guide (the same GPU class applies).
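For the multi-user case, a minimal vLLM sketch under stated assumptions: the checkpoint ID is a placeholder for whatever AWQ quant of Mistral Small you actually pull, and the prompt and sampling settings are arbitrary; the quantization, tensor_parallel_size, max_model_len, and gpu_memory_utilization arguments are the load-bearing parts.

```python
# Minimal vLLM sketch for the 2x L40S / AWQ 4-bit multi-user case above.
# The model ID is a placeholder -- point it at the AWQ checkpoint you use.
from vllm import LLM, SamplingParams

llm = LLM(
    model="path-or-hf-id/Mistral-Small-Instruct-AWQ",  # placeholder checkpoint
    quantization="awq",           # 4-bit AWQ weights
    tensor_parallel_size=2,       # split weights across the two GPUs
    max_model_len=32768,          # cap context to bound KV-cache growth
    gpu_memory_utilization=0.90,  # leave headroom for graphs/buffers
)

params = SamplingParams(temperature=0.3, max_tokens=256)
outputs = llm.generate(["Summarize sliding window attention in two sentences."], params)
print(outputs[0].outputs[0].text)
```

tensor_parallel_size=2 splits the weights across both cards, and capping max_model_len bounds per-request KV-cache growth — at high concurrency that cap matters more than raw weight size.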

Featured models

Models in this family with our verdicts

  • Codestral 22B
  • Codestral Mamba 7B

Recommended runtimes

  • llama.cpp
  • vLLM

Related families

Llama

Related — keep moving

Compare hardware
  • RTX 3090 vs RTX 4090 →
  • RTX 4090 vs RTX 5090 →
Buyer guides
  • Best GPU for local AI →
  • Best laptop for local AI →
  • Best Mac for local AI →
  • Best used GPU for local AI →
  • Will it run on my hardware? →
When it doesn't work
  • CUDA out of memory →
  • Ollama running slowly →
  • ROCm not detected →
  • Model keeps crashing →
Runtimes that fit
  • llama.cpp →
  • vLLM →
Alternatives
Llama
Before you buy

Verify Mistral runs on your specific hardware before committing money.

  • Will it run on my hardware? →
  • Custom hardware comparison →
  • GPU recommender (4 questions) →