mistral
24B parameters
Restricted

Mistral Medium 3 24B (dense)

Dense variant in the Mistral Medium 3 family, released under a research license (non-commercial use only). Same training data as the MoE flagship, but in a smaller dense package.

License: Mistral Research License · Released Apr 29, 2026 · Context: 262,144 tokens

Overview

Mistral Medium 3 24B is the dense variant in the Mistral Medium 3 family, trained on the same data as the MoE flagship but packaged as a single 24B-parameter dense model. It is released under the Mistral Research License, which does not permit commercial use, and supports a 262,144-token context window.

Strengths

  • Mistral instruction-following at dense 24B

Weaknesses

  • Research license blocks commercial use

Quantization variants

Each quantization trades model quality for a smaller file size and a lower VRAM requirement. Q4_K_M is the most popular starting point.

Quantization    File size    VRAM required
Q4_K_M          14.0 GB      18 GB
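As a rough rule of thumb (our assumption, not a published formula), the VRAM a GGUF quantization needs is its file size plus roughly 25% headroom for the KV cache and activations. A minimal sketch:

```python
import math

def estimate_vram_gb(file_size_gb: float, overhead: float = 1.25) -> int:
    """Rough VRAM estimate: file size plus ~25% headroom for the
    KV cache and activations (a rule-of-thumb assumption, not a
    published formula). Rounded up to the nearest GB."""
    return math.ceil(file_size_gb * overhead)

# Q4_K_M of Mistral Medium 3 24B (dense): 14.0 GB on disk
print(estimate_vram_gb(14.0))  # matches the 18 GB figure in the table above
```

Longer contexts inflate the KV cache, so treat the 25% figure as a floor rather than a guarantee.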

Get the model

HuggingFace

Original weights

huggingface.co/mistralai/Mistral-Medium-3-24B-dense

Source repository with original weights only: you will need to quantize the model yourself.
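Since the repository ships only the original weights, one common route is llama.cpp's tooling: download, convert to GGUF, then quantize. Sketched below; exact script and binary names vary between llama.cpp releases, so check your local checkout.

```shell
# Download the original weights (repo name taken from this page)
huggingface-cli download mistralai/Mistral-Medium-3-24B-dense --local-dir ./medium3-24b

# Convert to GGUF with llama.cpp's converter, then quantize to Q4_K_M
python convert_hf_to_gguf.py ./medium3-24b --outfile medium3-24b-f16.gguf
./llama-quantize medium3-24b-f16.gguf medium3-24b-Q4_K_M.gguf Q4_K_M
```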

Hardware that runs this

Cards with enough VRAM for at least one quantization of Mistral Medium 3 24B (dense).
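With an 18 GB floor for Q4_K_M, a quick filter over candidate cards shows what qualifies. The list below is illustrative (VRAM figures are the cards' standard specs):

```python
# VRAM in GB for a few common consumer cards (standard specs)
cards = {
    "RTX 4090": 24,
    "RTX 3090": 24,
    "RTX 4080": 16,
    "RTX 4070 Ti": 12,
}

MIN_VRAM_GB = 18  # Q4_K_M requirement from the table above

capable = [name for name, vram in cards.items() if vram >= MIN_VRAM_GB]
print(capable)  # 24 GB cards qualify; 16 GB and 12 GB cards do not
```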

Compare alternatives

Models worth comparing

Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.

Frequently asked

What's the minimum VRAM to run Mistral Medium 3 24B (dense)?

18 GB of VRAM is enough to run Mistral Medium 3 24B (dense) at the Q4_K_M quantization (file size 14.0 GB). Higher-quality quantizations need more.

Can I use Mistral Medium 3 24B (dense) commercially?

Mistral Medium 3 24B (dense) is released under the Mistral Research License, which has restrictions for commercial use. Review the license terms before using it in a product.

What's the context length of Mistral Medium 3 24B (dense)?

Mistral Medium 3 24B (dense) supports a context window of 262,144 tokens (about 262K).
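The 262,144 figure isn't arbitrary: it is a power of two (256 × 1,024 tokens, i.e. exactly 256 Ki), which is why it gets rounded to "262K" in casual usage. A quick check:

```python
context = 262_144

# 262,144 = 2^18 = 256 * 1024 tokens
assert context == 2 ** 18
assert context == 256 * 1024
print(f"{context:,} tokens")
```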

Source: huggingface.co/mistralai/Mistral-Medium-3-24B-dense

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.