
Hermes 3 Llama 3.1 70B

Hermes 3 at 70B. Workstation-tier agent-tuned model.

License: Llama 3.1 Community License · Released Aug 15, 2024 · Context: 131,072 tokens

Overview

Hermes 3 is NousResearch's agent-tuned fine-tune of Llama 3.1 at 70B parameters. It targets workstation-class hardware: even the smallest listed quantization (Q4_K_M) needs roughly 48 GB of VRAM. The tuning emphasizes system-prompt steering, so the model follows persona and formatting instructions closely, and agentic workflows such as tool use.

Strengths

  • System-prompt steering
  • Agent-tuned

Weaknesses

  • 48GB+ VRAM

Quantization variants

Each quantization trades some model quality for a smaller file and lower VRAM use. Q4_K_M is the most popular starting point.

Quantization | File size | VRAM required
Q4_K_M       | 40.0 GB   | 48 GB
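The 40.0 GB figure follows from simple arithmetic: file size ≈ parameter count × bits per weight ÷ 8. A minimal sketch, where the bits-per-weight values are rough averages I'm assuming for illustration, not official specifications:

```python
# Rough GGUF file-size estimate from parameter count and quantization density.
# The bits-per-weight values are approximate averages, not official specs.
BITS_PER_WEIGHT = {
    "Q4_K_M": 4.6,   # approximate; this quant mixes 4-bit and 6-bit blocks
    "Q8_0": 8.5,     # approximate
    "F16": 16.0,     # unquantized half precision
}

def estimated_size_gb(params_billions: float, quant: str) -> float:
    """Estimated file size in GB (1 GB = 1e9 bytes)."""
    bits = BITS_PER_WEIGHT[quant]
    return params_billions * 1e9 * bits / 8 / 1e9

print(f"Q4_K_M @ 70B ≈ {estimated_size_gb(70, 'Q4_K_M'):.1f} GB")
```

The estimate lands close to the table's 40.0 GB, which is why a 70B model that would need ~140 GB at F16 fits on a 48 GB card once quantized.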

Get the model

Ollama

One-line install

ollama run hermes3:70b
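Since system-prompt steering is one of Hermes 3's headline strengths, an Ollama Modelfile is a natural way to pin a persona. A minimal sketch; the prompt text and parameter values below are illustrative choices, not recommendations:

```
FROM hermes3:70b
SYSTEM You are a terse assistant. Answer in plain prose; show code only when asked.
PARAMETER temperature 0.6
PARAMETER num_ctx 8192
```

Save this as `Modelfile`, then build and run a named variant with `ollama create hermes3-terse -f Modelfile` and `ollama run hermes3-terse` (the name `hermes3-terse` is just an example).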

HuggingFace

Original weights

huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B

Source repository with the original full-precision weights. To run them locally you must quantize them yourself, for example with llama.cpp's `convert_hf_to_gguf.py` conversion script followed by its `llama-quantize` tool.

Hardware that runs this

Cards with enough VRAM for at least one quantization of Hermes 3 Llama 3.1 70B.

Compare alternatives

Models worth comparing

Models in the same parameter band, plus one tier above and below, so you can decide what actually fits your hardware.

Frequently asked

What's the minimum VRAM to run Hermes 3 Llama 3.1 70B?

48GB of VRAM is enough to run Hermes 3 Llama 3.1 70B at the Q4_K_M quantization (file size 40.0 GB). Higher-quality quantizations need more.
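The 8 GB gap between the 40 GB file and the 48 GB requirement is mostly KV cache and runtime overhead, and the KV cache grows linearly with context length. A rough sketch using Llama 3.1 70B's published architecture (80 layers, 8 KV heads via grouped-query attention, head dimension 128); the fp16 cache entries are an assumption, since some runtimes quantize the cache:

```python
# Rough KV-cache size for Llama 3.1 70B-class models.
# Architecture constants are Llama 3.1 70B's published config;
# 2-byte (fp16) cache entries are an assumption.
N_LAYERS = 80
N_KV_HEADS = 8       # grouped-query attention
HEAD_DIM = 128
BYTES_PER_ENTRY = 2  # fp16

def kv_cache_gb(context_tokens: int) -> float:
    """Estimated KV-cache size in GB for a given context length."""
    per_token = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * BYTES_PER_ENTRY  # 2x: keys and values
    return context_tokens * per_token / 1e9

for ctx in (8_192, 32_768, 131_072):
    print(f"{ctx:>7} tokens -> ~{kv_cache_gb(ctx):.1f} GB KV cache")
```

This is why 48 GB is quoted for modest contexts: filling the full 131,072-token window would, by this estimate, add roughly as much memory as the weights themselves unless the cache is quantized or offloaded.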

Can I use Hermes 3 Llama 3.1 70B commercially?

Yes — Hermes 3 Llama 3.1 70B ships under the Llama 3.1 Community License, which permits commercial use. Always read the license text before deployment.

What's the context length of Hermes 3 Llama 3.1 70B?

Hermes 3 Llama 3.1 70B supports a context window of 131,072 tokens (131,072 = 128 × 1,024, commonly described as 128K).

How do I install Hermes 3 Llama 3.1 70B with Ollama?

Run `ollama pull hermes3:70b` to download, then `ollama run hermes3:70b` to start a chat session. The default quantization is Q4_K_M.
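Beyond the CLI, `ollama run` is backed by a local REST API on port 11434. A minimal sketch of calling its `/api/generate` endpoint from Python with only the standard library; it assumes a locally running Ollama server, and the prompt text is illustrative:

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("hermes3:70b", "Summarize your strengths in one sentence.")
# Uncomment to send (requires a running Ollama server with the model pulled):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

With `"stream": False` the server returns one JSON object whose `response` field holds the full completion, rather than a stream of partial chunks.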

Source: huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.