RUNLOCALAI

Operator-grade instrument for local-AI hardware intelligence. Hand-written verdicts. Real benchmarks. Reproducible commands.

Operator: Fredoline Eruo

dolphin · 70B parameters · Commercial OK · Reviewed May 2026

Dolphin 3 Llama 3.3 70B

Eric Hartford's Dolphin 3 at 70B Llama 3.3 base. Less-restricted alternative for creative / unconstrained workflows.

License: Llama Community License · Released Sep 30, 2025 · Context: 131,072 tokens

Overview

Dolphin 3 is Cognitive Computations' (Eric Hartford's) fine-tune of Llama 3.3 70B with the refusal training removed. It targets creative, adversarial-testing, and otherwise unconstrained workflows where base Llama's alignment guardrails get in the way. Architecture, size, and hardware requirements are identical to the base model.

How to run it

Dolphin 3 Llama 3.3 70B is Cognitive Computations' uncensored fine-tune of Llama 3.3 70B, designed to remove alignment guardrails and respond to prompts that base Llama would refuse.

Run it at Q4_K_M via Ollama (ollama pull dolphin3:70b) or llama.cpp with -ngl 999 -fa -c 8192. The Q4_K_M file is ~40 GB on disk, so plan on 48 GB of VRAM for 4K context: an RTX A6000 (48GB) handles Q4_K_M, while an RTX 4090 24GB needs Q3_K_M with KV offload. Recommended: A100 80GB at AWQ-INT4. Throughput: ~15-25 tok/s on the A6000 at Q4_K_M.

The architecture is standard Llama 3.3, so compatibility is full. Dolphin's key characteristic is the removed refusal training: it will not answer with "as an AI I cannot...". Use cases: content generation without alignment filters, adversarial testing, creative writing with few constraints. The tradeoff: less guardrailed than standard Llama, so it may produce harmful content if prompted. License: Dolphin releases are typically tagged Apache 2.0, but verify on Hugging Face; the base Llama weights remain under the Llama Community License. Context: Llama 3.3's 128K, with 4-8K practical on 48 GB. The quick-start commands are sketched below.
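A minimal quick-start using the tag and flags from above; the GGUF filename is illustrative:

  # Option 1: Ollama (tag as given above; verify it exists in the catalog)
  ollama pull dolphin3:70b
  ollama run dolphin3:70b

  # Option 2: llama.cpp with full GPU offload, flash attention, 8K context
  ./llama-cli -m dolphin3-llama3.3-70B-Q4_K_M.gguf -ngl 999 -fa -c 8192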

Hardware guidance

  • Minimum: RTX 3090 24GB at Q3_K_M (4K context)
  • Recommended: RTX A6000 48GB at Q4_K_M (8K context)
  • Optimal: A100 80GB at AWQ-INT4 (enables 32K context)

The VRAM math is identical to Llama 3.3 70B: a 70B dense model at Q4 is ≈40 GB of weights, plus ~10 GB of KV cache at 8K context, for ~50 GB total, which makes a 48GB A6000 borderline. At Q3 the weights drop to ≈30 GB, within reach of a 24GB RTX 4090. Dual RTX 4090s (48 GB combined) run Q4 at 8K. A Mac Studio M4 Max 64GB manages Q4 at 5-10 tok/s. Cloud: an A100 80GB runs $5-10/hr. The fine-tune doesn't change architecture or size, so hardware requirements match the base model exactly.
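A back-of-envelope version of that math, assuming Q4_K_M averages roughly 4.8 bits per weight (a rule of thumb, not a measured value):

  # weights_GB = params * bits_per_weight / 8 / 1e9
  awk 'BEGIN { printf "weights: ~%.0f GB\n", 70e9 * 4.8 / 8 / 1e9 }'
  # prints "weights: ~42 GB"; add ~10 GB KV cache at 8K context for ~52 GB total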

What breaks first

  1. Unfiltered outputs. Dolphin will comply with harmful, toxic, or illegal requests that base Llama refuses. This is the point of "uncensored", but it means you're responsible for output filtering in your application.
  2. Quality regression on refused topics. Removing refusals can accidentally degrade quality on the topics base Llama was aligned to refuse; the model may produce lower-quality responses instead of refusing.
  3. System prompt bypass. Standard system prompt safety instructions are less effective on Dolphin. If you want guardrails, implement them in your application layer, not the system prompt (a minimal sketch follows this list).
  4. Abliteration artifacts. The uncensoring process (abliteration/RLHF removal) may introduce artifacts: repetition, logical inconsistencies, or degraded coherence on specific prompt types. Test your use case.
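A minimal sketch of application-layer filtering, assuming Ollama's HTTP API on localhost:11434; the prompt, the blocklist pattern, and the grep-based check are illustrative placeholders, not a production moderation pass:

  # Generate first, then screen the output before it reaches the user
  RESPONSE=$(curl -s http://localhost:11434/api/generate \
    -d '{"model":"dolphin3:70b","prompt":"...","stream":false}' | jq -r '.response')
  # Hypothetical blocklist; swap in a real moderation model or service
  if echo "$RESPONSE" | grep -qiE 'blocked-pattern-1|blocked-pattern-2'; then
    echo "[withheld by output filter]"
  else
    echo "$RESPONSE"
  fi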

Runtime recommendation

Ollama for quick-start (Dolphin is commonly in Ollama's catalog). llama.cpp for production. Standard Llama 3.3 stack applies — no special runtime required. Dolphin's only change is weight fine-tuning, not architecture.
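For the production path, a serving sketch using llama.cpp's bundled server; the GGUF filename and port are illustrative:

  # OpenAI-compatible endpoint with full GPU offload, flash attention, 8K context
  ./llama-server -m dolphin3-llama3.3-70B-Q4_K_M.gguf -ngl 999 -fa -c 8192 --port 8080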

Common beginner mistakes

  • Mistake: Deploying Dolphin in a customer-facing chatbot without output filtering. Fix: Dolphin has no refusal training and will comply with harmful prompts; implement content filtering in your application layer.
  • Mistake: Assuming Dolphin is "better" at all tasks because it's uncensored. Fix: Uncensored ≠ higher quality. Dolphin may score lower on standard benchmarks due to abliteration artifacts; use base Llama 3.3 for most production tasks.
  • Mistake: Using system prompt guardrails and expecting them to work. Fix: Dolphin's refusal mechanisms are removed, and system prompt safety instructions are largely ignored. Don't rely on them.
  • Mistake: Mixing Dolphin quant files with standard Llama 3.3 70B quant files. Fix: Dolphin is a fine-tune; the weights differ. Use Dolphin-specific GGUF files (one quick check below).
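A cheap sanity check for that last mix-up, assuming the Ollama tag from the quick-start:

  # Inspect what you actually pulled: family, parameter count, quantization
  ollama show dolphin3:70b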

Family & lineage

How this model relates to others in its lineage. Family members share architecture and training-data roots; parent / children edges record direct distillation or fine-tune relationships.

Parent / base model
  • Llama 3.3 70B Instruct · 70B · Datacenter

Family siblings (dolphin-3)
  • Dolphin 3.0 Llama 3.2 3B · 3B · Edge
  • Dolphin 3.0 Mistral 24B · 24B · Consumer
  • Dolphin 3 Llama 3.3 70B · 70B · You are here

Strengths

  • Less censored than base Llama

Weaknesses

  • Smaller community than base Llama 3.3

Quantization variants

Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.

Quantization · File size · VRAM required
  • AWQ-INT4 · 40.0 GB · 48 GB

Get the model

HuggingFace

Original weights

huggingface.co/cognitivecomputations/Dolphin3-Llama3.3-70B

Source repository: quantize directly from the original weights (no prebuilt quants listed here).
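A sketch of the GGUF route, assuming a llama.cpp checkout with its Python requirements installed; directory and file names are illustrative:

  # Convert the HF weights to GGUF, then quantize to Q4_K_M
  python convert_hf_to_gguf.py ./Dolphin3-Llama3.3-70B --outfile dolphin3-70b-f16.gguf
  ./llama-quantize dolphin3-70b-f16.gguf dolphin3-70b-Q4_K_M.gguf Q4_K_M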

Hardware that runs this

Cards with enough VRAM for at least one quantization of Dolphin 3 Llama 3.3 70B.

  • NVIDIA GB200 NVL72 · 13,824 GB · nvidia
  • AMD Instinct MI355X · 288 GB · amd
  • AMD Instinct MI325X · 256 GB · amd
  • AMD Instinct MI300X · 192 GB · amd
  • NVIDIA B200 · 192 GB · nvidia
  • NVIDIA H100 NVL · 188 GB · nvidia
  • NVIDIA H200 · 141 GB · nvidia
  • AMD Instinct MI250X · 128 GB · amd

Frequently asked

What's the minimum VRAM to run Dolphin 3 Llama 3.3 70B?

48GB of VRAM is enough to run Dolphin 3 Llama 3.3 70B at the AWQ-INT4 quantization (file size 40.0 GB). Higher-quality quantizations need more.

Can I use Dolphin 3 Llama 3.3 70B commercially?

Yes — Dolphin 3 Llama 3.3 70B ships under the Llama Community License, which permits commercial use. Always read the license text before deployment.

What's the context length of Dolphin 3 Llama 3.3 70B?

Dolphin 3 Llama 3.3 70B supports a context window of 131,072 tokens (128K).
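Note that Ollama defaults to a much smaller window than the model supports; you raise it per request with the num_ctx option (tag as in the quick-start; per the hardware guidance above, 8K is the realistic ceiling on 48 GB):

  # Request an 8K context window via Ollama's HTTP API
  curl -s http://localhost:11434/api/generate \
    -d '{"model":"dolphin3:70b","prompt":"...","options":{"num_ctx":8192},"stream":false}'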

Source: huggingface.co/cognitivecomputations/Dolphin3-Llama3.3-70B

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.


Models worth comparing

Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.

Same tier
Models in the same parameter band as this one
  • Llama 3.3 70B Instruct · llama · 70B · 9.1/10
  • DeepSeek R1 Distill Llama 70B · deepseek · 70B · 9.0/10
  • Qwen 2.5 72B Instruct · qwen · 72B · 9.0/10
  • Llama 3.1 70B Instruct · llama · 70B · 8.0/10
Step up
More capable — bigger memory footprint
  • DeepSeek V4 Pro (1.6T MoE) · deepseek · 1600B · unrated
  • Qwen 3.5 235B-A17B (MoE) · qwen · 397B · unrated
Step down
Smaller — faster, runs on weaker hardware
  • Qwen 3 30B-A3B · qwen · 30B · unrated
  • Gemma 4 31B Dense · gemma · 31B · unrated