EVA Llama 3.3 70B
Overview
EVA community's storytelling-focused fine-tune of Llama 3.3 70B. Popular in the creative-writing / roleplay community.
How to run it
EVA Llama 3.3 70B is a fine-tune of Llama 3.3 70B. EVA (Excellence in Virtual Agents) is a roleplay/conversational fine-tune designed for natural, engaging, persona-driven dialogue.

Run it at Q4_K_M via Ollama (ollama pull eva-llama3.3:70b) or llama.cpp with -ngl 999 -fa -c 4096. The Q4_K_M file is ~40 GB on disk. Minimum VRAM is 48 GB: an RTX A6000 (48 GB) runs Q4_K_M at 4K context, while a single RTX 4090 (24 GB) needs Q3_K_M with KV offload. Expect ~15-25 tok/s on an A6000 at Q4_K_M.

The architecture is standard Llama 3.3. EVA's tuning prioritizes character consistency, emotional tone, and conversational flow over factual accuracy and code generation. Use it for roleplay, character simulation, creative dialogue, and interactive fiction; avoid it for factual Q&A, coding, math, and agent tasks. The roleplay tuning makes outputs verbose and stylized: expect longer, more emotionally expressive responses than base Llama 3.3.

License: verify on Hugging Face (EVA models may be non-commercial). Context: Llama 3.3's 128K, though 4-8K is the practical ceiling on 48 GB. Conversational turns are typically short (2-4K tokens), which keeps KV-cache pressure low.
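As a concrete starting point, the commands above might look like this in practice. Both the Ollama tag and the GGUF filename are illustrative; confirm the exact names against the artifacts you actually download:

```shell
# Pull via Ollama (tag is illustrative; check the Ollama library for the real one)
ollama pull eva-llama3.3:70b

# Or serve a GGUF directly with llama.cpp: full GPU offload (-ngl 999),
# flash attention (-fa), 4K context (-c 4096)
./llama-server -m EVA-Llama-3.3-70B.Q4_K_M.gguf -ngl 999 -fa -c 4096 --port 8080
```

With llama-server running, any OpenAI-compatible client can point at http://localhost:8080.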
Hardware guidance
Minimum: RTX 3090 24GB at Q3_K_M (4K context, with partial offload). Recommended: RTX A6000 48GB at Q4_K_M (8K). The VRAM math is identical to base Llama 3.3 70B: 70B parameters at Q4 is ~40 GB of weights, plus roughly 10 GB of KV cache at 8K, for ~50 GB total, which makes a single A6000 48GB borderline. Dual RTX 4090s (48 GB combined) handle Q4 at 8K; a single RTX 4090 24GB needs Q3 with KV offload. Mac Studio M4 Max 64GB runs Q4 at 5-10 tok/s. Cloud: A100 80GB at $5-10/hr. Roleplay workloads are latency-sensitive, so prioritize tok/s over context length; AWQ-INT4 on an A100 gives the fastest generation.
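The weight-only part of that math is easy to sanity-check. A minimal sketch, assuming ~70.6B parameters and ~4.5 effective bits per weight for Q4_K_M and ~3.9 for Q3_K_M (the exact bits-per-weight varies by quant recipe, and KV cache plus runtime buffers come on top):

```python
def weight_vram_gb(params_billions: float, bits_per_weight: float) -> float:
    """Weight-only memory estimate in GB: parameters * bits / 8 bytes."""
    return params_billions * bits_per_weight / 8

# Q4_K_M at ~4.5 effective bpw: roughly the ~40 GB quoted above
q4 = weight_vram_gb(70.6, 4.5)  # ≈ 39.7 GB
# Q3_K_M at ~3.9 effective bpw: why even Q3 overflows a 24 GB card without offload
q3 = weight_vram_gb(70.6, 3.9)  # ≈ 34.4 GB
print(round(q4, 1), round(q3, 1))
```

The same two-line estimate works for any dense model: swap in the parameter count and the quant's effective bits per weight.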
What breaks first
1. Factual accuracy degradation. EVA's roleplay tuning prioritizes conversational engagement over factual precision. The model will confidently produce incorrect facts to maintain character consistency.
2. Out-of-character breaks. EVA may break character under adversarial prompts or complex logical demands. Character consistency degrades with long contexts (>4K tokens).
3. Repetition and looping. Roleplay-tuned models are prone to conversational loops: repeating phrases and circling back to earlier topics. Set repetition_penalty=1.1-1.15 and use stop sequences.
4. Q3 character degradation. Roleplay quality depends on nuanced language. At Q3, subtle emotional tones and character voice degrade more than factual content would. Use Q4_K_M minimum for character-based use.
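Repetition loops can also be caught mechanically on the client side. A minimal sketch (a hypothetical helper, not part of any runtime) that flags a response when any word n-gram repeats too often, which you could use as a trigger to regenerate:

```python
def has_loop(text: str, n: int = 8, threshold: int = 3) -> bool:
    """Return True if any n-word phrase appears `threshold` or more times."""
    words = text.split()
    counts: dict[str, int] = {}
    for i in range(len(words) - n + 1):
        gram = " ".join(words[i:i + n])
        counts[gram] = counts.get(gram, 0) + 1
        if counts[gram] >= threshold:
            return True
    return False

# A response that circles the same phrase trips the detector
print(has_loop("she smiles softly and looks away " * 4))  # True
```

Tune n and threshold to taste: smaller n catches tighter loops but risks false positives on normal dialogue.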
Runtime recommendation
Common beginner mistakes
- Mistake: Using EVA for factual Q&A or coding. Fix: EVA is roleplay-tuned, so factual accuracy is degraded compared to base Llama 3.3. Use Llama 3.3 70B or Qwen 3 72B for knowledge work.
- Mistake: Treating EVA outputs as factual. Fix: EVA prioritizes character consistency, not truth. Always fact-check its statements separately.
- Mistake: Setting temperature=0 and expecting creative dialogue. Fix: EVA needs temperature 0.7-0.9 for natural conversation; temp=0 produces robotic, repetitive dialogue.
- Mistake: Expecting EVA to maintain character over 8K+ context. Fix: Character coherence degrades with long context. Keep conversations focused and under 4K tokens for best character consistency.
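The sampling-related fixes above can be combined in one request body. A sketch for llama.cpp's /completion endpoint, using its real parameter names (temperature, repeat_penalty, stop, n_predict); the prompt and stop strings are placeholders to adapt to your chat template:

```python
import json

# Sampling settings for roleplay use: creative but loop-resistant
payload = {
    "prompt": "...",               # your character card + chat history
    "temperature": 0.8,            # 0.7-0.9 for natural dialogue; 0 reads robotic
    "repeat_penalty": 1.1,         # 1.1-1.15 to damp conversational loops
    "stop": ["\nUser:", "\n###"],  # stop sequences; match your chat template
    "n_predict": 512,              # cap response length; EVA tends to run long
}
body = json.dumps(payload)  # POST this to http://localhost:8080/completion
```

Ollama accepts the same ideas under slightly different names (e.g. num_predict), so translate the keys if you run through its API instead.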
Family & lineage
How this model relates to others in its lineage. Family members share architecture and training-data roots; parent / children edges record direct distillation or fine-tune relationships.
Strengths
- Strong creative-writing benchmarks
- Long-context narrative coherence
Weaknesses
- Smaller community than base Llama
Quantization variants
Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| AWQ-INT4 | 40.0 GB | 48 GB |
Get the model
HuggingFace
Original weights
Source repository. Quantize directly from these weights.
Hardware that runs this
Cards with enough VRAM for at least one quantization of EVA Llama 3.3 70B.
Frequently asked
What's the minimum VRAM to run EVA Llama 3.3 70B?
48 GB for Q4_K_M at 4K context (e.g. an RTX A6000). A 24 GB card such as an RTX 3090 or 4090 can run Q3_K_M with KV or layer offload.
Can I use EVA Llama 3.3 70B commercially?
Check the license on the Hugging Face repository; EVA models may be non-commercial.
What's the context length of EVA Llama 3.3 70B?
It inherits Llama 3.3's 128K context window, but 4-8K is the practical limit on 48 GB of VRAM.
Source: huggingface.co/EVA-UNIT-01/EVA-Llama-3.3-70B
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.
Related — keep moving
Verify EVA Llama 3.3 70B runs on your specific hardware before committing money.