
Tokens per second

Tokens per second (tok/s) is the most-cited LLM throughput metric, and also the most misunderstood. Generation has two distinct phases: prefill (processing the input prompt, typically 100-1000+ tok/s on modern hardware) and decode (generating output tokens, typically 10-200 tok/s). When operators say "tok/s," they usually mean decode tok/s, which is the user-visible streaming speed.
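To see the split in one measurement, here is a minimal timing sketch. The measure_stream and fake_stream names are ours for illustration; the token iterator could come from any runtime that streams output (llama-cpp-python, Ollama, an OpenAI-compatible endpoint). Time to first token is dominated by prefill; everything after it is decode.

import time
from typing import Iterable, Tuple

def measure_stream(token_stream: Iterable[str]) -> Tuple[float, float]:
    """Return (TTFT seconds, decode tok/s) for one streamed generation."""
    start = time.perf_counter()
    first_token_at = None
    n_tokens = 0
    for _ in token_stream:
        if first_token_at is None:
            first_token_at = time.perf_counter()  # prefill ends here
        n_tokens += 1
    end = time.perf_counter()
    if first_token_at is None:
        raise RuntimeError("stream produced no tokens")
    ttft = first_token_at - start
    decode_time = end - first_token_at
    # Decode tok/s excludes the first token, which belongs to prefill.
    decode_tps = (n_tokens - 1) / decode_time if n_tokens > 1 and decode_time > 0 else 0.0
    return ttft, decode_tps

# Demo with a fake stream: ~0.5 s "prefill", then ~20 tok/s "decode".
def fake_stream():
    time.sleep(0.5)
    for _ in range(40):
        time.sleep(0.05)
        yield "tok"

ttft, tps = measure_stream(fake_stream())
print(f"TTFT {ttft:.2f} s, decode {tps:.1f} tok/s")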

What tok/s doesn't tell you: TTFT (time to first token); context-degradation behavior (how much does throughput drop at 32K context vs 1K?); concurrency scaling (does throughput hold at 8 concurrent users?); and thermal-throttle curves (does sustained-load tok/s match cold-boot tok/s?). A claim like "60 tok/s on an RTX 4090" can describe wildly different conditions depending on prompt length, batch size, quant, runtime, and system load.
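One of those variables is cheap to pin down yourself. Ollama's non-streaming /api/generate response reports per-phase token counts and durations in nanoseconds (prompt_eval_count / prompt_eval_duration for prefill, eval_count / eval_duration for decode), so you can sweep prompt sizes and watch context degradation directly. A sketch, assuming a local Ollama server on the default port; the model name is a placeholder for whatever you have pulled:

import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint
MODEL = "llama3.1:8b"  # placeholder: substitute a model you have pulled

def phase_tps(count: int, dur_ns: int) -> float:
    return count / (dur_ns / 1e9) if dur_ns else 0.0

def bench(prompt: str, max_tokens: int = 128) -> dict:
    """One generation; Ollama reports per-phase counts and durations (ns)."""
    body = json.dumps({
        "model": MODEL,
        "prompt": prompt,
        "stream": False,
        "options": {"num_predict": max_tokens},
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        r = json.load(resp)
    # Prefill fields can be zero or absent when the prompt is KV-cached.
    return {
        "prefill_tps": phase_tps(r.get("prompt_eval_count", 0),
                                 r.get("prompt_eval_duration", 0)),
        "decode_tps": phase_tps(r.get("eval_count", 0),
                                r.get("eval_duration", 0)),
    }

# Context-degradation sweep: same task, padded to growing prompt sizes.
for chars in (1_000, 8_000, 32_000):
    prompt = ("lorem ipsum " * (chars // 12 + 1))[:chars] + "\nSummarize the above."
    print(chars, bench(prompt))

If decode tok/s at the largest prompt comes in far below the 1K figure, you have measured exactly the degradation a single headline number hides.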

Operator discipline: when you read a tok/s benchmark, ask four questions. (1) Is it measured or estimated? (2) What are the prompt and output lengths? (3) What are the batch size and concurrency? (4) What runtime, quant, and flash-attention version? Without these, the number is a vibe, not a measurement. RunLocalAI's benchmark queue tracks pending measurements with full provenance fields; published benchmarks carry confidence labels (high/medium/low/unverified).
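For illustration, here is what a provenance-complete record could look like, one field per question above plus the confidence label. This is a hypothetical schema of ours, not RunLocalAI's actual benchmark format, and the example values are placeholders:

from dataclasses import dataclass, asdict
from typing import Literal

@dataclass
class TokPerSecRecord:
    # Hypothetical provenance schema, not RunLocalAI's real format.
    measured: bool       # (1) measured, or vendor/estimated?
    prompt_tokens: int   # (2) input length...
    output_tokens: int   #     ...and output length
    batch_size: int      # (3) batch size...
    concurrency: int     #     ...and simultaneous requests
    runtime: str         # (4) e.g. "llama.cpp b4200"
    quant: str           #     e.g. "Q4_K_M"
    flash_attention: str #     e.g. "FA2" or "off"
    decode_tps: float    # the headline number, last for a reason
    confidence: Literal["high", "medium", "low", "unverified"]

# Placeholder values, for shape only:
rec = TokPerSecRecord(measured=True, prompt_tokens=1024, output_tokens=256,
                      batch_size=1, concurrency=1, runtime="llama.cpp b4200",
                      quant="Q4_K_M", flash_attention="FA2",
                      decode_tps=61.3, confidence="high")
print(asdict(rec))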

Related terms

  • KV Cache
  • Latency
  • Throughput
  • Time to first token (TTFT)

See also

  • tool: vllm
  • tool: llama-cpp
  • tool: ollama

Reviewed by Fredoline Eruo. See our editorial policy.
