RUNLOCALAI · v38

Operator-grade instrument for local-AI hardware intelligence. Hand-written verdicts. Real benchmarks. Reproducible commands.

OP · Fredoline Eruo


Submit a benchmark

Paste your runlocalai-bench terminal output. We auto-parse the model, cold-start time, median tok/s, P5/P95, variance, OS, and runtime. You pick the hardware and can optionally add a name. Submitting takes about 30 seconds.
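As a rough sketch of the stats listed above (assuming the bench yields one tok/s sample per run; the exact runlocalai-bench output format is not shown here), median, P5/P95, and a variance figure can be derived like this:

```javascript
// Hypothetical per-run throughput samples (tok/s) from 5 bench runs.
const samples = [41.2, 43.8, 42.5, 40.9, 43.1];

// Nearest-rank percentile on a sorted copy of the samples.
function percentile(xs, p) {
  const sorted = [...xs].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.min(sorted.length - 1, Math.max(0, idx))];
}

const median = percentile(samples, 50); // middle run
const p5 = percentile(samples, 5);      // worst-case-ish run
const p95 = percentile(samples, 95);    // best-case-ish run

// Coefficient of variation as a simple run-to-run variance figure.
const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
const sd = Math.sqrt(
  samples.reduce((a, x) => a + (x - mean) ** 2, 0) / samples.length
);
const variancePct = (sd / mean) * 100;
```

With only 5 runs the percentiles collapse to the min and max samples, which is why the site reports P5/P95 rather than a tighter band.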

Editorial review still applies — same review queue as the full form. Submissions are queued, not auto-published.

Step 1 — Run the bench

Run this on the machine with the model loaded. Takes ~2 minutes for 5 runs.

# One-liner, no install ("-" tells node to read the script from stdin,
# so --model is passed to the script instead of to node itself):
curl -fsSL https://www.runlocalai.co/bench.mjs | node - --model llama3.1:8b

# Or download + customize:
curl -fsSL https://www.runlocalai.co/bench.mjs -o bench.mjs
node bench.mjs --model qwen3:14b --prompt "Explain attention"

Want even less friction? Run with --submit and the bench CLI sends results directly via the API. Use this page instead when you want to review the values before they hit the queue.

Step 2 — Paste, pick hardware, submit

Paste the full terminal output below. Auto-detection fills in the fields. Pick the matching hardware, then hit submit. (Submit stays disabled until output is pasted and both hardware and model are selected.)

What gets stored

Your submission lands in the editorial review queue. Nothing auto-publishes. When approved, it appears on:

  • The model's detail page (/models/<slug>) as a measured benchmark
  • The hardware's detail page (/hardware/<slug>) and the leaderboard's confidence chip
  • The community benchmarks feed
  • The cost calculator and quant advisor — your measurement replaces our bandwidth-derived estimate for the exact model × hardware × quant combo you ran
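The "bandwidth-derived estimate" mentioned above can be sketched with the common rule of thumb that decode speed is capped by memory bandwidth divided by the bytes streamed per token (roughly the quantized weight size). This is an illustrative approximation, not necessarily the site's exact formula, and the numbers are made up:

```javascript
// Rough decode-speed ceiling: tok/s ≈ memory bandwidth / bytes per token,
// where bytes per token ≈ the model's quantized weight size (each decode
// step streams all weights once). Illustrative only, not site data.
function estimateTokS(bandwidthGBs, paramsB, bitsPerWeight) {
  const modelGB = (paramsB * 1e9 * bitsPerWeight) / 8 / 1e9; // weights in GB
  return bandwidthGBs / modelGB;
}

// e.g. an 8B-parameter model at 4-bit on a ~1000 GB/s GPU:
const est = estimateTokS(1000, 8, 4); // upper bound, real runs come in lower
```

A measured benchmark replaces this kind of ceiling with what the hardware actually delivers, which is why submissions are valuable.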

Your IP is hashed daily (never stored raw). Email is optional and only used to contact you with reproduction questions. See privacy for the full handling policy.