Evaluation metrics
GSM8K
GSM8K is a benchmark of 8,500 grade-school math word problems requiring 2–8 reasoning steps. Released by OpenAI in 2021 to test multi-step arithmetic reasoning, it scores models on whether the final numeric answer matches the ground truth.
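The scoring described above is usually implemented as exact match on the final number in the model's output. A minimal sketch of that convention (the function names are hypothetical; harnesses differ in how aggressively they normalize answers):

```python
import re

def extract_final_answer(text: str):
    """Return the last number in a string, a common GSM8K extraction heuristic."""
    # Strip thousands separators so "1,000" parses as one number.
    numbers = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return numbers[-1] if numbers else None

def is_correct(model_output: str, ground_truth: str) -> bool:
    """Exact match on the final numeric answer."""
    pred = extract_final_answer(model_output)
    gold = extract_final_answer(ground_truth)
    return pred is not None and gold is not None and float(pred) == float(gold)

# GSM8K ground-truth solutions end with "#### <answer>".
gold = "She sold 48 clips in April and half as many in May: 48 + 24 = 72\n#### 72"
output = "48 in April plus 24 in May gives 72 clips in total."
print(is_correct(output, gold))  # True
```

Because extraction takes the *last* number, verbose outputs that append extra arithmetic after the answer can be scored wrong, which is one reason harnesses often prompt for a fixed answer format.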
Long since saturated by frontier models (>95%), GSM8K remains a useful sanity check for local AI: a quantization that drops GSM8K by 5+ points has lost reasoning fidelity, even if perplexity barely moved.
Common gotcha: chain-of-thought prompting boosts GSM8K dramatically (often +20 points), so benchmark numbers are only comparable when the prompting strategy matches.