Random Seed
A random seed initializes the pseudo-random generator that drives sampling at temperature > 0. Same seed + same prompt + same settings + same model + same runtime → same output.
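To make the reproducibility claim concrete, here is a minimal sketch of seeded temperature sampling using Python's `random` module as a stand-in for a runtime's sampler. The logits, temperature value, and function name are illustrative, not taken from any particular runtime:

```python
import math
import random

def sample_token(logits, temperature, rng):
    # Softmax with temperature over toy logits, then one seeded draw.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # rng.random() is fully determined by the seed used to create rng.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5, -1.0]  # toy "next-token" scores

# Two generators created with the same seed produce identical sequences.
rng_a = random.Random(42)
rng_b = random.Random(42)
seq_a = [sample_token(logits, 0.8, rng_a) for _ in range(5)]
seq_b = [sample_token(logits, 0.8, rng_b) for _ in range(5)]
print(seq_a == seq_b)  # True: same seed, same settings, same draws
```

Changing the seed (or the temperature) changes the random draws, which is exactly why all sampling settings must match, not just the seed.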
Seed control is supported across most local-AI runtimes: llama.cpp's `--seed` flag, vLLM's `seed` field in `SamplingParams`, and Ollama's `seed` option in the request body. Setting the seed to -1 typically requests a new random seed for each request.
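As an illustration, a request body for Ollama's `/api/generate` endpoint can pin the seed inside the `options` object. The model name and prompt below are placeholders:

```json
{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "options": {
    "seed": 42,
    "temperature": 0.8
  }
}
```

With the same seed, prompt, and options, repeated requests should return the same completion on the same machine and runtime version.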
A caveat from deterministic decoding: even with a fixed seed, reproducibility across different hardware, batch sizes, or runtime versions is not guaranteed, since floating-point kernels and execution order can differ. Use the seed for within-session reproducibility, not as a cross-system stability mechanism.
Reviewed by Fredoline Eruo.