NVIDIA GeForce GTX 1080
Pascal flagship for two years. 8 GB GDDR5X at 320 GB/s — better bandwidth than the 1070. Runs 7B Q4 at ~30-45 tok/s; 13B Q4 with offload but slow. The pre-Turing flagship that still has a meaningful used-market presence.
Extrapolated from 320 GB/s bandwidth — 38.4 tok/s estimated. No measured benchmarks yet.
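The 38.4 tok/s figure can be reproduced with a back-of-envelope model: token generation is memory-bound, so throughput is roughly effective bandwidth divided by weight size. A minimal sketch, assuming ~50% effective bandwidth utilization and ~4.17 GB of 7B Q4 weights (both are assumptions for illustration, not the site's published methodology):

```python
# Memory-bound throughput estimate: each generated token reads every
# weight once, so tok/s ~ effective bandwidth / model size.
def estimate_tok_s(bandwidth_gb_s: float, model_gb: float,
                   efficiency: float = 0.5) -> float:
    # efficiency is an assumed utilization factor, not a measured value
    return bandwidth_gb_s * efficiency / model_gb

print(round(estimate_tok_s(320, 4.17), 1))  # -> 38.4, matching the estimate above
```

Changing either assumption moves the estimate, which is why measured runs are still wanted.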
Plain-English: Comfortable for 7B chat.
Verdicts extrapolated from catalog VRAM + bandwidth + ecosystem flags. Hover any chip for the rationale. Want measured numbers? Submit your own run with runlocalai-bench --submit.
This card is for the operator building a budget local AI rig who needs CUDA and can work within 8 GB VRAM. It's not for running large models or production workloads.
On 7B Q4 models, the GTX 1080 delivers ~30-45 tok/s, which is usable for chat and code completion. 13B Q4 models require offloading to system RAM, dropping to ~5-10 tok/s — barely interactive. The 320 GB/s bandwidth keeps small models snappy.
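The offload penalty can be illustrated with a crude two-path model: layers resident in VRAM are read at GPU memory bandwidth, while offloaded layers are bottlenecked by the far slower CPU/system-memory path. All numbers below are illustrative assumptions (an assumed ~16 GB/s effective slow path, ~50% GPU bandwidth utilization), not measurements:

```python
# Crude model of partial offload: per-token time is the sum of the
# GPU-resident read time and the slow-path read time for offloaded layers.
def offload_tok_s(model_gb: float, gpu_fraction: float,
                  vram_bw: float = 320, slow_bw: float = 16,
                  eff: float = 0.5) -> float:
    gpu_time = model_gb * gpu_fraction / (vram_bw * eff)   # s/token, VRAM side
    cpu_time = model_gb * (1 - gpu_fraction) / slow_bw     # s/token, slow side
    return 1 / (gpu_time + cpu_time)

# 13B Q4 (~7.3 GB) with ~85% of layers fitting in VRAM:
print(round(offload_tok_s(7.3, gpu_fraction=0.85), 1))  # -> 9.3, inside the 5-10 range
```

Even a modest offloaded fraction dominates the per-token time, which is why partial offload falls off a cliff rather than degrading gradually.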
8 GB VRAM is the hard ceiling. Anything above 7B Q4 or 13B Q3_K_M will not fit. There are no Tensor Cores, no FP8 support, and Pascal's FP16 throughput is a small fraction of its FP32 rate, so inference falls back to FP32 on the CUDA cores. Flash Attention and other newer optimizations may be limited or unavailable.
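A rough way to check those fit claims: weight size is parameters times bits-per-weight, plus headroom for KV cache and CUDA context. The bits-per-weight figures and the 1 GB overhead below are approximations for illustration, not exact quant sizes:

```python
# Hedged sketch: does a quantised model fit in 8 GB of VRAM?
def fits_in_vram(params_b: float, bits_per_weight: float,
                 vram_gb: float = 8.0, overhead_gb: float = 1.0) -> bool:
    weights_gb = params_b * bits_per_weight / 8  # params in billions -> GB
    return weights_gb + overhead_gb <= vram_gb

print(fits_in_vram(7, 4.5))    # 7B at ~4.5 bpw (Q4-class): True
print(fits_in_vram(13, 4.5))   # 13B at ~4.5 bpw: False, needs offload
print(fits_in_vram(13, 3.4))   # 13B at ~3.4 bpw (Q3_K_M-class): True
```

Longer contexts grow the KV cache, so the real overhead rises well above 1 GB past ~8K tokens.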
Pass on this card if you need to run 13B models entirely on GPU, or if you plan to use models with context windows beyond 8K tokens. Also skip if you want to experiment with the latest quantization formats that require Turing or newer.
At ~$180 used, this is a cheap entry point for local AI. It's a stopgap, not a long-term solution.
Why this rating
The GTX 1080 offers decent performance for small models at a low used price, but its 8 GB VRAM and lack of modern features limit its usefulness for larger or more demanding workloads. It's a passable starter card, not a workhorse.
Specs
| Spec | Value |
| --- | --- |
| VRAM | 8 GB |
| Power draw | 180 W |
| Released | 2016 |
| MSRP | $599 |
| Backends | CUDA, Vulkan |
Models that fit
Open-weight models small enough to run on NVIDIA GeForce GTX 1080 with usable context.
Hardware worth comparing
Same VRAM tier, plus one step above and one below, so you can frame the buying decision against real options.
Frequently asked
What models can NVIDIA GeForce GTX 1080 run?
Does NVIDIA GeForce GTX 1080 support CUDA?
How much does NVIDIA GeForce GTX 1080 cost?
Where next?
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify hardware specifications.