NVIDIA GeForce GTX 1080 Ti
Pascal halo card. 11 GB GDDR5X at 484 GB/s — outperforms many newer mid-range cards on raw bandwidth. Runs 7B Q4 at ~50-65 tok/s, 13B Q4 fits comfortably at ~25-35 tok/s. The legendary 'still relevant' card for AI on a budget; used $230-280 makes it the value flagship of 2026.
Extrapolated from 484 GB/s bandwidth — 58.1 tok/s estimated. No measured benchmarks yet.
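The extrapolation above can be sketched in a few lines. This is a rough model, not the site's actual benchmark formula: it assumes decoding is memory-bandwidth-bound (each generated token streams roughly the whole quantized model through the GPU), and the `efficiency` factor is a hypothetical fudge for real-world overhead.

```python
# Assumption: bandwidth-bound decoding; tok/s ≈ effective bandwidth / bytes per token.
def estimate_tok_per_s(bandwidth_gb_s: float, model_size_gb: float,
                       efficiency: float = 0.5) -> float:
    """Estimate decode speed: effective bandwidth divided by model bytes read per token."""
    return bandwidth_gb_s * efficiency / model_size_gb

# GTX 1080 Ti: 484 GB/s; a 7B model at Q4 is roughly 4 GB on disk.
print(round(estimate_tok_per_s(484, 4.0), 1))  # lands in the quoted 50-65 tok/s range
```

Plugging in a ~6.5 GB 13B Q4 file instead drops the estimate into the quoted 25-35 tok/s band, which is why a single bandwidth figure can seed both verdicts.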
Plain-English: Best for 7B; 14B is tight — coding agent feels deliberate; vision models supported.
Verdicts extrapolated from catalog VRAM + bandwidth + ecosystem flags. Hover any chip for the rationale. Want measured numbers? Submit your own run with runlocalai-bench --submit.
This card is for the operator who needs a capable local inference rig on a strict budget and is willing to accept the limits of an older architecture. The GTX 1080 Ti runs 7B Q4 models at ~50-65 tok/s and 13B Q4 at ~25-35 tok/s, making it a strong performer for chat and code-completion workloads. Its 11 GB of VRAM fits 13B Q4 comfortably, and ~30B models at Q3 can be squeezed in with partial CPU offload. Larger models like 34B or 70B are out of reach, however, and Pascal has no tensor cores and severely throttled FP16 throughput, so mixed-precision inference falls back to slower paths. CUDA support remains solid, but newer kernels such as Flash Attention target Ampere and later and may run unoptimized or not at all. Pass on this card if you need to run 70B+ models, require FP16 throughput for training, or want guaranteed support for the latest software. At ~$250 used, it's the value champion for 7B-13B inference; just expect to upgrade when larger models become your daily driver.
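The "fits comfortably" and "out of reach" calls above reduce to simple arithmetic. A minimal sketch, assuming weight memory is roughly parameters times bits per weight, plus a flat overhead for KV cache, activations, and CUDA context (the overhead figure here is illustrative, not measured):

```python
# Rough "does it fit" check behind the verdicts.
# Assumption: weights ≈ params × bits/8; overhead_gb is an illustrative allowance
# for KV cache, activations, and CUDA context.
def fits_in_vram(params_b: float, quant_bits: float,
                 vram_gb: float, overhead_gb: float = 1.5) -> bool:
    weights_gb = params_b * quant_bits / 8  # e.g. 13B at ~4.5 effective bits ≈ 7.3 GB
    return weights_gb + overhead_gb <= vram_gb

print(fits_in_vram(13, 4.5, 11))  # 13B Q4 on 11 GB: fits
print(fits_in_vram(34, 4.5, 11))  # 34B Q4 on 11 GB: does not
```

The ~4.5 effective bits stands in for a typical Q4 variant; heavier quants shift the boundary, which is how ~30B at Q3 ends up borderline rather than flatly impossible.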
Why this rating
The GTX 1080 Ti offers exceptional price-to-performance for 7B-13B inference, with high bandwidth and sufficient VRAM for its era. It loses points for lack of modern features and inability to handle larger models, but remains a top budget pick.
Search-fallback links: editorial hasn't yet curated retailer URLs for this card. Approx. used price: $250.
Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.
Specs
| Spec | Value |
| --- | --- |
| VRAM | 11 GB |
| Memory bandwidth | 484 GB/s |
| Power draw | 250 W |
| Released | 2017 |
| MSRP | $699 |
| Backends | CUDA, Vulkan |
Models that fit
Open-weight models small enough to run on NVIDIA GeForce GTX 1080 Ti with usable context.
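"Usable context" is doing real work in that sentence: the KV cache grows linearly with context length and competes with the weights for the same 11 GB. A sketch under assumed, Llama-2-13B-like dimensions (40 layers, 40 KV heads, head dim 128, fp16 cache); these numbers are illustrative, not a spec for any particular model:

```python
# KV cache VRAM as a function of context length.
# K and V each store n_layers × n_kv_heads × head_dim values per token.
def kv_cache_gb(ctx_len: int, n_layers: int = 40, n_kv_heads: int = 40,
                head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * ctx_len / 1e9

print(round(kv_cache_gb(4096), 2))  # GB consumed by the cache alone at 4k context
```

With a ~7.3 GB 13B Q4 file already loaded, a multi-gigabyte cache at long context is what turns "fits comfortably" into "fits with the context dialed back".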
Hardware worth comparing
Cards in the same VRAM tier, plus one step above and one below, so you can frame the buying decision against real options.
Frequently asked
What models can NVIDIA GeForce GTX 1080 Ti run?
Does NVIDIA GeForce GTX 1080 Ti support CUDA?
How much does NVIDIA GeForce GTX 1080 Ti cost?
Where next?
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify hardware specifications.