NVIDIA GeForce RTX 5080
Blackwell · RTX 50 Series · ACTIVE
Specifications
| Spec | Value |
|---|---|
| VRAM | 16 GB |
| Memory type | GDDR7 |
| Bus width | 256-bit |
| Memory bandwidth | 960 GB/s |
| CUDA cores | 10,752 |
| Tensor cores | 336 |
| FP16 | 112.6 TFLOPS |
| FP8 | 225.2 TFLOPS |
| TDP | 360W |
| Power connector | 16-pin (12V-2×6) |
| Card length | 304 mm |
| Slot width | 2.5 slots |
| PCIe | Gen 5 x16 |
| CUDA compute capability | 12.0 |
| Max model (Q4) | ~28B parameters |
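The "~28B max" figure above can be sanity-checked from the VRAM size alone. A minimal sketch, assuming ~4.85 bits/weight for Q4_K_M and ~2 GB reserved for KV cache and runtime overhead (both figures are approximations, not vendor numbers):

```python
# Rough fit estimate for the largest Q4_K_M model in a given VRAM budget.
# Assumptions: Q4_K_M averages ~4.85 bits/weight; ~2 GB reserved for
# KV cache, activations, and CUDA runtime overhead.

BITS_PER_WEIGHT_Q4KM = 4.85   # approximate average for Q4_K_M
OVERHEAD_GB = 2.0             # reserved headroom (assumption)

def max_params_b(vram_gb: float) -> float:
    """Largest Q4_K_M model (billions of params) that fits in vram_gb."""
    usable_bytes = (vram_gb - OVERHEAD_GB) * 1024**3
    return usable_bytes / (BITS_PER_WEIGHT_Q4KM / 8) / 1e9

print(round(max_params_b(16)))  # → 25, same ballpark as the ~28B above
```

With less reserved headroom (short contexts, no desktop compositor on the card) the estimate moves closer to the ~28B quoted in the table.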
Inference Benchmarks (Q4_K_M)
| Model | Throughput |
|---|---|
| Llama 3.3 8B | 140.0 tok/s |
| Qwen 3 32B | 35.0 tok/s* |
| Llama 3.3 70B | — |
llama.cpp, batch_size=1, ctx=4096, single GPU. Values marked with * are estimated.
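Batch-1 decode is memory-bandwidth-bound: each generated token reads roughly the full weight set once, so tok/s ≈ bandwidth ÷ model bytes × efficiency. A back-of-the-envelope check against the 8B number above, assuming ~4.85 bits/weight for Q4_K_M and ~70% effective bandwidth utilization (both are assumptions, not measured values):

```python
# Bandwidth-bound throughput estimate: tok/s ≈ BW / model_size × efficiency.
# Assumptions: Q4_K_M ≈ 4.85 bits/weight; ~70% effective bandwidth
# utilization, a typical figure for llama.cpp batch-1 decode.

BW_GBPS = 960.0  # RTX 5080 memory bandwidth, GB/s (from the spec table)

def est_tok_s(params_b: float, bits_per_weight: float = 4.85,
              efficiency: float = 0.7) -> float:
    model_gb = params_b * bits_per_weight / 8
    return BW_GBPS / model_gb * efficiency

print(round(est_tok_s(8)))   # → 139, close to the measured 140 tok/s
```

The same formula explains why a 32B model lands near 35 tok/s even in the best case: four times the weights means roughly a quarter of the throughput.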
What Can You Run?
| Model | Q4_K_M | Q8_0 | FP16 |
|---|---|---|---|
| Llama 3.3 8B | Excellent (~140 tok/s) | Usable | Won't fit |
| Llama 3.3 70B (70.6B) | Won't fit | Won't fit | Won't fit |
| Qwen 3 8B (8.2B) | Usable | Usable | Won't fit |
| Qwen 3 32B (32.8B) | Won't fit (~35 tok/s*) | Won't fit | Won't fit |
| DeepSeek R1 70B (70.6B) | Won't fit | Won't fit | Won't fit |
| Mistral Nemo 12B (12.2B) | Usable | Usable | Won't fit |
| Phi-4 14B | Usable | Won't fit | Won't fit |
| Gemma 3 27B (27.4B) | Won't fit | Won't fit | Won't fit |
| Codestral 25B (25.3B) | Won't fit | Won't fit | Won't fit |
| Command R 35B | Won't fit | Won't fit | Won't fit |
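The fit column above reduces to a simple comparison: weight bytes at the chosen quantization, plus headroom, against 16 GB of VRAM. A minimal sketch of that logic; the bits-per-weight values are approximations and the 2 GB overhead is an assumption:

```python
# Fit check: model bytes at a given quantization + overhead vs. VRAM.
# Bits-per-weight are approximate averages for llama.cpp formats;
# the 2 GB overhead (KV cache + runtime) is an assumption.

BITS = {"Q4_K_M": 4.85, "Q8_0": 8.5, "FP16": 16.0}
VRAM_GB = 16.0
OVERHEAD_GB = 2.0

def fits(params_b: float, quant: str) -> bool:
    """True if a params_b-billion-parameter model fits fully in VRAM."""
    model_gb = params_b * BITS[quant] / 8
    return model_gb + OVERHEAD_GB <= VRAM_GB

for model, size in [("Llama 3.3 8B", 8.0), ("Qwen 3 32B", 32.8),
                    ("Gemma 3 27B", 27.4)]:
    print(model, {q: fits(size, q) for q in BITS})
```

Run against the table's sizes, this reproduces its verdicts: 8B models fit at Q4 and Q8 but not FP16, while 27B+ models exceed 16 GB even at Q4.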