NVIDIA GeForce RTX 3080 10GB
Ampere · RTX 30 Series · Discontinued (used market)
Specifications
| Specification | Value |
|---|---|
| VRAM | 10 GB |
| Memory type | GDDR6X |
| Bus width | 320-bit |
| Memory bandwidth | 760 GB/s |
| CUDA cores | 8,704 |
| Tensor cores | 272 |
| FP16 | 59.6 TFLOPS |
| TDP | 320 W |
| Power connector | 2×8-pin |
| Card length | 285 mm |
| Slot width | 2 slots |
| PCIe | Gen 4 x16 |
| CUDA compute capability | 8.6 |
| Max model (Q4) | ~16B parameters |
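The "Max model (Q4)" row can be sanity-checked with a quick calculation. Q4_K_M averages roughly 4.85 bits per weight (an approximation; actual GGUF file sizes vary by architecture), so a sketch that ignores KV-cache and runtime overhead gives:

```python
# Rough upper bound on model size at Q4_K_M for a given VRAM budget.
# 4.85 bits/weight is an approximate average for Q4_K_M GGUF files;
# KV cache and CUDA context overhead (not modeled here) lower the real limit.

def max_q4_params_billion(vram_gb: float, bits_per_weight: float = 4.85) -> float:
    """Approximate parameter count (in billions) whose weights fit in vram_gb."""
    bytes_per_param = bits_per_weight / 8
    return vram_gb / bytes_per_param  # GB / (bytes per param) -> billions

print(f"{max_q4_params_billion(10):.1f}B")  # prints "16.5B"
```

This lines up with the ~16B figure above; in practice the usable limit is a few billion parameters lower once the KV cache is allocated.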
Inference Benchmarks (Q4_K_M)
| Model | Throughput |
|---|---|
| Llama 3.3 8B | 72.0 tok/s |
| Qwen 3 32B | — |
| Llama 3.3 70B | — |
llama.cpp, batch_size=1, ctx=4096, single GPU.
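The benchmark conditions above can be approximated with a llama.cpp run like the following sketch. The GGUF path is a placeholder, not a file shipped with llama.cpp, and `-ngl 99` simply offloads all layers to the single GPU:

```shell
# Roughly reproduce the benchmark settings: single GPU, 4096-token context.
# model-q4_k_m.gguf is a placeholder path for your quantized model.
./llama-cli -m model-q4_k_m.gguf \
    -c 4096 \
    -ngl 99 \
    -n 128 \
    -p "Explain quantization in one paragraph."
# Generation speed (tok/s) is reported in the timing summary on exit.
```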
What Can You Run?
| Model | Q4_K_M | Q8_0 | FP16 |
|---|---|---|---|
| Llama 3.3 8B (8B) | Excellent (~72 tok/s) | Won't fit | Won't fit |
| Llama 3.3 70B (70.6B) | Won't fit | Won't fit | Won't fit |
| Qwen 3 8B (8.2B) | Usable | Won't fit | Won't fit |
| Qwen 3 32B (32.8B) | Won't fit | Won't fit | Won't fit |
| DeepSeek R1 70B (70.6B) | Won't fit | Won't fit | Won't fit |
| Mistral Nemo 12B (12.2B) | Usable | Won't fit | Won't fit |
| Phi-4 14B (14B) | Won't fit | Won't fit | Won't fit |
| Gemma 3 27B (27.4B) | Won't fit | Won't fit | Won't fit |
| Codestral 25B (25.3B) | Won't fit | Won't fit | Won't fit |
| Command R 35B (35B) | Won't fit | Won't fit | Won't fit |
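The fit/won't-fit pattern in the table follows from weight size plus a fixed runtime budget. A minimal sketch, assuming ~4.85 bits/weight for Q4_K_M, ~8.5 for Q8_0, 16 for FP16, and ~2 GB reserved for KV cache and CUDA context (all of these figures are assumptions for illustration, not data from this site):

```python
# Sketch: check whether a model's weights plus a fixed overhead budget
# fit in 10 GB of VRAM at a given quantization level.
# Bits/weight values are approximate GGUF averages (assumptions):
# Q4_K_M ~4.85, Q8_0 ~8.5, FP16 = 16.

VRAM_GB = 10.0

def fits(params_b: float, bits_per_weight: float,
         overhead_gb: float = 2.0) -> bool:
    """True if weights plus a fixed KV-cache/runtime budget fit in VRAM."""
    weight_gb = params_b * bits_per_weight / 8  # billions of params -> GB
    return weight_gb + overhead_gb <= VRAM_GB

for name, params in [("Llama 3.3 8B", 8.0), ("Phi-4 14B", 14.0)]:
    for quant, bpw in [("Q4_K_M", 4.85), ("Q8_0", 8.5), ("FP16", 16.0)]:
        verdict = "fits" if fits(params, bpw) else "won't fit"
        print(f"{name} @ {quant}: {verdict}")
```

With these assumed figures the sketch reproduces the table's pattern: an 8B model fits only at Q4_K_M, and a 14B model misses the budget even at Q4_K_M.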
Notes
The 10 GB of VRAM caps usable model size at roughly the 8B–14B class at Q4. For larger models, consider the RTX 3090 with 24 GB.