NVIDIA GeForce RTX 3090
Ampere · RTX 30 Series · Discontinued (used market)
Specifications

| Spec | Value |
|---|---|
| VRAM | 24 GB |
| Memory type | GDDR6X |
| Bus width | 384-bit |
| Memory bandwidth | 936 GB/s |
| CUDA cores | 10,496 |
| Tensor cores | 328 |
| FP16 | 71.2 TFLOPS |
| TDP | 350W |
| Power connector | 2×8-pin |
| Card length | 313 mm |
| Slot width | 3 slots |
| PCIe | Gen 4 x16 |
| CUDA compute | 8.6 |
| Max model (Q4) | ~44B parameters |
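The "max model at Q4" figure follows from a back-of-envelope calculation: divide usable VRAM by the quant's effective bits per weight. A minimal sketch, assuming ~4.5 effective bits per weight for Q4_K_M and ~1.5 GB reserved for KV cache and CUDA overhead (both assumptions, not figures from the spec sheet):

```python
# Estimate the largest model (in billions of parameters) whose Q4-quantized
# weights fit in a given amount of VRAM.
# Assumptions: ~4.5 effective bits/weight for Q4_K_M, ~1.5 GB of overhead
# for context/KV cache and the CUDA runtime.

def max_params_billions(vram_gb: float, bits_per_weight: float = 4.5,
                        overhead_gb: float = 1.5) -> float:
    usable_bytes = (vram_gb - overhead_gb) * 1e9
    return usable_bytes * 8 / bits_per_weight / 1e9

print(round(max_params_billions(24.0)))  # ~40B for a 24 GB card
```

With a smaller overhead reservation or a leaner quant, the estimate moves toward the ~44B quoted above; the exact cutoff depends on context length and quant variant.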
Inference Benchmarks (Q4_K_M)

| Model | Speed |
|---|---|
| Llama 3.3 8B | 98.0 tok/s |
| Qwen 3 32B | 35.0 tok/s |
| Llama 3.3 70B | 8.0 tok/s |
llama.cpp, batch_size=1, ctx=4096, single GPU.
What Can You Run?
| Model | Params | Q4_K_M | Q8_0 | FP16 |
|---|---|---|---|---|
| Llama 3.3 8B | 8B | Excellent (~98 tok/s) | Usable | Usable |
| Llama 3.3 70B | 70.6B | Won't fit (~8 tok/s) | Won't fit | Won't fit |
| Qwen 3 8B | 8.2B | Usable | Usable | Usable |
| Qwen 3 32B | 32.8B | Good (~35 tok/s) | Won't fit | Won't fit |
| DeepSeek R1 70B | 70.6B | Won't fit | Won't fit | Won't fit |
| Mistral Nemo 12B | 12.2B | Usable | Usable | Won't fit |
| Phi-4 14B | 14B | Usable | Usable | Won't fit |
| Gemma 3 27B | 27.4B | Usable | Won't fit | Won't fit |
| Codestral 25B | 25.3B | Usable | Won't fit | Won't fit |
| Command R 35B | 35B | Usable | Won't fit | Won't fit |
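Whether a model "fits" comes down to quantized weight size plus the KV cache at your context length. A minimal sketch of that arithmetic, assuming ~4.5 effective bits per weight for Q4_K_M and hypothetical architecture numbers for an 8B-class model (32 layers, 8 KV heads, head dim 128; these are illustrative, not taken from the table above):

```python
# Rough VRAM fit check: quantized weight size plus an FP16 KV cache.
# Layer/head counts below are assumed values for an 8B-class model.

def weights_gb(params_b: float, bits_per_weight: float = 4.5) -> float:
    # params (billions) * bits/weight, converted to gigabytes
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

def kv_cache_gb(n_layers: int = 32, n_kv_heads: int = 8, head_dim: int = 128,
                ctx: int = 4096, bytes_per_elem: int = 2) -> float:
    # factor of 2 covers both keys and values
    return 2 * n_layers * n_kv_heads * head_dim * ctx * bytes_per_elem / 1e9

total = weights_gb(8.0) + kv_cache_gb()
print(f"{total:.1f} GB")  # 5.0 GB -> fits comfortably in 24 GB
```

The same arithmetic explains the 70B rows: ~70B parameters at ~4.5 bits/weight is roughly 40 GB of weights alone, well past 24 GB, so any tokens/sec reported there imply partial CPU offload.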
Notes
The used-market darling for local LLM inference: 24 GB of VRAM at used-market prices makes this card an excellent value.