NVIDIA GeForce RTX 4060 Ti 16GB
Ada Lovelace · RTX 40 Series
Specifications
| Specification | Value |
|---|---|
| VRAM | 16 GB |
| Memory type | GDDR6 |
| Bus width | 128-bit |
| Memory bandwidth | 288 GB/s |
| CUDA cores | 4,352 |
| Tensor cores | 136 |
| FP16 | 44.1 TFLOPS |
| TDP | 165W |
| Power connector | 8-pin |
| Card length | 240 mm |
| Slot width | 2 slots |
| PCIe | Gen 4 x8 |
| CUDA compute | 8.9 |
| Max model (Q4) | ~28B parameters |
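The ~28B Q4 ceiling can be sanity-checked with a back-of-envelope VRAM estimate. This is a rough sketch: the ~4.5 bits/weight effective size for Q4_K_M and the flat overhead allowance are assumptions, not measured values.

```python
def est_vram_gb(params_b: float, bits_per_weight: float = 4.5,
                overhead_gb: float = 0.5) -> float:
    """Rough VRAM estimate: quantized weights plus a flat allowance for
    runtime buffers and KV cache (both figures are assumptions)."""
    weights_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
    return weights_gb + overhead_gb

# Weights alone for a 28B model at ~4.5 bits/weight come to ~15.75 GB,
# i.e. right at the card's 16 GB limit -- hence the ~28B Q4 ceiling.
```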
Inference Benchmarks (Q4_K_M)
| Model | Speed |
|---|---|
| Llama 3.3 8B | 42.0 tok/s |
| Qwen 3 32B | 10.0 tok/s* |
| Llama 3.3 70B | — |
llama.cpp, batch_size=1, ctx=4096, single GPU. Values marked with * are estimated.
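Single-stream decode (batch_size=1) is typically memory-bandwidth bound: each generated token streams the full set of weights once, so the 288 GB/s bus puts a hard ceiling on tok/s. A minimal sketch, again assuming ~4.5 bits/weight for Q4_K_M:

```python
def decode_ceiling_toks(bandwidth_gbs: float, params_b: float,
                        bits_per_weight: float = 4.5) -> float:
    """Upper bound on decode tok/s when every token reads all weights once."""
    model_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
    return bandwidth_gbs / model_gb

# 288 GB/s over an 8B Q4 model (~4.5 GB of weights) -> 64 tok/s ceiling;
# the measured 42 tok/s is roughly two-thirds of that bound.
```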
What Can You Run?
| Model | Q4_K_M | Q8_0 | FP16 |
|---|---|---|---|
| Llama 3.3 8B (8B) | Excellent (~42 tok/s) | Usable | Won't fit |
| Llama 3.3 70B (70.6B) | Won't fit | Won't fit | Won't fit |
| Qwen 3 8B (8.2B) | Usable | Usable | Won't fit |
| Qwen 3 32B (32.8B) | Won't fit (~10 tok/s*) | Won't fit | Won't fit |
| DeepSeek R1 70B (70.6B) | Won't fit | Won't fit | Won't fit |
| Mistral Nemo 12B (12.2B) | Usable | Usable | Won't fit |
| Phi-4 14B (14B) | Usable | Won't fit | Won't fit |
| Gemma 3 27B (27.4B) | Won't fit | Won't fit | Won't fit |
| Codestral 25B (25.3B) | Won't fit | Won't fit | Won't fit |
| Command R 35B (35B) | Won't fit | Won't fit | Won't fit |
Notes
Good VRAM for the price, but the narrow 128-bit bus limits memory bandwidth, which holds back larger models.
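The bandwidth figure follows directly from the bus width: peak bandwidth is the bus width in bytes times the per-pin data rate. GDDR6 at 18 Gbps is the commonly listed configuration for this card, but treat the data rate as an assumption.

```python
def peak_bandwidth_gbs(bus_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth = (bus width in bytes) * per-pin data rate."""
    return bus_bits / 8 * data_rate_gbps

# 128-bit bus * 18 Gbps GDDR6 -> 288 GB/s, matching the spec table;
# a 192-bit bus at the same rate would give 432 GB/s.
```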