
[SOLVED] What is the best CPU for the new RTX 3080/3090?

NoLongerHuman

Commendable
Sep 3, 2020
As the title says, what would be the best CPU for the new RTX 3080/3090? Intel doesn't yet support PCIe 4.0, so would an Intel chip become a bottleneck? What about AMD, which does support it? Will it make a noticeable difference?
 
What resolution are you playing at? That matters.
Current resolutions don't push enough data through PCIe 3.0 to saturate it.

Storage is the only place you're gonna see PCIe 4.0 make a difference right now, even with the Ampere cards. Don't buy into the marketing hype.

The Ryzen 3950X will be fine, and the Comet Lake 10600K would be fine. You can't lose with either if you want a top-tier chip and a top-tier GPU.
 
What resolution are you playing at? That matters.
Current resolutions don't push enough data through PCIe 3.0 to saturate it.

Storage is the only place you're gonna see PCIe 4.0 make a difference right now, even with the Ampere cards. Don't buy into the marketing hype.

The Ryzen 3950X will be fine, and the Comet Lake 10600K would be fine. You can't lose with either if you want a top-tier chip and a top-tier GPU.
I’m going for 4K 60
 
New 11th gen Intel Tiger Lake chips were just announced, so we'll have to wait and see.
Tiger Lake is for laptops only. The 11th-gen desktop line will be 14nm+++++ Rocket Lake.

Intel doesn't yet support PCIe 4.0, so would an Intel chip become a bottleneck?
It is highly unlikely that 3.0 x16 vs 4.0 x16 will make much of a difference on GPUs with 8+GB of VRAM. The GPUs that currently benefit the most from PCIe bandwidth are entry-level models with 4GB or less, in scenarios where the GPU runs out of VRAM and has to use some system RAM.

Since AMD CPUs usually struggle to keep up with a 2080 Ti, you will likely need to wait for Zen 3 for an AMD CPU that can close the gap at high refresh rates.

I'm going for 4K 60
For 60Hz, almost anything from the Ryzen 3600 or i5-10600 up should be fine.
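
To put rough numbers on why spilling over into system RAM hurts, here's a quick back-of-the-envelope sketch (Python; the bandwidth figures are approximate spec peaks, and the 256-bit GDDR6 card is a hypothetical example, not any specific model):

# Back-of-the-envelope look at where a GPU reads data from
# once it runs out of VRAM. Figures are approximate peak GB/s.
VRAM_GDDR6_GBPS = 448.0   # hypothetical 256-bit GDDR6 card at 14 Gbps
PCIE3_X16_GBPS = 15.75    # PCIe 3.0 x16, one direction
PCIE4_X16_GBPS = 31.5     # PCIe 4.0 x16, one direction

print(f"VRAM is ~{VRAM_GDDR6_GBPS / PCIE3_X16_GBPS:.0f}x faster than PCIe 3.0 x16")
print(f"VRAM is ~{VRAM_GDDR6_GBPS / PCIE4_X16_GBPS:.0f}x faster than PCIe 4.0 x16")
# Spilling into system RAM is a huge penalty either way;
# 4.0 just makes the penalty somewhat smaller.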
 
Solution
Tiger Lake is for laptops only. The 11th-gen desktop line will be 14nm+++++ Rocket Lake.


It is highly unlikely that 3.0 x16 vs 4.0 x16 will make much of a difference on GPUs with 8+GB of VRAM. The GPUs that currently benefit the most from PCIe bandwidth are entry-level models with 4GB or less, in scenarios where the GPU runs out of VRAM and has to use some system RAM.

Since AMD CPUs usually struggle to keep up with a 2080 Ti, you will likely need to wait for Zen 3 for an AMD CPU that can close the gap at high refresh rates.


For 60Hz, almost anything from the Ryzen 3600 or i5-10600 up should be fine.
What about PCIe 4.0? Will AMD benefit more?
 
What about PCIe 4.0? Will AMD benefit more?
It's not going to come close to filling the bandwidth of 3.0, so any difference will be very small, if any. Of course, under certain conditions there will be a few exceptions.


From just the bandwidth perspective, PCI-E 2.0 x16 was only just saturated by a 2080 Ti.
Note that 3.0 x8 is slightly less bandwidth than 2.0 x16.

https://tpucdn.com/review/nvidia-ge...ress-scaling/images/pci-express-bandwidth.png

https://tpucdn.com/review/nvidia-ge...ing/images/relative-performance_1920-1080.png
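
If you want to sanity-check those charts against the spec numbers, here's a small sketch (Python; per-lane rates are the published figures after encoding overhead, so the outputs are theoretical peaks, not measurements):

# Approximate usable PCIe bandwidth per lane, one direction, in GB/s
# (after 8b/10b encoding on 2.0 and 128b/130b on 3.0/4.0).
PER_LANE_GBPS = {"2.0": 0.5, "3.0": 0.985, "4.0": 1.969}

for gen, lanes in [("2.0", 16), ("3.0", 8), ("3.0", 16), ("4.0", 16)]:
    print(f"PCIe {gen} x{lanes}: ~{PER_LANE_GBPS[gen] * lanes:.1f} GB/s")
# 2.0 x16 at ~8.0 GB/s vs 3.0 x8 at ~7.9 GB/s -- effectively the same link.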

I'm buying a 3080 to pair with my 10600K.
 
What about PCIe 4.0? Will AMD benefit more?
Games don't saturate the PCIe lanes enough to see real-world performance gains. Now, that may change when games start optimizing for Ampere's RTX IO architecture, which lets game data be decompressed directly on the card instead of routed through the CPU... but that is still a long way off.

Right now, PCIe 4.0's real-world performance only benefits NVMe storage. So unless you NEED to do some high-rate data transfers, it's there for the future, not for right now. Who knows when games will be feeding enough data through the PCIe pipe to saturate the 3.0 standard; that's roughly 16 GB/s per direction (about 32 GB/s combined) on x16.

Consider Call of Duty: Warzone. It's a hefty game at 200GB... You don't need to decompress a whole game to play it, only parts; it would be inefficient to decompress the whole thing. However, if you are transferring terabytes of data to NVMe drives, that is different.
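
As a purely illustrative calculation (Python; the drive speeds are typical sequential ratings I'm assuming, not measurements of any particular model), bulk transfers are the one place Gen4 storage clearly shows up:

# How long moving a big chunk of game data takes at typical
# sequential speeds (illustrative round numbers, in GB/s).
drives = {
    "SATA SSD": 0.55,
    "PCIe 3.0 x4 NVMe": 3.5,
    "PCIe 4.0 x4 NVMe": 5.0,  # early Gen4 drives
}
data_gb = 200  # e.g. a Warzone-sized install

for name, gbps in drives.items():
    print(f"{name}: ~{data_gb / gbps / 60:.1f} minutes for {data_gb} GB")
# The Gen4 advantage shows up in bulk copies like this,
# not in how fast parts of a game stream in during play.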
 
Tiger Lake is for laptops only. The 11th-gen desktop line will be 14nm+++++ Rocket Lake.


It is highly unlikely that 3.0 x16 vs 4.0 x16 will make much of a difference on GPUs with 8+GB of VRAM. The GPUs that currently benefit the most from PCIe bandwidth are entry-level models with 4GB or less, in scenarios where the GPU runs out of VRAM and has to use some system RAM.

Since AMD CPUs usually struggle to keep up with a 2080 Ti, you will likely need to wait for Zen 3 for an AMD CPU that can close the gap at high refresh rates.


For 60Hz, almost anything from the Ryzen 3600 or i5-10600 up should be fine.

So you said that AMD's CPUs struggle with the 2080 Ti? What about the Intel chips? I know you said they should be fine, but I'm not looking for fine; I'm trying to build a beastly machine to run these games at 4K 60.
 
Right now, PCIe 4.0's real-world performance only benefits NVMe storage.
And GPUs when they run out of VRAM, as demonstrated by AMD's 4GB RX 5500 thanks to AMD axing its PCIe link to x8. It wouldn't be as bad on 3.0 x16, as demonstrated by the 1650 Super holding up much better in scenarios that crush the 4GB RX 5500 on 3.0 x8.

With GPU performance doubling across the board, 4GB isn't going to cut it at the entry level without 4.0 x16 for backup, and even 8GB on higher-end boards will get uncomfortably tight.
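
For a rough sense of scale (same assumed spec-sheet per-lane rates as the earlier sketch; theoretical peaks, not benchmark data), here's the fallback bandwidth each configuration leaves a 4GB card:

# Spill bandwidth available once VRAM overflows, for the link
# configurations discussed above (GB/s, one direction, spec peaks).
links = {
    "RX 5500 (x8) on PCIe 3.0": 0.985 * 8,
    "GTX 1650 Super (x16) on PCIe 3.0": 0.985 * 16,
    "RX 5500 (x8) on PCIe 4.0": 1.969 * 8,
}
for name, gbps in links.items():
    print(f"{name}: ~{gbps:.1f} GB/s")
# The x8 card on a 3.0 board gets half the fallback bandwidth of the
# x16 card, and only recovers it on a 4.0 platform.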
 
And GPUs when they run out of VRAM, as demonstrated by AMD's 4GB RX 5500 thanks to AMD axing its PCIe link to x8. It wouldn't be as bad on 3.0 x16, as demonstrated by the 1650 Super holding up much better in scenarios that crush the 4GB RX 5500 on 3.0 x8.

With GPU performance doubling across the board, 4GB isn't going to cut it at the entry level without 4.0 x16 for backup, and even 8GB on higher-end boards will get uncomfortably tight.
100% right, forgot about that! Thank you!