News: Nvidia Drops Its First PCIe Gen 4 GPU, But It's Not for Gamers

I run dual 2080Tis NVLinked together - not sure even the 2080Ti is using all 16 PCIe 3.0 lanes.
Are you running an i9 on X299 or Threadripper?
If neither, then those 2080Tis are running in x8 mode. PCIe 3.0 x8 has a theoretical max bandwidth of about 7880 MB/s.
The 2080Ti just manages to exceed that limit, so there is some performance lost, albeit minor.
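If anyone wants to sanity-check that 7880 MB/s figure, here's a quick back-of-envelope calculation of my own (it just uses the published 8 GT/s line rate and 128b/130b encoding of PCIe 3.0; nothing here comes from the linked reviews):

```
// Rough per-link-width throughput for PCIe 3.0: 8 GT/s per lane with
// 128b/130b encoding. These are theoretical maxima, before protocol overhead.
#include <cstdio>

int main() {
    const double transfers_per_sec = 8.0e9;         // 8 GT/s per lane
    const double encoding          = 128.0 / 130.0; // 128b/130b line code
    for (int lanes : {4, 8, 16}) {
        double bits_per_sec   = transfers_per_sec * encoding * lanes;
        double mbytes_per_sec = bits_per_sec / 8.0 / 1.0e6;
        printf("PCIe 3.0 x%-2d : ~%.0f MB/s\n", lanes, mbytes_per_sec);
    }
    return 0;
}
```

That lands on roughly 3940 MB/s for x4, 7880 MB/s for x8, and 15750 MB/s for x16.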

The reason for my initial question is because those 2 platforms can run SLI x16/x16 mode.

Just for reference:
https://www.techpowerup.com/review/nvidia-geforce-rtx-2080-ti-pci-express-scaling

https://www.gamersnexus.net/guides/3366-nvlink-benchmark-rtx-2080-ti-pcie-bandwidth-x16-vs-x8
 
Doesn't matter - the NVLink is key to dual 2080Ti performance. Current iteration is a Gigabyte Z390, i9-9900K at 5GHz, 32 GB DDR4 4000 (4133 XMP, never was able to get the full 4133), Samsung M.2 NVMe, Acer 4K 120/144Hz G-Sync, NH-D15 SSO, Seasonic 1000W Prime Ti.

I have the same Gigabyte 1080Ti in my old system - wasn't the VG248QE the first monitor with hardware G-Sync? I had one back in that era - it got destroyed in a move. I have a SteelSeries mouse - a Sensei from 5 or 6 years ago, and it will not die - I've been through several Razers and Logitechs, and the Sensei keeps truckin'... haven't run a sound card in ages.

Per Gamers Nexus, the difference is minimal - pretty happy with the performance I get - it will be placed next to the 8700K with its dual 1080Tis soon enough. I tend to turn shadows down a bit - I drop them until I notice a difference, then back up. Not looking for any records. Other than the minor tweak for 5GHz on all cores, I do not overclock - even the Gigabyte OC Windforce 2080s are just at Gigabyte's OC. The only metric I care about is that it does what I want it to do. With the dual 1080s I wanted at least 45fps at 4K, and with the dual 2080Tis I wanted something north of 100fps.
 
Ok, but you weren't sure whether the 2080Ti uses all 16 lanes of PCIe 3.0. I answered that in a rather roundabout way with a "Well, no (x16), but actually yes (x8)."
Sorry if I said more than necessary.
 
Gen 4 only benefits multi-GPU setups anyway, as far as GPUs are concerned. Gen 3 is still fine, except if one has a 2080Ti or better in SLI.
There are a lot of misconceptions surrounding this topic, so let's set the record straight.

First of all, when we talk about a single GPU, the bandwidth it needs to communicate with the CPU is still nowhere near PCIe 3.0 x8. You can see this with Thunderbolt 3 eGPU enclosures (Thunderbolt 3 is effectively PCIe x4) or with those M.2-to-physical-PCIe-x16 adapters. There you can find tests of a 2080Ti on PCIe x16 vs PCIe x4, and in the worst-case gaming scenario the performance loss is about 10% (and most of the time it is under 5%). If a single 2080Ti were anywhere near PCIe 3.0 x8 bandwidth in its communication with the CPU, the performance loss would be closer to 50% when running on PCIe 3.0 x4.
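To make that 50% argument concrete, here's a toy model - the per-frame transfer sizes are entirely my own made-up assumptions, not numbers from any of the eGPU tests. The point is just that if per-frame PCIe traffic really were close to x8 capacity, an x4 link would cap the frame rate at roughly half, whereas modest per-frame traffic leaves it untouched:

```
// Toy model: frame rate is the smaller of what the GPU can render and what
// the x4 link can feed. The per-frame transfer sizes are illustrative only.
#include <algorithm>
#include <cstdio>

int main() {
    const double gpu_fps     = 100.0;   // assumed frame rate with no PCIe bottleneck
    const double x4_mb_per_s = 3938.0;  // ~PCIe 3.0 x4 theoretical throughput, MB/s
    const double x8_mb_per_s = 7877.0;  // ~PCIe 3.0 x8, for reference

    for (double mb_per_frame : {5.0, 20.0, x8_mb_per_s / gpu_fps}) {
        double x4_cap = x4_mb_per_s / mb_per_frame;   // frames/s the x4 link can feed
        double fps    = std::min(gpu_fps, x4_cap);
        printf("%6.1f MB/frame -> x4 cap %7.1f fps, effective %6.1f fps (%.0f%% loss)\n",
               mb_per_frame, x4_cap, fps, 100.0 * (1.0 - fps / gpu_fps));
    }
    return 0;
}
```

With 5 or 20 MB per frame the x4 link isn't the limit at all; only when the per-frame traffic approaches x8 capacity does the x4 result collapse to about half the frame rate, which is exactly what we do not see in the real-world tests.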

In fact, the same performance penalty can be seen with lower-end cards too, so it seems a certain fixed amount of bandwidth is needed for CPU-GPU communication that has little to do with the performance of the GPU. That isn't surprising to me, given that all modern cards support dynamic parallelism: kernels running on the GPU can launch other kernels on the GPU itself (previously, only the CPU could invoke kernels, so the GPU had to go back to the CPU every time a new kernel needed to be called).
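For anyone who hasn't seen dynamic parallelism before, here's a minimal CUDA sketch (the kernel names are mine; it assumes a compute capability 3.5+ card and compilation with relocatable device code, e.g. nvcc -rdc=true dp.cu -lcudadevrt):

```
#include <cstdio>

// Child kernel: just reports who launched it.
__global__ void childKernel(int parentIdx) {
    printf("child launched by parent thread %d, child thread %d\n",
           parentIdx, threadIdx.x);
}

// Parent kernel: each thread dispatches a small child grid from the device
// itself - no round trip to the CPU to issue the launch.
__global__ void parentKernel() {
    childKernel<<<1, 4>>>(threadIdx.x);
}

int main() {
    parentKernel<<<1, 2>>>();
    cudaDeviceSynchronize();   // parent grids complete only after their children do
    return 0;
}
```

The host only issues the first launch; everything after that stays on the GPU, which is the whole point of the feature.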

If you have a multi-GPU setup and you use a bridge (NVLink, SLI), you still don't need PCIe 3.0 x8. You only need that much (and sometimes a bit more) if, and only if, you are NOT using a bridge and are therefore using PCIe bandwidth for GPU-to-GPU communication. Now, if you use a pointless artificial benchmark specifically devised to show off the bandwidth of the PCIe interface, like the one AMD misleadingly presented last year for PCIe 4.0, then sure, you will see a 'difference'.
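As an aside, you can check how GPU-to-GPU traffic is handled on your own box. This little CUDA sketch (device indices 0 and 1 are just my assumption for a two-card system) asks the runtime whether direct peer access is available between the two cards, which is what lets GPU-to-GPU copies skip a trip through host memory; it doesn't tell you whether the physical link is NVLink or the PCIe bus, only that a direct path exists:

```
// Query whether device 0 can directly access device 1's memory (peer-to-peer).
// If so, GPU-to-GPU copies avoid staging through system memory.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    if (count < 2) {
        printf("Only %d CUDA device(s) found; peer access needs two.\n", count);
        return 0;
    }

    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);    // can device 0 reach device 1 directly?
    printf("Peer access 0 -> 1: %s\n", canAccess ? "yes" : "no");

    if (canAccess) {
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);         // second argument (flags) must be 0
        printf("Peer access enabled for cudaMemcpyPeer and direct loads/stores.\n");
    }
    return 0;
}
```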