News Alder Lake Chipsets Will Not Support PCIe Gen 5.0

kal326

Distinguished
Dec 31, 2007
1,230
109
20,120
Not really surprising considering the 500-series chipsets didn't have native 4.0 lanes either. So a 600-series keeping its chipset lanes a generation behind seems like a solid rumor.
 
Reactions: Makaveli

InvalidError

Titan
Moderator
Not really surprising considering the 500-series chipsets didn't have native 4.0 lanes either. So a 600-series keeping its chipset lanes a generation behind seems like a solid rumor.
It wouldn't make sense for the chipset to support a bunch of 5.0 connectivity when it only has a 4.0x4 or 4.0x8 link to the CPU. First-gen 5.0 PHYs will likely be power hogs too, and we've all seen how happy people were with the little chipset fans on X570 boards.
 
Reactions: Makaveli
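To put rough numbers on InvalidError's point: a single downstream 5.0x4 device could already saturate even a 4.0x8 uplink by itself. A quick sketch of the math (per-lane rates from the PCIe specs; the uplink widths are the rumored ones discussed above):

```python
# Back-of-the-envelope PCIe bandwidth math, per direction.
# Gens 3.0+ all use 128b/130b encoding; rates are in GT/s per lane.
RATES_GT = {"3.0": 8, "4.0": 16, "5.0": 32}
ENCODING = 128 / 130  # usable payload fraction

def usable_gbytes(gen: str, lanes: int) -> float:
    """Usable bandwidth in GB/s for a generation and lane count."""
    return RATES_GT[gen] * ENCODING * lanes / 8  # 1 GT/s ~ 1 Gb/s per lane

print(f"4.0 x8 uplink : {usable_gbytes('4.0', 8):.1f} GB/s")  # ~15.8 GB/s
print(f"4.0 x4 uplink : {usable_gbytes('4.0', 4):.1f} GB/s")  # ~7.9 GB/s
print(f"5.0 x4 device : {usable_gbytes('5.0', 4):.1f} GB/s")  # ~15.8 GB/s
```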

kal326

Distinguished
Dec 31, 2007
1,230
109
20,120
It wouldn't make sense for the chipset to support a bunch of 5.0 connectivity when it only has a 4.0x4 or 4.0x8 link to the CPU. First-gen 5.0 PHYs will likely be power hogs too, and we've all seen how happy people were with the little chipset fans on X570 boards.
I agree that 5.0 in the chipset doesn't make sense any time soon. Outside of running tons of NVMe drives, 4.0 doesn't have a lot of uses right now, and on top of that, graphics cards aren't even hindered by 4.0.
I wouldn't be surprised if AMD stuck with 4.0 boards for Zen 4 as well. As for my X570, the fan doesn't really bother me; it seems to only come on for a moment at cold boot.
 
Reactions: Makaveli
Makes sense why Zen 4 is staying PCIe 4.0 + DDR5 in its first version.

Most consumers won't see a benefit from 5.0 at the start. Servers and high-end workstations are a different story.

And this is a good reason for me to skip first-gen Zen 4 and wait for a refresh of it.
 

vern72

Distinguished
Jul 15, 2012
322
59
18,860
And really, is there any reason to have PCIe 5.0 soon (at least in the consumer space) when PCIe 4.0-capable systems haven't even surpassed PCIe 3.0 ones yet?

I'm surprised that PCIe 5.0 came out so soon after 4.0. I thought manufacturers would jump straight to 5.0, but I've heard it's significantly more expensive to manufacture 5.0 boards.
 

InvalidError

Titan
Moderator
I'm surprised that PCIe 5.0 came out so soon after 4.0. I thought manufacturers would jump straight to 5.0, but I've heard it's significantly more expensive to manufacture 5.0 boards.
I'm not surprised that 5.0 came out so soon; the need for something faster than 4.0 became pressing in the server space the instant NVMe SSDs became a thing. As for the expense of 5.0, every time you double the speed, you cut your combined timing, noise and other margins in half, which makes the spec that much more difficult to meet. What does surprise me is that the PCI-SIG has managed to increase per-lane bandwidth by nearly 13X raw (close to 16X usable) without changing the connectors.
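For reference on that last point, here is a sketch of how per-lane bandwidth has scaled across generations over the same connector; the exact multiplier depends on whether you count the raw signaling rate or usable bandwidth after the 8b/10b-to-128b/130b encoding change:

```python
# Per-lane PCIe scaling from gen 1 to gen 5 over the same connector.
# Each entry: (signaling rate in GT/s, payload fraction after line encoding)
GENS = {
    "1.0": (2.5, 8 / 10),     # 8b/10b encoding
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),  # switched to 128b/130b
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
}

for gen, (rate, enc) in GENS.items():
    print(f"PCIe {gen}: {rate * enc:5.2f} Gb/s usable per lane, per direction")

raw = 32.0 / 2.5
usable = (32.0 * 128 / 130) / (2.5 * 8 / 10)
print(f"1.0 -> 5.0: {raw:.1f}x raw, {usable:.1f}x usable")  # 12.8x raw, 15.8x usable
```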
 

watzupken

Reputable
Mar 16, 2020
1,181
663
6,070
I'm not surprised that 5.0 came out so soon; the need for something faster than 4.0 became pressing in the server space the instant NVMe SSDs became a thing. As for the expense of 5.0, every time you double the speed, you cut your combined timing, noise and other margins in half, which makes the spec that much more difficult to meet. What does surprise me is that the PCI-SIG has managed to increase per-lane bandwidth by nearly 13X raw (close to 16X usable) without changing the connectors.
Server space, I agree. But Alder Lake is meant for consumers, isn't it? Maybe Sapphire Rapids will arrive with PCIe 5.0? In the consumer space, there aren't that many users who can fully utilize the PCIe 4.0 bandwidth.
 

JayNor

Honorable
May 31, 2019
458
103
10,860
Tom Lantzsch mentioned high-end desktop chips a few times in his interview this week. The edge is becoming a first stage for processing camera streams and AI inference.

Sapphire Rapids reportedly has 80 lanes of PCIe 5.0 and Alder Lake reportedly has 16. We haven't seen whether Alder Lake has enough logic to be a CXL slave. That would be interesting.

Anyway ... the 16 lanes of PCIe 5.0 on Alder Lake-S carry double the data rate of PCIe 4.0, and the 8 Gracemont cores have special instructions to support I/O. Those cores are also being used in the Grand Ridge chips.
 

JayNor

Honorable
May 31, 2019
458
103
10,860
Intel has the DDIO feature, which lets PCIe devices send data straight into the L3 cache shared by all cores instead of bouncing it through main memory.

Having the 16 PCIe 5.0 lanes connected directly to the CPU makes sense ... just as the 20 PCIe 4.0 lanes on TGL-H and Rocket Lake are. That gives them all direct-to-L3 DDIO access.

The PCH has the 8-lane DMI bottleneck.
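To attach numbers to that bottleneck (a sketch; DMI runs at PCIe-equivalent per-lane rates, so the same math applies):

```python
# DMI link ceilings at PCIe-equivalent rates, per direction, in GB/s.
LANE_GBPS = {"3.0": 8 * 128 / 130 / 8, "4.0": 16 * 128 / 130 / 8}

for gen, lanes in [("3.0", 4), ("3.0", 8), ("4.0", 8)]:
    print(f"DMI {gen} x{lanes}: {LANE_GBPS[gen] * lanes:.2f} GB/s")
# DMI 3.0 x4: ~3.94, DMI 3.0 x8: ~7.88, DMI 4.0 x8: ~15.75 GB/s
# A single 3.0x4 NVMe drive (~3.94 GB/s) can fill the 3.0 x4 link by itself.
```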
 

JayNor

Honorable
May 31, 2019
458
103
10,860
PCIe 5.0 runs at 32 GT/s, so roughly 32 Gb/s per lane in each direction; lanes are bidirectional.

SMPTE ST 2083 (24G-SDI): 24 Gbit/s, enough for 4K (2160p) at 120 Hz or 8K at 60 Hz.

So it looks like someone wanting to capture 16 high-res camera streams could make use of the 16 lanes of PCIe 5.0 on Alder Lake.

Makes sense to me that IOTG would be interested in PCIe 5.0, even if consumers don't have a use for this kind of performance.
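A quick sanity check of that stream math (a sketch; the 24G-SDI payload is the SMPTE figure quoted above, the lane rate is from the PCIe spec):

```python
# How many 24G-SDI camera feeds fit on Alder Lake's 16 PCIe 5.0 lanes?
LANE_GBIT = 32 * 128 / 130  # ~31.5 Gb/s usable per 5.0 lane, per direction
SDI_24G = 24                # Gb/s per SMPTE ST 2083 (24G-SDI) stream

per_lane = int(LANE_GBIT // SDI_24G)  # exactly 1 stream fits per lane
print(f"{per_lane} stream per lane -> {16 * per_lane} streams over 16 lanes")
```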
 

InvalidError

Titan
Moderator
Server space, I agree. But Alder Lake is meant for consumers, isn't it? Maybe Sapphire Rapids will arrive with PCIe 5.0? In the consumer space, there aren't that many users who can fully utilize the PCIe 4.0 bandwidth.
In the consumer space, PCIe 5.0x16 will be great for keeping 4GB GPUs viable. That is at least one possible use once compatible GPUs become available.

The PCH has the 8-lane DMI bottleneck.
In the 500-series, only the Z590 has x8; the others are still on 3.0x4. Intel may do the same on the 600-series, with B/H only getting 4.0x4.
 

InvalidError

Titan
Moderator
That's not really a problem considering most of that bandwidth isn't even used most of the time.
If you have a 10th-gen CPU, the only NVMe slots you can use are on the chipset, and a single 3.0x4 NVMe drive can already use close to 100% of DMI 3.0x4's bandwidth. I just benchmarked my NVMe SSD (SN750) and got 3.4 GB/s read and 2.7 GB/s write. Add any other high-bandwidth peripheral and you end up with a potential chipset I/O bottleneck whenever both get used at the same time.

I don't particularly like the idea of a single device potentially hogging all of the chipset bandwidth, especially when that device is primary storage, which gets used on a regular basis.
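Those benchmark figures against the theoretical DMI 3.0x4 ceiling (a sketch using only the numbers quoted in this post):

```python
# Measured SN750 throughput vs. the theoretical DMI 3.0 x4 ceiling.
DMI_3_X4 = 4 * 8 * 128 / 130 / 8  # ~3.94 GB/s per direction
read_gbs, write_gbs = 3.4, 2.7    # benchmark numbers from the post above

print(f"read : {read_gbs / DMI_3_X4:.0%} of the link")   # ~86%
print(f"write: {write_gbs / DMI_3_X4:.0%} of the link")  # ~69%
```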
 
If you have a 10th-gen CPU, the only NVMe slots you can use are on the chipset, and a single 3.0x4 NVMe drive can already use close to 100% of DMI 3.0x4's bandwidth. I just benchmarked my NVMe SSD (SN750) and got 3.4 GB/s read and 2.7 GB/s write. Add any other high-bandwidth peripheral and you end up with a potential chipset I/O bottleneck whenever both get used at the same time.

I don't particularly like the idea of a single device potentially hogging all of the chipset bandwidth, especially when that device is primary storage, which gets used on a regular basis.
I would imagine this would only be a problem if any of the peripherals connected to the chipset have to pass data to RAM or the CPU. If an NVMe drive is transferring to another NVMe drive, for example, I'm pretty sure a DMA transfer is set up and the two just talk to each other without bothering anything else in the system.

Also, scenarios with long sustained transfers are rare for the average Joe.
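For context on the drive-to-drive case: absent peer-to-peer DMA support, an ordinary OS-level copy bounces every byte through a buffer in host RAM, so both halves of the transfer cross the chipset uplink. A minimal sketch of what a user-space copy loop actually does (each SSD only ever DMAs to or from that RAM buffer):

```python
# A plain file copy: data is read into host RAM, then written back out,
# so a chipset-drive-to-chipset-drive copy crosses the DMI link twice.
def copy_file(src: str, dst: str, chunk: int = 1 << 20) -> None:
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while buf := fin.read(chunk):  # source SSD DMAs into a RAM buffer
            fout.write(buf)            # destination SSD DMAs out of that buffer
```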
 

InvalidError

Titan
Moderator
Also, scenarios with long sustained transfers are rare for the average Joe.
They don't have to be long or sustained; they only need to coincide to cause hiccups, and the likelihood of enough coincidental traffic is much higher when a single device can achieve 100% bus utilization.

NVMe SSDs have no concept of file systems; they'd need close CPU supervision to handle the scatter-gather between the two drives for a master-to-master direct copy, and you'd need SSDs that support SGLs in the first place.