News: If you think PCIe 5.0 runs hot, wait till you see PCIe 6.0's new thermal throttling technique

Consider that PCIe 6.0 runs at the same frequency as PCIe 5.0 and the main source of additional power is its encoding. Unlike I/O, the logic implementing that can potentially improve in energy efficiency with more advanced process nodes. It should absolutely be an efficiency win to replace a PCIe 5.0 link with a PCIe 6.0 link that's half the width.
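A quick back-of-the-envelope on the bandwidth side (the FLIT efficiency number is my own rough approximation), just to show that halving the width is roughly a wash on throughput:

Code:
# Rough per-lane, per-direction bandwidth, overheads approximated
gen5_lane = 32 * (128 / 130) / 8   # PCIe 5.0: 32 GT/s NRZ, 128b/130b -> ~3.9 GB/s
gen6_lane = 64 * (242 / 256) / 8   # PCIe 6.0: 64 GT/s PAM4, FLIT mode -> ~7.6 GB/s

print(round(gen5_lane * 16, 1))    # PCIe 5.0 x16: ~63.0 GB/s
print(round(gen6_lane * 8, 1))     # PCIe 6.0 x8:  ~60.5 GB/s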

Another argument for PCIe 6.0 to reach consumers is its support by CXL 3.0. If CXL ever trickles down (and signs are that it will), then I'd say PCIe 6.0 is likely to follow.

Also, to the extent that PCIe 6.0 controllers are burning lots of power, consider that they're potentially talking about server CPUs implementing 128+ lanes, which is on a very different scale than what any consumer CPUs implement.
As long as the respective IO die design can be reused I don't think we'll see PCIe 6.0 until consumer CXL is really a thing. It just doesn't seem worth the design cost for a platform that doesn't have an actual use for it.
I'm not sure what that's based on, but let's wait and see how much cost it adds to server boards.
Since the board costs should be largely identical to PCIe 5.0 implementation I don't see where large amounts of extra cost would come from either.
 
BTW, note that AMD put their iGPU in the I/O die. So, each time they want to update that, it will mean a new I/O die.
Yup, and I'm pretty sure that's the reason every leak so far indicates no GPU IP changes for desktop. Until/unless AMD is able to decouple GPU from the rest I don't think they'll change anything there before a full IO die redesign.
 
I mean... PCIe standards tend to roll out there first because they need the performance and can bear the cost, but so far they've always made their way down to consumer-grade PCs.

Early PCIe 4.0 chipsets needed active cooling, which got solved, but PCIe 4.0 SSDs got a reputation for running warm and PCIe 5.0 drives have been worse, and there hasn't been a lot of progress on that one. I don't think most gamers are worried they're gonna have PCIe thermal throttling in 2024 on existing hardware, but there's a worry (some valid and some misplaced) that if things keep going the way they seem to be going then PCIe implementations are going to be an increasing problem.
Just imagine what PCIe 6.0 is going to be like. I think it's something that would be better put off until they have this problem taken care of, because as fast as these drives are, the rest of the system will still bottleneck them. Since that cancels out any real speed advantage, the temperature disadvantage becomes a disqualifying aspect of these new drives.
 
As long as the respective IO die design can be reused I don't think we'll see PCIe 6.0 until consumer CXL is really a thing. It just doesn't seem worth the design cost for a platform that doesn't have an actual use for it.

Since the board costs should be largely identical to PCIe 5.0 implementation I don't see where large amounts of extra cost would come from either.
Yeah, it's like, I see a lot of people whining about the lack of USB4 on most X670(E) motherboards, but they somehow haven't realised that putting USB4 on them, when there are hardly any USB4 devices out there, would just drive up prices for nothing.

Like, if I wanted USB4 and didn't want to get completely fleeced, I'd have to opt for something like an ASRock X670E Taichi, which costs over $400. OTOH, if I were willing to "make do" with USB3.2, I could get something like an ASRock X670E PG Lightning or X670E Pro RS. Those motherboards cost "only" $239 and $220, respectively. Still, I have a hard time believing a PCIe USB4 expansion card needs to cost that $170-$180 difference, and since hardly anyone seems to use expansion cards anymore (I can't figure out why not), it's rare to find someone who doesn't have a spare slot to use. That way, nobody's being forced to pay for a new and better standard that they literally won't be able to use yet. Right now, USB4 expansion cards are extremely expensive (like, over $300 CAD), but there's only one on the market that I could find. In a few years, when USB4 is somewhat less obscure, I expect they'll cost about a quarter of that, probably around $75 CAD.

I actually do use my expansion slots when I need to. For example, AM5 boards seem to have a maximum of only 4 on-board SATA ports and since I already have six non-external HDDs, I bought one of these:
[Image: the 16-port PCIe SATA adapter card]

The best thing is that it'll still be usable with many platforms to come because PCIe isn't going anywhere and PCIe3 x1 is more than enough to handle SATA devices (and I would only use two at a time at the most if I were transferring files from one to another). This ensures that my drives will always be usable until they're dead, even if I get a tonne more and I won't have a mess of hard drive enclosures with the plugged-in power cords that they require (which can become a rat's nest very quickly).
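Quick sanity check on that, assuming roughly 280 MB/s per HDD (my own ballpark figure):

Code:
# PCIe 3.0 x1 vs. a couple of HDDs going at once (numbers approximate)
pcie3_x1 = 8 * (128 / 130) / 8 * 1000   # ~985 MB/s usable
hdd = 280                               # assumed sequential speed per drive, MB/s
busy = 2                                # my worst case: copying disk-to-disk
print(round(pcie3_x1), hdd * busy)      # 985 vs 560 -> plenty of headroom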

My PC case is a Server Super Tower with literally twelve drive bays (7×5¼", 5×3½"), so I figure the motherboard should be able to support the functionality of the case, just as the case supports the functionality of the motherboard. If I ever fill the thing with HDDs or optical drives, this will be a literal godsend.

Ultimately, is it overkill? Yup.
Ultimately, do I care that it's overkill? Nope.

Sure, I could've purchased a 4-port, 8-port or 12-port card, but this one cost me less than $40 CAD, so I wasn't going to save much (I think the 4-port was $20). There was even a 20-port model, but that thing cost over $70. I think I did OK, and if I ever want a USB4 controller card, I'll have space for that too on my AM4 X570 board. 😉🤣
 
Like, if I wanted USB4 and didn't want to get completely fleeced, I'd have to opt for something like an ASRock X670E Taichi, which costs over $400. OTOH, if I were willing to "make do" with USB3.2, I could get something like an ASRock X670E PG Lightning or X670E Pro RS. Those motherboards cost "only" $239 and $220, respectively. Still, I have a hard time believing a PCIe USB4 expansion card needs to cost that $170-$180 difference, and since hardly anyone seems to use expansion cards anymore (I can't figure out why not), it's rare to find someone who doesn't have a spare slot to use. That way, nobody's being forced to pay for a new and better standard that they literally won't be able to use yet. Right now, USB4 expansion cards are extremely expensive (like, over $300 CAD), but there's only one on the market that I could find. In a few years, when USB4 is somewhat less obscure, I expect they'll cost about a quarter of that, probably around $75 CAD.
If you wanted USB4 support in card form, a TB4 card would most likely be the way to go. You can usually even use them on unsupported motherboards, although you lose PCIe hot swap. So technically it's already possible to add it for less money.
I actually do use my expansion slots when I need to. For example, AM5 boards seem to have a maximum of only 4 on-board SATA ports and since I already have six non-external HDDs, I bought one of these:
Lack of expansion slots on modern motherboards is actually why I was agonizing over building a new server box until Asus released the Pro WS W680-ACE. So I have 7x M.2 drives, 5x HDDs and a dual-port 10Gb card in there, with 3 SATA ports and one PCIe 5.0 slot (x8) still open. Other than the two PCIe 5.0 slots sitting too close together to use a non-water-cooled video card and still use the slot below it, it has the best IO layout of any non-high-end workstation/server board I've seen. Then there's the irony of the board also being cheaper than anything else with equivalent IO/features on the consumer side, by a lot. I really wish motherboard manufacturers in general did a better job of maximizing IO on modern boards, as there's generally plenty of lanes to do it with.

At the consumer level, the only way I'd be excited about PCIe 6.0 is if they were going to use it as the DMI link, so we'd have something like PCIe 6.0 x4 connecting the CPU to the chipset. That, to me, would be somewhat of a game changer, because it would double the bandwidth available and give a lot more IO flexibility.
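For a rough sense of what a x4 uplink buys per generation (encoding overheads approximated, and I believe the current AM5 uplink is PCIe 4.0 x4):

Code:
def x4_gb_s(gt_s, efficiency):
    # x4 link, per direction, in GB/s
    return gt_s * efficiency / 8 * 4

print(round(x4_gb_s(16, 128 / 130), 1))   # PCIe 4.0 x4: ~7.9
print(round(x4_gb_s(32, 128 / 130), 1))   # PCIe 5.0 x4: ~15.8
print(round(x4_gb_s(64, 242 / 256), 1))   # PCIe 6.0 x4: ~30.3 (FLIT efficiency approximated)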
 
I actually do use my expansion slots when I need to. For example, AM5 boards seem to have a maximum of only 4 on-board SATA ports and since I already have six non-external HDDs, I bought one of these:
[Image: the 16-port PCIe SATA adapter card]
OMG. What kind of controller does it have? Looks like it gets hot?

The best thing is that it'll still be usable with many platforms to come because PCIe isn't going anywhere and PCIe3 x1 is more than enough to handle SATA devices
Heh, depending on which controller chip it uses, that might actually limit its longevity. I'd opt for one of the standard SATA controllers you tend to find on motherboards, like JMicron, since you can bet its driver will be well-tested and bundled with all upcoming OS releases for many years.

(and I would only use two at a time at the most if I were transferring files from one to another). This ensures that my drives will always be usable until they're dead, even if I get a tonne more and I won't have a mess of hard drive enclosures with the plugged-in power cords that they require (which can become a rat's nest very quickly).
If I were actually using 16 HDDs, I'd probably want more than PCIe 3.0 x1 connectivity! Modern HDDs can exceed 300 MB/s each, so that's only enough for a 4-disk RAID-5 not to bottleneck. At x1, it should at least be PCIe 4.0, but more common would be to find PCIe 3.0 x4 cards.
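Rough math behind that, with the usable link speeds approximated:

Code:
links = {"PCIe 3.0 x1": 985, "PCIe 4.0 x1": 1970, "PCIe 3.0 x4": 3940}  # MB/s, approx
hdd = 300                                                               # MB/s per drive
for name, bw in links.items():
    print(name, "->", bw // hdd, "drives streaming flat-out")
# PCIe 3.0 x1 -> 3, PCIe 4.0 x1 -> 6, PCIe 3.0 x4 -> 13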

Ultimately, do I care that it's overkill? Nope.
I gave you some reasons: driver stability, driver OS-support, heat, and power.
 
Consider that PCIe 6.0 runs at the same frequency as PCIe 5.0 and the main source of additional power is its encoding. Unlike I/O, the logic implementing that can potentially improve in energy efficiency with more advanced process nodes. It should absolutely be an efficiency win to replace a PCIe 5.0 link with a PCIe 6.0 link that's half the width.

Another argument for PCIe 6.0 to reach consumers is its support by CXL 3.0. If CXL ever trickles down (and signs are that it will), then I'd say PCIe 6.0 is likely to follow.

Also, to the extent that PCIe 6.0 controllers are burning lots of power, consider that they're potentially talking about server CPUs implementing 128+ lanes, which is on a very different scale than what any consumer CPUs implement.


I'm not sure what that's based on, but let's wait and see how much cost it adds to server boards.
Not directed at you, but I'm truly curious: are these PCIe 6.0 drives truly less efficient? Using more power doesn't necessarily mean they're less efficient; if they finish the task in half the time, they might end up using less energy overall. I have no idea, just wondering if someone actually does, because everyone keeps talking about efficiency with no real data.
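To illustrate what I'm asking, with completely made-up numbers:

Code:
# energy = power x time, so a hotter-but-faster link can still win per job
job_gb = 500                          # hypothetical transfer size
gen5ish = {"gb_s": 14, "watts": 8}    # hypothetical slower, cooler link
gen6ish = {"gb_s": 28, "watts": 12}   # hypothetical faster, hotter link

for name, link in (("gen5-ish", gen5ish), ("gen6-ish", gen6ish)):
    seconds = job_gb / link["gb_s"]
    print(name, round(seconds), "s,", round(seconds * link["watts"]), "J")
# The faster link draws more watts but spends fewer joules on the same job.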
 
Not directed at you, but I'm truly curious: are these PCIe 6.0 drives truly less efficient? Using more power doesn't necessarily mean they're less efficient; if they finish the task in half the time, they might end up using less energy overall. I have no idea, just wondering if someone actually does, because everyone keeps talking about efficiency with no real data.
When it comes to enterprise workloads, efficiency is most often a driver of increased rack density. So while replacing old hardware could mean a massive footprint and power decrease for smaller businesses, for bigger businesses it just means more compute density. PCIe 6.0 is also necessary to drive 800GbE NICs, as right now they're limited to 400GbE by the PCIe 5.0 interface, and interconnects are very rarely not doing work.
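Back-of-the-envelope on the NIC point, with usable x16 bandwidth approximated:

Code:
pcie5_x16 = 32 * (128 / 130) / 8 * 16   # ~63 GB/s usable
pcie6_x16 = 64 * (242 / 256) / 8 * 16   # ~121 GB/s usable (FLIT efficiency approximated)
print(round(pcie5_x16), round(pcie6_x16))
for name, gbit in (("400GbE", 400), ("800GbE", 800)):
    print(name, "needs ~", gbit / 8, "GB/s")
# 400GbE (~50 GB/s) fits in PCIe 5.0 x16 (~63); 800GbE (~100 GB/s) needs 6.0 x16 (~121)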

I think the most likely real-world application for what this article is talking about is interconnects. While they're always working, they aren't necessarily always working at maximum bandwidth, which would allow for the shifting described here, so long as it happened quickly enough.

As it is I tend to agree with @bit_user that PCIe 6.0 should be more efficient than PCIe 5.0 in terms of like for like bandwidth, but at worst it should be equal.
 
Are these PCIe 6.0 drives truly less efficient? Using more power doesn't necessarily mean they're less efficient; if they finish the task in half the time, they might end up using less energy overall.
I'm not sure how efficiency compares between 5.0 and 6.0, but the point certainly isn't to finish the same work in less time. The main reason to use higher speeds is that workload sizes keep increasing. Server CPUs are getting more cores, and that means more processes that all need to access networking, storage, etc. Also, compute accelerators are getting faster and require more data.

BTW, I think it's notable when laptop versions of the same CPU generation use an older version of PCIe, as with Strix Point, which AMD said uses PCIe 4.0 for power reasons.