As long as the respective IO die design can be reused I don't think we'll see PCIe 6.0 until consumer CXL is really a thing. It just doesn't seem worth the design cost for a platform that doesn't have an actual use for it.

Consider that PCIe 6.0 runs at the same frequency as PCIe 5.0 and the main source of additional power is its encoding. Unlike I/O, the logic implementing that can potentially improve in energy efficiency with more advanced process nodes. It should absolutely be an efficiency win to replace a PCIe 5.0 link with a PCIe 6.0 link that's half the width.
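To put rough numbers on that last point, here's a back-of-the-envelope sketch. The per-lane rates are the commonly quoted transfer rates, and PCIe 6.0's FLIT/FEC overhead is ignored, so treat the output as ballpark figures only:

```python
# Back-of-the-envelope per-lane PCIe bandwidth, per direction.
# Rates are the commonly quoted transfer rates; PCIe 6.0's FLIT/FEC
# overhead is ignored, so these are ballpark numbers only.
RATES_GTPS = {
    "3.0": 8.0,
    "4.0": 16.0,
    "5.0": 32.0,   # NRZ, 128b/130b encoding
    "6.0": 64.0,   # PAM4 at the same ~16 GHz Nyquist rate as 5.0
}

def lane_gbs(gen: str) -> float:
    """Approximate usable GB/s per lane (bits -> bytes, minus line encoding)."""
    encoding = 128 / 130 if gen != "6.0" else 1.0  # 6.0 drops 128b/130b for FLIT mode
    return RATES_GTPS[gen] * encoding / 8.0

for gen, lanes in [("5.0", 4), ("6.0", 2)]:
    print(f"PCIe {gen} x{lanes}: ~{lane_gbs(gen) * lanes:.1f} GB/s")
# PCIe 5.0 x4: ~15.8 GB/s
# PCIe 6.0 x2: ~16.0 GB/s  -> roughly the same bandwidth from half the lanes
```

So a Gen6 x2 link lands in the same ballpark as a Gen5 x4 link, which is the "same bandwidth from half the lanes" trade being described.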
Another argument for PCIe 6.0 to reach consumers is its support by CXL 3.0. If CXL ever trickles down (and signs are that it will), then I'd say PCIe 6.0 is likely to follow.
Also, to the extent that PCIe 6.0 controllers are burning lots of power, consider that they're potentially talking about server CPUs implementing 128+ lanes, which is on a very different scale than what any consumer CPU implements.
Since the board costs should be largely identical to PCIe 5.0 implementation I don't see where large amounts of extra cost would come from either.

I'm not sure what that's based on, but let's wait and see how much cost it adds to server boards.
If/when PCIe 6.0 reaches consumers, I think there will have been multiple I/O die redesigns from both Intel + AMD, as well as new sockets.
Yup, and I'm pretty sure that's the reason every leak so far indicates no GPU IP changes for desktop. Until/unless AMD is able to decouple GPU from the rest I don't think they'll change anything there before a full IO die redesign.

BTW, note that AMD put their iGPU in the I/O die. So, each time they want to update that will mean a new I/O die.
Just imagine what PCIe6 is going to be like. I think that it's something that would be better to put off until they have this problem taken care of because, as fast as these are, the system will still bottleneck them. Since this mitigates any real speed advantage, it makes the temperature disadvantage a disqualifying aspect of these new drives.

I mean... PCIe standards tend to roll out in servers first, because they need the performance and can bear the cost, but so far they've always made their way down to consumer-grade PCs.
Early PCIe 4.0 chipsets needed active cooling, which got solved, but PCIe 4.0 SSDs got a reputation for running warm, PCIe 5.0 drives have been worse, and there hasn't been a lot of progress on that front. I don't think most gamers are worried they're gonna have PCIe thermal throttling in 2024 on existing hardware, but there's a worry (some of it valid, some misplaced) that if things keep going the way they seem to be going, PCIe implementations are going to be an increasing problem.
Yeah, it's like, I see a lot of people whining about the lack of PCIe4 on most X670(E) motherboards but they somehow haven't realised that putting USB4 on them when there are hardly any USB4 devices out there would just drive up prices for nothing.
If you wanted USB4 support in card form, getting a TB4 card would most likely be the way to go. You can usually even use them in unsupported motherboards, but you lose PCIe hot swap. So technically it's already possible to add it for less money.

Like, if I wanted USB4 and didn't want to get completely fleeced, I'd have to opt for something like an ASRock X670E Taichi, which costs over $400. OTOH, if I were just willing to "make do" with USB3.2, then I'd be able to get something like an ASRock X670E PG Lightning or X670E RS Pro. Those motherboards cost "only" $239 and $220, respectively. Still, I have a hard time believing that a PCIe USB4 expansion card will cost $170-$180, and since nobody seems to use expansion cards anymore (and I can't figure out why not), it's rare that you're going to find someone who doesn't have a spare slot to use. That way, nobody's being forced to pay for a new and better standard that they literally won't be able to use. Right now, USB4 expansion cards are extremely expensive (like, over $300 CAD), but there's only one on the market that I could find. In a few years, when USB4 is somewhat less obscure, I expect they'd cost about a quarter of that price, probably around $75 CAD.
Lack of expansion slots on modern motherboards is actually why I was agonizing over building a new server box until Asus released the Pro WS W680-ACE. So I have 7x M.2 drives, 5x HDDs and a dual-port 10Gb card, with 3 SATA ports and one PCIe 5.0 (x8) slot still open. Other than the two PCIe 5.0 slots being too close together to use a non-water-cooled video card and still use the slot below, it has the best IO layout of any non-high-end workstation/server board I've seen. Then there's the irony of the board also being cheaper than anything else with equivalent IO/features on the consumer side, by a lot. I really wish motherboard manufacturers in general did a better job of maximizing IO on modern boards, as there are generally plenty of lanes to do it with.

I actually do use my expansion slots when I need to. For example, AM5 boards seem to have a maximum of only 4 on-board SATA ports and since I already have six non-external HDDs, I bought one of these:
OMG. What kind of controller does it have? Looks like it gets hot?
Heh, depending on which controller chip it uses, that might actually limit its longevity. I'd opt for one of the standard SATA controllers you tend to find on motherboards, like JMicron, since you can bet its driver will be well-tested and bundled with all upcoming OS releases for many years.

The best thing is that it'll still be usable with many platforms to come because PCIe isn't going anywhere and PCIe3 x1 is more than enough to handle SATA devices
If I were actually using 16 HDDs, I'd probably want more than PCIe 3.0 x1 connectivity! Modern HDDs can exceed 300 MB/s each, so that's only enough for a 4-disk RAID-5 not to bottleneck. At x1, it should at least be PCIe 4.0, but more common would be to find PCIe 3.0 x4 cards.

(and I would only use two at a time at the most if I were transferring files from one to another). This ensures that my drives will always be usable until they're dead, even if I get a tonne more and I won't have a mess of hard drive enclosures with the plugged-in power cords that they require (which can become a rat's nest very quickly).
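To sanity-check the x1 point above with rough numbers (the ~985 MB/s link figure and ~300 MB/s per drive are assumed ballpark rates, not measurements):

```python
# Rough check: how many ~300 MB/s HDDs can stream flat-out through a
# PCIe 3.0 x1 link? Both figures are assumed ballpark rates.
link_mbs = 8_000 * (128 / 130) / 8   # PCIe 3.0 x1: ~985 MB/s usable
hdd_mbs = 300                        # fast modern HDD, outer tracks

print(f"PCIe 3.0 x1: ~{link_mbs:.0f} MB/s")
print(f"Drives at full speed before the link saturates: ~{link_mbs / hdd_mbs:.1f}")
# ~985 MB/s and ~3.3 drives -- fine for a boot/backup box, but a 16-port
# card on Gen3 x1 only works out if most of the disks sit idle.
```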
I gave you some reasons: driver stability, driver OS-support, heat, and power.

Ultimately, do I care that it's overkill? Nope.
Not directed at you, but I'm truly curious: are these PCIe 6 drives truly less efficient? Because using more power doesn't really mean they are less efficient. If they finish the task in half the time then they might end up using less power. I have no idea, just wondering if someone actually does, because everyone keeps talking about efficiency with no real data.
When it comes to enterprise workloads, efficiency is most often a driver of increased rack density. So while replacing old hardware could mean a massive decrease in footprint and power for smaller businesses, for bigger businesses it just means more compute density. PCIe 6.0 is also necessary to drive 800GbE NICs; right now they're limited to 400GbE due to the PCIe 5.0 interface, and very rarely are interconnects not doing work.
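A rough sketch of the link math behind the 800GbE point (raw rates minus line encoding only; real-world protocol overhead would shave these numbers further):

```python
# Why 800GbE doesn't fit behind PCIe 5.0 x16: raw slot bandwidth minus line
# encoding only (DLLP/TLP/FLIT overhead ignored, so real numbers are lower).
def slot_gbps(gt_per_s: float, lanes: int, encoding: float) -> float:
    return gt_per_s * lanes * encoding   # Gb/s, per direction

links = {
    "PCIe 5.0 x16": slot_gbps(32, 16, 128 / 130),  # ~504 Gb/s
    "PCIe 6.0 x16": slot_gbps(64, 16, 1.0),        # ~1024 Gb/s raw
}
for name, bw in links.items():
    print(f"{name}: ~{bw:.0f} Gb/s | 400GbE: {'fits' if bw > 400 else 'no'} | "
          f"800GbE: {'fits' if bw > 800 else 'no'}")
# PCIe 5.0 x16: ~504 Gb/s | 400GbE: fits | 800GbE: no
# PCIe 6.0 x16: ~1024 Gb/s | 400GbE: fits | 800GbE: fits
```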
Yep, I get that, but I was talking about the perspective of the end user. You are correct, efficiency for big business is measured completely differently.
I'm not sure how efficiency compares between 5.0 and 6.0, but the point certainly isn't to finish the same work in less time. The main reason to use higher speeds is that the workload size is increasing. Server CPUs are getting more cores, and that means more processes that all need to access networking, storage, etc. Also, compute accelerators are getting faster and require more data.
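For what it's worth, the power-vs-energy distinction the question raises is easy to put in numbers; the watt and time values below are made up purely for illustration:

```python
# Power vs. energy: a faster, hungrier link can still use fewer joules if it
# finishes sooner and then idles. All watt/second values here are invented
# for illustration only.
def energy_j(active_w: float, busy_s: float, idle_w: float, window_s: float) -> float:
    """Total energy over a fixed window: active burst, then idle for the rest."""
    return active_w * busy_s + idle_w * (window_s - busy_s)

window = 10.0  # seconds we account for
slow = energy_j(active_w=6.0, busy_s=8.0, idle_w=1.0, window_s=window)
fast = energy_j(active_w=9.0, busy_s=4.0, idle_w=1.0, window_s=window)  # 1.5x power, 2x speed

print(f"slower link: {slow:.0f} J, faster link: {fast:.0f} J")
# slower link: 50 J, faster link: 42 J -- higher peak power, lower total
# energy, provided the link really does idle (cheaply) after the burst.
```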