News: AMD's Zen 6-based desktop processors may feature up to 24 cores

I could see a 12-core CCD on the horizon, but my concern is the cost of entry. The lowest available consumer chips have launched at relatively high prices since Zen 3, and now those core counts would be higher. While I'm sure there's plenty of margin on all of these parts, keeping current pricing would mean AMD eating into it a bit, given a newer node and a larger die.

The other concern comes in at the top end, with the question of memory bandwidth. Currently they're supporting 16 cores on DDR5-5200/5600, and this would most likely go up to 24 cores on DDR5-6400 (though with clock drivers having better availability, maybe 7200). That means a 50% core count increase while adding only 14-29% more bandwidth. It's not guaranteed that this will be a problem, but it was starting to be an issue with Zen 3 (resolved by the move to DDR5 with Zen 4+).
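To put rough numbers on that, here's a back-of-the-envelope sketch (dual-channel assumed; the speeds and core counts are the rumored figures above):

```python
# Back-of-the-envelope math: dual-channel DDR5 moves 8 bytes per channel
# per transfer; speeds/core counts are the rumored figures discussed above.
def dual_channel_gb_s(mt_s: int) -> float:
    return mt_s * 8 * 2 / 1000  # MT/s * bytes * channels -> GB/s

for cores, speed in [(16, 5600), (24, 6400), (24, 7200)]:
    bw = dual_channel_gb_s(speed)
    print(f"{cores} cores @ DDR5-{speed}: {bw:.1f} GB/s total, "
          f"{bw / cores:.2f} GB/s per core")

# 16 cores @ DDR5-5600: 89.6 GB/s total, 5.60 GB/s per core
# 24 cores @ DDR5-6400: 102.4 GB/s total, 4.27 GB/s per core
# 24 cores @ DDR5-7200: 115.2 GB/s total, 4.80 GB/s per core
```

Per-core bandwidth drops noticeably even in the DDR5-7200 case.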

Overall, moving the 8-core parts to 12 would be a pretty nice MT benefit, and the lower dual-CCD part would no longer be hampered by 6-core CCDs, so that's good as well. It's just at the low and high end where questions remain.

Of course, without more PCIe lanes, or at least a PCIe 5.0 link to the chipset(s), the productivity improvements brought by higher core counts are still lacking.
 
Of course, without more PCIe lanes, or at least a PCIe 5.0 link to the chipset(s), the productivity improvements brought by higher core counts are still lacking.
This is probably the biggest sore spot in the current crop of AM5 chipsets. I can’t believe motherboard manufacturers are boasting about having three extra PCIe 4.0 M.2 slots when they are all funneled through a single PCIe 4.0 × 4 link, shared with all the SATA, USB, LAN, and Wi-Fi traffic. It’s not even unrealistic that one might occasionally saturate all of the available bandwidth and bottleneck one thing or another.

It’s the same situation with Thunderbolt 5, the only implementation being a controller with a PCIe 4.0 × 4 (nominally 64 Gb/s) host interface, which falls short of supporting even the basic 80 Gb/s of data (i.e., no video) traffic of a single Thunderbolt 5 connection.
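For a rough sense of the oversubscription (nominal link rates, ignoring protocol overhead; all three M.2 drives running flat-out is of course the worst case):

```python
# Nominal oversubscription math for a PCIe 4.0 x4 chipset uplink.
PCIE4_LANE_GB_S = 16 * (128 / 130) / 8   # ~1.97 GB/s per PCIe 4.0 lane

uplink = 4 * PCIE4_LANE_GB_S             # chipset link: ~7.88 GB/s
m2_demand = 3 * 4 * PCIE4_LANE_GB_S      # three PCIe 4.0 x4 M.2 slots, flat out
print(f"uplink {uplink:.2f} GB/s vs. M.2 demand {m2_demand:.2f} GB/s "
      f"({m2_demand / uplink:.0f}x oversubscribed, before SATA/USB/LAN)")

tb5_data = 80 / 8                        # Thunderbolt 5's 80 Gb/s of data traffic
print(f"TB5 data {tb5_data:.1f} GB/s vs. PCIe 4.0 x4 host link {uplink:.2f} GB/s")
```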
 
I could see a 12-core CCD on the horizon, but my concern is the cost of entry. The lowest available consumer chips have launched at relatively high prices since Zen 3, and now those core counts would be higher.
A couple of points:
  • Zen 5 will probably have a long production life, just as AMD is continuing to sell Zen 3 CPUs for the budget-conscious today.
  • AMD can continue to offer APUs as the lower tier of their AM5 offerings.

The other concern comes in at the top end, with the question of memory bandwidth. Currently they're supporting 16 cores on DDR5-5200/5600, and this would most likely go up to 24 cores on DDR5-6400 (though with clock drivers having better availability, maybe 7200). That means a 50% core count increase while adding only 14-29% more bandwidth. It's not guaranteed that this will be a problem, but it was starting to be an issue with Zen 3 (resolved by the move to DDR5 with Zen 4+).
Fair points, but not everything on Zen 5 is memory-bottlenecked, today. Some things that are already limited would continue to bottleneck on DDR5-7200 (or faster!) with just 16 cores, yet other things still wouldn't be memory-bottlenecked at even 24 cores on DDR5-5600.

Basically, the more memory-bound you become, the lower the value proposition. It doesn't mean the value proposition is zero, because not everything is totally memory-bottlenecked, and for some users with low-bandwidth apps, the value would be quite high.

When AMD launched Threadripper Pro, AnandTech investigated the impact of running 64 cores on 4 channels vs. 8 and found that rendering workloads tended not to be much affected by running 16 cores per channel vs. 8. In fact, some gained more performance from the non-Pro's additional clock speed than they lost from its halving of memory bandwidth!
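If memory serves, that comparison was the 64-core 3995WX (8-channel) against the 64-core 3990X (4-channel), both DDR4-3200, which works out as follows:

```python
# Cores-per-channel math for the comparison (assuming the 64-core 3995WX,
# 8-channel, vs. the 64-core 3990X, 4-channel, both DDR4-3200).
def ddr4_3200_gb_s(channels: int) -> float:
    return 3200 * 8 * channels / 1000  # GB/s

for name, ch in [("3995WX (Pro)", 8), ("3990X (non-Pro)", 4)]:
    bw = ddr4_3200_gb_s(ch)
    print(f"{name}: {64 // ch} cores/channel, {bw:.1f} GB/s total, "
          f"{bw / 64:.2f} GB/s per core")

# 3995WX (Pro): 8 cores/channel, 204.8 GB/s total, 3.20 GB/s per core
# 3990X (non-Pro): 16 cores/channel, 102.4 GB/s total, 1.60 GB/s per core
```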

Granted, that's using lower-throughput, lower-clocked Zen 2 cores, but I fully expect the same workloads aren't substantially bandwidth-limited on Zen 5, either.

Also, 3D V-Cache should help some of those workloads that are. Now, they just need to give us a 3D cache die on both CCDs!

Overall, moving the 8-core parts to 12 would be a pretty nice MT benefit,
AMD hasn't really mounted a proper response to Raptor Lake's core count increase, IMO. And we're nearing a point where Intel is rumored to be turning the dial up again. Zen 4 and DDR5 really helped, as did Zen 5's dual-decoder architecture, but AMD has lost the big lead it once had in MT performance (i.e. from the Zen 2 and early Zen 3 era) and needs to do more to stay competitive.

However, I think this really isn't even about desktop, so much as server. We'd do well to keep in mind that most things happening with CCDs are really about server CPUs, with performance desktops & laptops being downstream beneficiaries. I think AMD is really looking to scale up full-fat Zen 6 to 192 cores, and after 4 generations of the 8-core CCD, it's finally time to bump it up to the next step.

Of course, without more PCIe lanes, or at least a PCIe 5.0 link to the chipset(s), the productivity improvements brought by higher core counts are still lacking.
I see them as separate issues. Yes, the chipset link should absolutely be PCIe 5.0. I always thought that's the place that made the most sense to upgrade first. Let's hope Zen 6 will bring that to AM5, just like Zen 2 brought PCIe 4.0 to AM4.
 
This is probably the biggest sore spot in the current crop of AM5 chipsets. I can’t believe motherboard manufacturers are boasting about having three extra PCIe 4.0 M.2 slots when they are all funneled through a single PCIe 4.0 × 4 link, shared with all the SATA, USB, LAN, and Wi-Fi traffic. It’s not even unrealistic that one might occasionally saturate all of the available bandwidth and bottleneck one thing or another.
As I said above, I agree with this point. I think some people tend to exaggerate the impact, but it's totally fair - especially with the chipset daisy-chaining on their upper-end boards.

It’s the same situation with Thunderbolt 5, the only implementation being a controller with a PCIe 4.0 × 4 (nominally 64 Gb/s) host interface,
Doesn't it limit you to doing a max of 3 lanes in one direction and 1 lane in the other? PCIe is bidir, so you could handle 3 lanes of Thunderbolt downstream + 1 lane of upstream on a PCIe x4 link.
 
If the Ryzen 7 CPUs have 12 cores and more L3 cache with the Zen 6 architecture, and the Ryzen 5 CPUs have 8 cores, I hope AMD will keep the same L3 size for the Ryzen 5 CPUs as for the Ryzen 7 CPUs, like in previous generations. That way, a monster gaming CPU would be born, like the 9800X3D but with much more cache as the main improvement, and it should last a while, as the PS6 is rumored to keep the 8-core CPU design.
 
If the Ryzen 7 CPUs have 12 cores and more L3 cache with the Zen 6 architecture, and the Ryzen 5 CPUs have 8 cores, I hope AMD will keep the same L3 size for the Ryzen 5 CPUs as for the Ryzen 7 CPUs, like in previous generations. That way, a monster gaming CPU would be born, like the 9800X3D but with much more cache as the main improvement,
It's only a 50% increase per CCD, but it would be nice to get a 33% bump in core counts (i.e. 6 -> 8) and a 50% bump in L3 cache, in something like a 10600X.

Anyway, 48 MB of L3 is only half what you get with the current 9800X3D. That extra will help, but not so much.
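Spelling out the percentages (the 12-core / 48 MB Zen 6 CCD is the rumored configuration, not a confirmed one):

```python
# Rumored Zen 6 CCD (12 cores, 48 MB L3) vs. today's Zen 5 CCD (8 cores, 32 MB).
print(f"L3 per CCD: {48 / 32 - 1:.0%} increase")                   # 50%
print(f"Hypothetical 10600X: 6 -> 8 cores, {8 / 6 - 1:.0%} more")  # 33%
print(f"48 MB vs. the 9800X3D's 96 MB: {48 / 96:.0%}")             # 50%
```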

it should last a while, as the PS6 is rumored to keep the 8-core CPU design.
Really? Interesting. I was almost ready to bet the PS5 wouldn't have 8 cores, and yet I'm somehow surprised they're not going to 12. Eh, I guess it makes sense to put most of the silicon budget into the GPU/NPU and just try to offload as much as possible of the heavy compute to it. With AVX-512, the CPU cores will be a lot more powerful, anyhow.
 
However, I think this really isn't even about desktop, so much as server.
Oh, it's absolutely about the server market and not desktop; there's no doubt about that.
A couple of points:
  • Zen 5 will probably have a long production life, just as AMD is continuing to sell Zen 3 CPUs for the budget-conscious today.
  • AMD can continue to offer APUs as the lower tier of their AM5 offerings.
While I'm sure Zen 5 will be manufactured for years to come, that doesn't change its second-class-citizen status. Of course, I'm basing this on the assumption that AMD will actually meaningfully improve IPC across the board with Zen 6, unlike the Zen 5 vs. Zen 4 comparison. If it's another repeat, but with higher core counts, then I don't think it makes any difference at all.

AMD's APU strategy after Zen 3 has left a lot to be desired due to how crippled the PCIe lane count can be. While Zen 3 APUs may have been PCIe 3.0, they all had 20 usable lanes. Zen 4 APUs having 10-16 depending on SKU is just a bad place to be. While I'd rather they had the same lane configuration as Zen 3, a minimum of 12 usable lanes should be the approach if those are to truly fill the budget spot.
I see them as separate issues. Yes, the chipset link should absolutely be PCIe 5.0. I always thought that's the place that made the most sense to upgrade first. Let's hope Zen 6 will bring that to AM5, just like Zen 2 brought PCIe 4.0 to AM4.
I don't really see them as separate at the desktop level, since both largely just mean more expandability. Sure, something like 32 native lanes for slots would be great, but at the same time nothing aside from storage can saturate the bandwidth. At that point the only benefit you're getting is lane count, so the question becomes how many 16-lane devices (that need all of the lanes) are being used (or 3+ 8-lane devices, I suppose). By doubling up chipset bandwidth it would be simple to add PCIe 4.0 x4 slots without sacrificing any I/O connectivity. That's why I view them largely as two sides of the same coin for desktop products.
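The "doubling up" arithmetic, using nominal link rates (ignoring packet overhead):

```python
# A PCIe 5.0 x4 uplink nominally carries as much as two PCIe 4.0 x4 links.
GEN4_LANE = 16 * (128 / 130) / 8   # ~1.97 GB/s per PCIe 4.0 lane
GEN5_LANE = 32 * (128 / 130) / 8   # ~3.94 GB/s per PCIe 5.0 lane

print(f"PCIe 4.0 x4 uplink: {4 * GEN4_LANE:.2f} GB/s")
print(f"PCIe 5.0 x4 uplink: {4 * GEN5_LANE:.2f} GB/s "
      f"({GEN5_LANE / GEN4_LANE:.0f}x, room for extra 4.0 x4 slots)")
```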

Of course, if AMD/Intel wanted to bring back the old-style HEDT, which sat between standard desktop and enterprise, that would be the perfect opportunity to leverage both!

Oh, and just as an addition: while I very much want to see PCIe 5.0 chipset connections, AMD needs it more than Intel.
 
The discussion is interesting but way above my knowledge. May I ask an off-topic question?

Would it be very useful to add support for ECC memory to consumer CPUs, as opposed to an endless race to more cores? And maybe it's possible to do both?
 
I'm basing this on the assumption that AMD will actually meaningfully improve IPC across the board with Zen 6, unlike the Zen 5 vs. Zen 4 comparison. If it's another repeat, but with higher core counts, then I don't think it makes any difference at all.
Zen 5 did improve IPC, but not by as much as previous gens. Maybe you're thinking of some gaming IPC tests that showed negligible gains, but overall it definitely boosted IPC.

I'm sure Zen 6 will be another IPC boost, for three reasons, all based on an early interview with Mike Clark. First, he mentioned that there were things they wanted to do but couldn't, because the full Zen 5 cores were still on an N4-class node vs. the N3 node they used for Zen 5C. Second, he initially got confused and thought the two decoder blocks could team up on a single instruction stream. That doesn't seem like a mistake one would make unless a later version actually could, and he just got mixed up about which one he was talking about. Finally, there were a few other areas where it sounded like they didn't have time to pursue improvements, which it seems logical to assume they did in Zen 6.

Is it going to be a huge IPC boost? I wouldn't bet on it, but I think something on the order of a 10% gain is within the realm of possibility.

Zen 4 APUs having 10-16 depending on SKU is just a bad place to be.
Yeah, 10 is pretty poor. They ought to have at least x8 for the dGPU and x4 for an M.2 slot. I think they don't really need x16 for the dGPU. By the time someone has a graphics card with enough horsepower to be bottlenecked by that, they can probably also get a faster CPU.

Also, I assume you're not including the chipset link in that. Bare minimum should be PCIe 4.0 x8 for dGPU + a full PCIe 4.0 x4 chipset link.

Of course, if AMD/Intel wanted to bring back the old-style HEDT, which sat between standard desktop and enterprise, that would be the perfect opportunity to leverage both!
I'm okay with Xeon W 2000-tier and non-Pro Threadrippers, as long as the prices are accessible. Anyway, I'm not really one to talk, as even if they were cheaper than they are, I still probably wouldn't want something so power-hungry.
 
The discussion is interesting but way above my knowledge. May I ask an off-topic question?

Would it be very useful to add support for ECC memory to consumer CPUs, as opposed to an endless race to more cores? And maybe it's possible to do both?
AMD supports ECC memory on all of its desktop CPUs, but it's up to the motherboard to enable it.

Intel supports ECC on all 12th-14th generation xx500 and higher (and Core Ultra 245 and higher) desktop parts with integrated graphics, but it's functionally restricted by chipset.

As for usefulness, most people would probably never notice the benefit. Do I think we'd be better off if all memory was ECC? Absolutely, but it costs more to manufacture due to needing additional memory ICs on every module. Given margins in the OEM space (by far the largest memory customer), I just don't see this ever happening. This is likely why JEDEC built basic on-die error correction into the DDR5 spec to compensate for high clock speeds, rather than just saying everything needs to be proper ECC.
 
Zen 5 did improve IPC, but not by as much as previous gens. Maybe you're thinking of some gaming IPC tests that showed negligible gains, but overall it definitely boosted IPC.
That's why I said "across the board" as it was the lowest overall generational gain since AMD launched Ryzen.
Also, I assume you're not including the chipset link in that.
Yeah, I just omitted the word "usable".
I'm okay with Xeon W 2000-tier and non-Pro Threadrippers, as long as the prices are accessible.
Oh I agree, but they're not.
 
It's only a 50% increase per CCD, but it would be nice to get a 33% bump in core counts (i.e. 6 -> 8) and a 50% bump in L3 cache, in something like a 10600X.

Anyway, 48 MB of L3 is only half what you get with the current 9800X3D. That extra will help, but not so much.


Really? Interesting. I was almost ready to bet the PS5 wouldn't have 8 cores, and yet I'm somehow surprised they're not going to 12. Eh, I guess it makes sense to put most of the silicon budget into the GPU/NPU and just try to offload as much as possible of the heavy compute to it. With AVX-512, the CPU cores will be a lot more powerful, anyhow.
So far we've had a total of 96 MB of L3 cache on the 8-core X3D CPUs for 3 generations, but a 10600X3D could have at least 144 MB.
The main rumor for the PS6 CPU right now is an 8-core, 16-thread Zen 5 X3D CPU, which would be a huge upgrade from the PS5's 8-core, 16-thread Zen 2 CPU.
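For what it's worth, one way to get to "at least 144 MB" is below; it's purely speculative, assuming the stacked die grows 50% in step with the rumored 32 MB -> 48 MB base L3.

```python
# Speculative 10600X3D L3 total: rumored 48 MB base CCD L3 plus a stacked
# die scaled up 50% from today's 64 MB (both are assumptions, not confirmed).
base_l3 = 48          # MB, rumored Zen 6 CCD
stacked = 64 * 1.5    # MB, if the V-Cache die grows in proportion
print(f"{base_l3 + stacked:.0f} MB total")  # 144 MB
```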
 
AMD supports ECC memory on all of its desktop CPUs, but it's up to the motherboard to enable it.

Intel supports ECC on all 12th-14th generation xx500 and higher (and Core Ultra 245 and higher) desktop parts with integrated graphics, but it's functionally restricted by chipset.

As for usefulness, most people would probably never notice the benefit. Do I think we'd be better off if all memory was ECC? Absolutely, but it costs more to manufacture due to needing additional memory ICs on every module. Given margins in the OEM space (by far the largest memory customer), I just don't see this ever happening. This is likely why JEDEC built basic on-die error correction into the DDR5 spec to compensate for high clock speeds, rather than just saying everything needs to be proper ECC.
Oh really, ECC is already supported? Can I replace the current non-ECC memory modules with ECC ones? Hopefully they are pin-compatible.

I don't think cost would be a concern. Those who want ECC will pay. People are ready to sell their kidneys to upgrade to CPUs with maximum cores and to buy GPUs at exorbitant prices. Paying a bit extra for ECC is negligible. In fact, I checked on eBay and DDR4 ECC memory costs are similar to non-ECC. They just don't have fancy heat sinks.

Furthermore, most people change their phones every year or two, even though the old one still works perfectly. It looks like people will buy it if the manufacturers just sell it. There is so much waste and useless stuff people buy and throw away. Saving pennies on a useful feature like ECC looks like rather a bad choice.

Maybe there is a deeper reason why ECC is not implemented by default.
 
I'd still like to see more PCIe lanes. One x16 slot is fine, but having 2 or 3 x4 slots would be good as well, plus more lanes to drive the USB, SATA, and M.2 slots, so things don't have to be disabled when something else is in use. The current situation is crap.
 
Doesn't it limit you to doing a max of 3 lanes in one direction and 1 lane in the other? PCIe is bidir, so you could handle 3 lanes of Thunderbolt downstream + 1 lane of upstream on a PCIe x4 link.
Forget about 3 lanes. The limit is bad enough to hinder even baseline Thunderbolt 5 PCIe bandwidth, which is 2 lanes.
  • USB4 v2.0 Gen 4 in asymmetric mode is good for up to 13.42 GB/s in one direction and 4.47 GB/s in the other.
    • In practice, the widespread limit of PCIe MPS (Max Payload Size) to 256 bytes caps it to 12.69 GB/s.
  • USB4 v2.0 Gen 4 × 2 is good for up to 8.95 GB/s per direction.
    • A PCIe MPS of 256 bytes caps it to 8.46 GB/s.
  • PCIe 4.0 × 4 is good for up to 7.88 GB/s per direction, so it is the PCIe bandwidth bottleneck in the chain no matter what mode of operation the USB4 v2.0 Gen 4 connection is in.
For anyone to utilize even the full bandwidth of vanilla (symmetric) USB4 v2.0/Thunderbolt 5 for PCIe traffic in one direction, the host interface needs to have (extrapolated) PCIe 4.0 × 4.29. For maximum utilization of the asymmetric mode, PCIe 4.0 × 6.81. With two USB4 v2.0/Thunderbolt 5 ports, the host interface needs to be a minimum of PCIe 5.0 × 4 to not be the PCIe traffic bottleneck for any single port, assuming the controller will be oversubscribed and that typically the pair of ports won’t be operating in asymmetric mode with data flowing in the same direction simultaneously.
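Here's roughly how those lane extrapolations fall out, using the GB/s figures from the list above (small rounding differences aside):

```python
# How many nominal PCIe 4.0 lanes (~1.97 GB/s each) the USB4 v2.0 Gen 4
# figures above correspond to.
PCIE4_LANE = 16 * (128 / 130) / 8   # ~1.969 GB/s per lane

for mode, gb_s in [("Gen 4 x2 symmetric (MPS-capped)", 8.46),
                   ("Gen 4 asymmetric, fast direction", 13.42)]:
    print(f"{mode}: {gb_s} GB/s -> PCIe 4.0 x{gb_s / PCIE4_LANE:.2f}")

# Gen 4 x2 symmetric (MPS-capped): 8.46 GB/s -> PCIe 4.0 x4.30
# Gen 4 asymmetric, fast direction: 13.42 GB/s -> PCIe 4.0 x6.81
```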

ASMedia’s USB4 (Thunderbolt 4-class) controller already pairs a PCIe 4.0 × 4 host interface with its 40 Gb/s ports; by the same logic, Intel’s Thunderbolt 5 controller should have been PCIe 5.0 × 4. It makes sense that the PCIe bandwidth should double commensurate with the doubled USB bandwidth, which Intel is yet again behind on. And ideally, the host interface should be PCIe 5.0 × 8, given that Thunderbolt 5 isn’t a straightforward doubling of its predecessor’s bandwidth capabilities, but more.

It’s a legit use case. Content creators are often either ingesting content or dumping content into external devices, stressing maximum unidirectional bandwidth.
 
Oh really, ECC is already supported? Can I replace the current non-ECC memory modules with ECC ones? Hopefully they are pin-compatible.
They're pin-compatible, but as I said, it needs to be supported by the motherboard you're using.
I don't think cost would be a concern. Those who want ECC will pay. People are ready to sell their kidneys to upgrade to CPUs with maximum cores and to buy GPUs at exorbitant prices. Paying a bit extra for ECC is negligible.
I have 2x 32GB DDR5-5600 ECC UDIMMs that I paid what turns out to be a great price of ~$250 with shipping for the pair (the exact same modules currently cost over $350 for two). Crucial 2x 32GB DDR5-5600 non-ECC UDIMMs currently cost ~$135, so suggesting it's a "bit extra" is inaccurate, unless you're talking DDR4, and that's only cheap because the market is being flooded relative to demand. I've heard the pricing is a mess due to lower demand causing lower-volume manufacturing, which means that when there is demand, prices spike and stay that way for longer.

When it comes to OEMs, who as I mentioned are the drivers of client parts, adding at least one or two memory ICs to every module could easily come out to millions in extra cost, which they wouldn't necessarily be able to pass on to customers.
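For scale, here's the retail premium from the prices I quoted, plus a toy model of the OEM-side cost; the module volume and per-IC cost are made-up placeholders, just to show the order of magnitude.

```python
# Retail ECC premium (prices quoted above) and a TOY model of OEM-wide cost.
ecc_kit, non_ecc_kit = 250, 135      # 2x 32GB DDR5-5600 kits, USD
print(f"retail ECC premium: {ecc_kit / non_ecc_kit - 1:.0%}")   # ~85%

modules = 10_000_000                 # HYPOTHETICAL annual OEM module volume
extra_ics, ic_cost = 1, 2.00         # HYPOTHETICAL extra ICs per module / IC cost
print(f"OEM-wide extra cost: ${modules * extra_ics * ic_cost:,.0f}")  # $20,000,000
```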

ECC UDIMMs (client memory) are currently only sold at JEDEC speeds, which would be a huge negative for the enthusiast community. I'm not sure how well those modules would react to being overclocked, and I have no interest in trying it with the system mine are in. I would try it with my pair of 4800 modules, but I don't have any other systems available that ECC works in.

RDIMMs running XMP/EXPO profiles do exist, but those also have registered clock drivers, which might be what keeps the ECC capability intact. Currently there are no ECC UDIMMs on the market using a client clock driver, which would have similar capability.

At the end of the day, ECC just doesn't have a positive impact on day-to-day operation for the vast majority of client uses, which makes it a hard sell.