News: AMD's Zen 6-based desktop processors may feature up to 24 cores

Maybe there is a deeper reason why ECC is not implemented by default.
The main reason is that ECC is more costly for several reasons and not useful for day-to-day operations. It also was around 3% slower until recently, when the error-check logic was moved onto the CPU's memory controller; before that, the loss in frame rate was unpalatable to gamers.
 
Oh really, ECC is already supported? Can I replace the current non-ECC memory modules with ECC ones? Hopefully they are pin-compatible.

I don't think cost would be a concern. Those who want ECC will pay. People are ready to sell their kidneys to upgrade to CPUs with maximum cores and buy GPUs at exorbitant prices. Paying a bit extra for ECC is negligible. In fact, I checked on eBay that DDR4 ECC memory costs about the same as non-ECC. They just don't have fancy heat sinks.

Furthermore, most people change their phones every year or two, even though the old one still works perfectly. It looks like people will buy it if the manufacturers just sell it. There is so much waste and useless stuff people buy and throw away. Saving pennies on a useful feature like ECC looks like a rather bad choice.

Maybe there is a deeper reason why ECC is not implemented by default.
The reason is simple, as mentioned: price vs. gain. The price is considerable, and the gain is zero for 99.9 percent of client users. So it hardly makes sense even if it were pennies. ECC is really about data integrity on a level nobody normal needs. Normal non-ECC RAM, set up correctly, can run without errors for a very long time until something like particle interference from the outside world intervenes, and remember that basically all digital data transfers implement all kinds of fancy error correction at an abstraction level not transparent to the user.

People tend to forget that last point. Digital data transfer relies on many clever tricks to avoid errors, and can correct a lot more than one would think. When you read a bandwidth specification of, say, 1 Gb/s, behind it is a raw speed of (for example) 1.1 Gb/s, and that extra part is signal-integrity overhead. We always send more bits than we need in order to 1) spot if there is an error, and 2) (slightly more expensive) correct said errors.
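To make that concrete, here is a tiny sketch of the idea in Python (my own toy illustration using a Hamming(7,4) code, not how any real link or DRAM ECC is actually implemented): three extra parity bits on four data bits are enough to both spot and repair a single flipped bit.

```python
# Toy Hamming(7,4) code: 4 data bits are sent as 7 bits (3 parity bits of
# overhead), enough for the receiver to locate and flip back any single bad bit.
# Illustration only -- real DRAM ECC is SECDED over a 64-bit word with 8 check
# bits, but the principle is the same.

def hamming74_encode(d):                    # d = list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]                 # parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                 # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                 # parity over positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword, positions 1..7

def hamming74_decode(c):                    # c = received 7-bit codeword
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3              # syndrome = 1-based error position, 0 = clean
    if pos:
        c[pos - 1] ^= 1                     # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]         # recovered data bits

word = [1, 0, 1, 1]
sent = hamming74_encode(word)
sent[4] ^= 1                                # simulate one bit flipped in transit
assert hamming74_decode(sent) == word       # receiver still recovers the data
```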

Even when you have something like storage capacity, the raw capacity is always greater than the advertised usable capacity, to make room for integrity overhead. We actually assume in this world that data will be flawed, and as such the entire structure is built to handle a good portion of completely normal errors. :)

The old hymn of "bits are just bits" is wrong on so many levels that even your first year of any information-theory-related education will completely change your perspective. Noise is still a problem, though it is correct that within some limit the noise has no impact, but the S/N (signal-to-noise) ratio is also king in digital :) , and the reason everyone thinks this way is exactly because the error correction is so good and therefore even more transparent.
 
AMD supports ECC memory on all of its desktop CPUs, but it's up to the motherboard to enable it.
Not sure about the APUs (i.e. monolithic die processors). The non-Pro ones for socket AM4 don't. Take note!

An AMD motherboard which supports ECC memory should tell you in its user manual or detailed specs which CPUs support ECC memory.

As for usefulness most people would probably never notice the benefit.
I disagree. Even if it's not bad from the factory, DRAM wears out with use. I've had to debug software crashes at work that turned out to be caused by bad RAM (which I confirmed after running a memtest). So, what most people would notice is better reliability. Not much, but at the margins. They probably wouldn't know to attribute it to the RAM, however.

If you want better reliability, I'd also suggest using a better power supply and a UPS with line-interactive filtering. A long time ago, I saw an article that demonstrated a link between power quality and memory errors. Not sure anyone's looked at that, recently.

I should add that my prior experiences were with pre-DDR5 memory. DDR5 (along with GDDR7) has on-die ECC. I'm not sure if that's delivering a net improvement in reliability, or if it's only enough to compensate for smaller cell sizes, lower voltage, and longer refresh intervals.

Do I think we'd be better off if all memory was ECC? Absolutely, but it costs more to manufacture due to needing additional memory ICs on every module. Given margins in the OEM space (by far the largest memory customer), I just don't see this ever happening.
In-band ECC is a nice compromise, because it lets you trade a little capacity and performance for enabling low-grade ECC on commodity memory (i.e. at no extra cost). Intel only supports it on some BGA SoCs, to my knowledge.

This is likely why JEDEC implemented a basic error correction into the DDR5 spec to compensate for high clock speeds rather than just saying everything needs to be proper ECC.
DDR5 incorporates only on-die ECC into the base spec. It doesn't benefit the memory bus, nor is your CPU aware of when the RAM is correcting for errors, meaning you don't get the same "early warning" that tells you to replace a failing ECC DIMM.

Oh really, ECC is already supported? Can I replace the current non-ECC memory modules with ECC ones? Hopefully they are pin-compatible.
First, check your motherboard manual. Motherboards also have a QVL (Qualified Vendor List) for memory, which is basically their way of saying it's guaranteed to work with your board. These can be found on the motherboard's web page, under the "Support" section. Plenty of RAM will work that's not on that list, but there's a slightly higher risk.

You can also use the online compatibility databases that some memory makers provide, like Kingston. They tend to keep their listings more up-to-date than the motherboard makers, which sometimes stop updating their QVLs like a year after a board model is introduced.

The main thing you want to be sure of is that the speed and type of memory are correct for your board. This includes making sure it's an Unbuffered DIMM - not Registered. Server and some workstation CPUs only support Registered DIMMs, but mainstream desktops only support Unbuffered.

I don't think cost would be a concern. Those who want ECC will pay.
Yeah, well... it is for some people/applications. With DDR4, you had to add 12.5% more memory chips to each DIMM (i.e. 9 chips instead of 8; 18 chips instead of 16). With DDR5, that overhead grew to 25%. This represents a cost floor, meaning a true, end-to-end ECC DIMM should now cost at least 25% more than the same quality DIMM with the same exact DRAM chips would normally cost. Market dynamics can swing this, but as @thestryker said, the ECC premium is usually much higher than 25%.
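For anyone wondering where those percentages come from, here is the chip-count arithmetic (assuming x8 DRAM chips and a single rank, which is my simplification):

```python
# ECC widens the bus: DDR4 adds 8 check bits to one 64-bit channel (72 bits),
# DDR5 adds 8 check bits to each of its two 32-bit subchannels (2 x 40 = 80 bits).
# With x8 chips, bits / 8 = chips per rank.
ddr4_overhead = (72 - 64) / 64    # 9 chips instead of 8  -> 12.5%
ddr5_overhead = (80 - 64) / 64    # 10 chips instead of 8 -> 25%
print(f"DDR4 ECC overhead: {ddr4_overhead:.1%}, DDR5: {ddr5_overhead:.1%}")
```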

People are ready to sell their kidneys to upgrade to CPUs with maximum cores and buy GPUs at exorbitant prices. Paying a bit extra for ECC is negligible. In fact, I checked on eBay that DDR4 ECC memory costs about the same as non-ECC. They just don't have fancy heat sinks.
Be sure it's not used, or that you're getting a really good deal on it, if it is. I'm not sure manufacturers will honor their warranty, if you buy from anyone who's not an authorized reseller. In some countries, I think the warranty laws require them to, but not others.

Maybe there is a deeper reason why ECC is not implemented by default.
It's really just cost. I once ran some numbers on the performance overhead of the ECC circuitry in memory controllers and figured it should add a mere 2.5 picoseconds or so. Compare that to DRAM CAS latency, which is typically around 10 nanoseconds, and you can see that it's almost nothing.
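Putting those two figures side by side (both taken from the estimate above, not re-measured):

```python
# Estimated ECC check/correct logic delay vs. typical DRAM CAS latency.
ecc_logic_ps = 2.5          # rough estimate from the post, in picoseconds
cas_latency_ps = 10_000     # ~10 ns CAS latency, in picoseconds
print(f"{ecc_logic_ps / cas_latency_ps:.3%} of CAS latency")   # ~0.025%
```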

Here's a really old article that tested the performance impact of ECC memory. You can see that ECC vs. non-ECC UDIMM is virtually the same (probably within normal measurement variations), while the impact of Registered memory is sometimes as high as 3% (but usually much less):
 
RDIMMs running XMP/EXPO profiles do exist, but those also have registered clock drivers which might be what is keeping the ECC capability intact.
LOL, wut? An ECC DIMM doesn't do anything extra - it just has more DRAM chips and active data lines to the host. The ECC functionality (i.e. checking and computing checksums) for those redundant chips exists entirely within the host CPU's memory controller.

The only reason UDIMMs don't come in XMP/EXPO is just that the market for them would be too small. Most people wanting ECC UDIMMs aren't trying to overclock them.

Conversely, the reason why ECC RDIMMs with XMP/EXPO exist is that Xeon W and Threadripper now only support RDIMMs, and because they're high-end platforms, some of those users want to push their systems and squeeze out a little more performance. Also, they have deep pockets, given the high entry price of that tier. So, some memory makers see them as a ripe market opportunity.

Also, I suspect Intel & AMD might've worked with them to get higher-spec memory to benefit the benchmark numbers on these platforms/CPUs. Conversely, anyone benchmarking mainstream desktops isn't doing it with ECC memory.

Currently there are no ECC UDIMMs on the market using a client clock driver which has similar capability.
Eh, give it time. The ECC UDIMM market tends to lag non-ECC by 6 months or a year.
 
It also was around 3% slower until recently, when the error-check logic was moved onto the CPU's memory controller; before that, the loss in frame rate was unpalatable to gamers.
As I mentioned above, it has virtually no intrinsic speed disadvantage. The error-checking always happened in the memory controller, which was located in motherboard chipsets until about 15-20 years ago, and now is integrated directly into the CPU. This is where it must be, if you want to protect data in transit between the memory controller and DIMMs.

The main reason it's slower is what @thestryker said: you can only find it at JEDEC speeds and without XMP/EXPO support. The gaming memory out there provides support for higher clock speeds and lower latency, but that's really a market thing and not intrinsic to anything about how ECC DIMMs are designed or utilized by modern CPUs.

Normal non-ECC RAM, set up correctly, can run without errors for a very long time until something like particle interference from the outside world intervenes,
This is a common myth, but I've seen a lot of memory just go bad, at my job. DRAM wears out with use. Some even comes defective from the factory, and if you don't memtest it, ECC is the only way to catch that.

and remember that basically all digital data transfers implement all kinds of fancy error correction at an abstraction level not transparent to the user.
PCIe has some basic checksum mechanism - as does SATA - but not DRAM. If you're not using ECC DIMMs, then there's no protection of your data in transit between the CPU and DRAM.

Even when you have something like storage capacity, the raw capacity is always greater than the advertised usable capacity, to make room for integrity overhead.
SSDs and hard disks are different. There, the medium is fundamentally unreliable and wears out much more rapidly (requiring active maintenance). I guess DRAM is getting to the same point, which is why DDR5 incorporates on-die ECC.
 
Not sure about the APUs (i.e. monolithic die processors). The non-Pro ones for socket AM4 don't. Take note!
Yeah I don't really think about those when talking desktop CPUs as they brand them APU. I wish they sold the Pro SKUs, even in tray form, on the open market though.
In-band ECC is a nice compromise, because it lets you trade a little capacity and performance for enabling low-grade ECC on commodity memory (i.e. at no extra cost). Intel only supports it on some BGA SoCs, to my knowledge.
I'm curious what the performance hit would be in this case. I can't imagine using it in something that needed maximum performance, but much like running ECC at JEDEC specs it's about the reliability factor.
I disagree. Even if it's not bad from the factory, DRAM wears out with use. I've had to debug software crashes at work that turned out to be caused by bad RAM (which I confirmed after running a memtest). So, what most people would notice is better reliability. Not much, but at the margins. They probably wouldn't know to attribute it to the RAM, however.
I've actually never had DRAM die like this, knock on wood. The only time I ever had any die (aside from getting a bad kit) was back in the SDRAM days and overclocking Pentium IIIs.
I should add that my prior experiences were with pre-DDR5 memory. DDR5 (along with GDDR7) has on-die ECC. I'm not sure if that's delivering a net improvement in reliability, or if it's only enough to compensate for smaller cell sizes, lower voltage, and longer refresh intervals.
I think it's entirely about data integrity with what you mentioned along with the higher clocks. I'd also be surprised if it came out to be a net improvement.
LOL, wut? An ECC DIMM doesn't do anything extra - it just has more DRAM chips and active data lines to the host. The ECC functionality (i.e. checking and computing checksums) for those redundant chips exists entirely within the host CPU's memory controller.
I'm not familiar enough with how the RCD works so wasn't 100% sure whether or not the access it enables had any impact on the functionality.
 
Yeah I don't really think about those when talking desktop CPUs as they brand them APU. I wish they sold the Pro SKUs, even in tray form, on the open market though.
"Tray" SKUs are never sold directly to the public. If you see those for sale, it's because someone bought them in bulk and is just packaging & selling them individually, on their own. This is how I got the Xeon I used to build my workstation, a dozen years ago - some Intel reseller had just decided to offer that CPU to end users. I bought one and it arrived in whatever ad hoc packaging they decided to use.

I'm curious what the performance hit would be in this case. I can't imagine using it in something that needed maximum performance, but much like running ECC at JEDEC specs it's about the reliability factor.
Luckily, you don't have to wonder - Anandtech actually tested it!

I've actually never had DRAM die like this, knock on wood. The only time I ever had any die (aside from getting a bad kit) was back in the SDRAM days and overclocking Pentium IIIs.
At work, we have a variety of machines that are running 24/7 medium-duty workloads. I found one sad, old machine (Dell R710 server that was 10 years old, at the time), which actually had only one CPU installed, but two out of either four or six ECC DIMMs that were flagging memory errors - and one of them even had uncorrectable 2-bit errors! I forget if they were UDIMMs or RDIMMs, as those machines actually supported both.

Sadly, not all of these machines are using ECC memory, for reasons I can't go into (but related to cost). So, like I said, I've had the experience of trying to debug software crashes and, upon examining the backtraces (which were fairly nonsensical), decided to test the memory and confirmed that it was bad. These machines all were stable when deployed, which means the memory must have degraded. It's rare, but it happens. I can't put an exact percentage number on it, but probably in the lower single-digits.

DRAM is known to degrade with use. You can find some information on its wear rates, but probably nothing current. DRAM technology (i.e. circuits, cell designs, etc.) hasn't fundamentally changed in a long time, AFAIK.

I think it's entirely about data integrity with what you mentioned along with the higher clocks. I'd also be surprised if it came out to be a net improvement.
Well, when Anandtech explained the (then new) DDR5 standard, they explained the on-die ECC as a way to counter reliability issues from shrinking cell sizes and increasing refresh intervals. Maybe also lower voltage - not sure about that one.
 
Please no! I want proper GPU, not useless NPU.
Leave the NPUs for stupid "AI PC" laptops.
It has to be everywhere (including desktops) or nowhere. The industry is choosing "everywhere" for the moment.

I think AMD will phone in the desktop iGPU as long as it can. It needs to at least double performance to catch up to Arrow Lake-S, but AMD doesn't want to cannibalize other product lines just yet.
I could see a 12 core CCD on the horizon, but my concern becomes the cost of entry factor. The lowest available consumer chips have launched at relatively high prices since Zen 3 and now those core counts would be higher. While I'm sure there's plenty of margin on all of these parts it would mean AMD eating into them a bit to keep current pricing given a newer node and larger die.
The leak is for a 75mm^2 CCD, which isn't much larger than usual, although the node will be more expensive and the I/O die is rather large. It's going to be AMD's choice (or Intel's lack of competitiveness) that keeps an entry-level (8-core?) part expensive. Previous-gen can fill any holes.
The other concern comes in at the top end with the question of memory bandwidth. Currently they're supporting 16 cores on 5200/5600 and this would go up to 24 cores on 6400 most likely (though with clock drivers having better availability maybe 7200). That means a 50% core count increase while adding 14-29% bandwidth.
I hope they can do better than 6400. Zen 4/5 used the same I/O die, so no major improvements to be found there. Much faster speeds than DDR5-7200 have been touted for years. Maybe official 7200 support, with the "sweet spot" at 8000?
AMD can continue to offer APUs as the lower-tier of their AM5 offerings.
On that note, no Strix Point desktop APUs sighted so far. Also, this may have been overlooked but the leak is stating that Zen 6 APUs (eventually desktop APUs if they make them) will use the same Zen 6 chiplets as desktop CPUs. The APUs are likely to be more expensive to make than the CPUs, even if some features like PCIe lanes are dialed back.
MLID is a clown.
You're the circus.
 
I'd still like to see more PCIe lanes. One x16 slot is fine, but having 2 or 3 x4 slots would be good as well, plus more lanes so the USB, SATA, and M.2 slots can all be used without things having to be disabled when something else is in use... the current situation is crap.
More PCIe lanes do not come cheap. You can move to the Threadripper platform if you need more lanes.

Ryzen is the entry level, and they cut a lot of things to reduce the price for the mainstream...

Dual-channel memory instead of 8 or 4 channels, 16+4 lanes instead of 64 or 128, plus the CPU price as well.

If they add more lanes you will pay $500 more (split between the CPU price and motherboard price increases).

I myself wish they'd make them 32 lanes so I could use both a gaming card and a Quadro card at the same time, but I understand what they are doing.
 
Just make the Threadripper cheaper, please?
I'd also like at least a three-tiered approach, or one that allows better cascading, e.g. by making the IODs chiplets, too.

EPYC will continue to push higher; 12 channels might become even more there, and with dual DIMMs per channel costing so much in throughput, a broader adoption of four or six channels on the desktop seems like a niche that needs filling with a product that won't empty all your savings.

I really like the fruity cult's approach here, which adds memory channels per compute die. If they didn't charge HBM prices for (LP-)DDR chips and plain usury for storage, and if it ran Linux properly, I could see myself going there.

For the AM5 socket, there is nothing that can be done, but as their volumes rise, so does their ability to add tiers and IOD variants through scale, still with only one CCD to serve them all, with or without V-cache.
 
Please no! I want proper GPU, not useless NPU.
Leave the NPUs for stupid "AI PC" laptops.
Well, I'd say that a proper GPU can do NPU things on the side and believe that the current breed of NPUs will just remain dark silicon until their retirement.

There have been plenty of other IP blocks, e.g. the GNA bits on Intel Tiger Lake, that never did anything useful for any consumer as far as I know, yet were marketed as an indispensable Intel advantage back in the day.

Plenty of transistors go into IPUs and DSPs on AMD, too, which might see use on their console brethren but never see any use on PCs.

As long as NPUs can be disabled and I don't have to pay extra, I don't mind them being there for the extra cooling surface.

And perhaps bigger caches deliver more than extra GPU cores when memory bandwidth is the greater obstacle.
 
Oh really, ECC is already supported? Can I replace the current non-ECC memory modules with ECC ones? Hopefully they are pin-compatible.
Yes, been using it on Zen 3 with DDR4 and Zen 4 with DDR5.
I don't think cost would be a concern. Those who want ECC will pay. People are ready to sell their kidneys to upgrade to CPUs with maximum cores and buy GPUs at exorbitant prices. Paying a bit extra for ECC is negligible. In fact, I checked on eBay that DDR4 ECC memory costs about the same as non-ECC. They just don't have fancy heat sinks.
ECC used to be twice the price of non-ECC, while it's just 9 vs. 8 memory chips. Pricing is pretty near linear now, which reflects the 9:8 chip ratio. Heat sinks on DRAM are mostly used to hide the vendors of the RAM chips, especially on low-margin products.
Furthermore, most people change their phones every year or two, even though the old one still works perfectly. It looks like people will buy it if the manufacturers just sell it. There is so much waste and useless stuff people buy and throw away. Saving pennies on a useful feature like ECC looks like a rather bad choice.
Couldn't agree more, especially considering that the IBM PC in 1981 came with parity for a reason. But it was a little more than pennies, and the clones which made PCs affordable scrapped that as one of the first things.

At the current prices, I just buy ECC for peace of mind. Have I ever observed corrupted data because a RAM bit flipped? No, but perhaps I just wasn't looking hard enough.
Maybe there is a deeper reason why ECC is not implemented by default.
There are plenty of reasons why ECC is not the default. Most of them center around charging a lot extra for little extra effort.

All x86 CPUs from the Pentium up support ECC for their internal caches and buses; external and internal memory controllers do, too. The extra couple of transistors required to support that logic might have been significant back when the Pentium launched.

These days, eliminating that logic from a library block CPU designers just cut & paste, and validating the result, would be more effort than just leaving it in: physically, it's always there.

Intel, for the longest time, couldn't resist charging a hefty premium for actually enabling ECC support, charging extra for Xeon-branded CPUs and workstation-class chipsets that were otherwise identical silicon, just with different sets of burned fuses.

In the case of the W* chipsets, the situation has been especially galling, because Xeon CPUs, which actually do all the ECC work with their embedded memory controllers (since Nehalem?), were relatively easy to obtain, while W*-class mainboards, which are really not involved in any of the ECC work and act only as a "license token" for ECC, were very hard to get and often badly made.

AMD has consistently seen ECC more as an opportunity to undersell Intel without any extra effort and enabled ECC when it didn't cost them extra.

Extra cost would be extra pins/lines on sockets, BGA and circuit boards as well as BIOS/driver support and validation.

The latter especially is still the issue today: mainboard vendors often won't disable ECC, but they offer no certification of it actually working with proper reporting on your operating system of choice.

If you operate production systems, you really want to know if RAM defects are a regular occurrence, even if they can be corrected, so that you can swap out failing parts.

Where AMD has been following a different path, one that I find much harder to investigate or explain, is with the APUs and non-desktop chips. The AM4/AM5 sockets definitely have the pins, and matching mainboards typically also have the traces to support the extra lines for ECC. Yet AM4/AM5 APUs won't support ECC unless you purchase the "Pro" parts, which have ECC support fused enabled. But those sell via different channels and often cost a bit extra as socketed variants.

In notebooks with soldered RAM there is no way to add ECC even if the APU is a Pro part, and Pro there typically just enables remote management, which is another feature AMD segments in a rather Intel-like fashion.

But from what I've been able to gather, the different BGA package variants Zen 4 APUs were produced in also vary in their ability to support ECC, because the extra lines might actually be missing in parts destined for the thinnest and lightest laptops, while they are present on embedded and industrial designs. Note this is my read on the situation; I'd love to learn the actual facts there!

Sorry I couldn't shed more light on this; unfortunately, a lot of AMD material isn't publicly available, and the press doesn't seem to care or know much about the topic.
 
It makes sense considering Intel's hybrid approach gives them an edge in core count; 16 cores is not gonna cut it for the top offering in 2026.
Agreed; moreover, since AMD has dense cores now, it also makes sense having a dual Zen 6c CCD CPU, along with a hybrid design of Z6 and Z6c. These sort of high-core-count, non-server-class CPUs make for great cost-effective servers in several use-cases, such as gaming servers.
 
This is probably the biggest sore point in the current crop of AM5 chipsets. I can't believe motherboard manufacturers are boasting about having three extra PCIe 4.0 M.2 slots when they are all funneled through a single PCIe 4.0 × 4 link, shared with all the SATA, USB, LAN, and Wi-Fi traffic. It's not even unrealistic that one might occasionally utilize all of the available bandwidth and bottleneck one thing or another.

It's the same situation with Thunderbolt 5, the only implementation being a controller with a PCIe 4.0 × 4 (nominally 64 Gb/s) host interface, which falls short of supporting even the basic 80 Gb/s data-only (i.e., no video) traffic of a single Thunderbolt connection.
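A quick sanity check on the link math quoted above (nominal signaling rates only; usable throughput is lower once encoding and protocol overhead are subtracted):

```python
# PCIe 4.0 signals at 16 GT/s per lane, so a x4 host interface is ~64 Gb/s
# nominal -- already below the 80 Gb/s data-only mode of one Thunderbolt 5 port.
pcie4_x4_gbps = 16 * 4
tb5_data_gbps = 80
print(pcie4_x4_gbps, "Gb/s uplink vs.", tb5_data_gbps, "Gb/s TB5 data mode")
```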
I'd say the mess is really partially PC history, partially AMD telling manufacturers what to do, partially user expectations.

And the real challenge is to redo PC design in a saner manner.

AMD's IODs really just give you 24 lanes of PCIe from their mainstream SoCs. And it's up to the mainboard manufacturer to provide an attractive offer at the other end of that.

What I like so much about my Minisforum BD790i is that it turns all of that into 24 PCIe v5 lanes, with no "chipset": everything comes from the IOD directly.

Where it still falls a bit short is that it doesn't just give me 7 M.2 or OCuLink connectors which I could bundle or split and use as I see fit with a granularity of 1-24. I still had to add a bifurcation or riser card to do the split for me on the main x16 slot.

Whether I then want to put an ASMedia controller or similar as a "chipset" on the other end of an M.2 or OCuLink connection, to split out SATA/USB/TB/whatever, is something I want to decide in the good old PC manner, where the power was in the slots, not on the mainboard, which was sometimes even a passive backplane, like in S-100 days.

And if ASmedia or any other PCIe switch allows me to cascade that, well that's just the magic of PCIe working for you, not against you.

It requires the GPUs to perhaps offer multiple M.2/OCulink connectors (or a matching riser cable) so you can choose how many lanes you want to have it actually use, depending on what else you might want to put in there.

E.g. these days a single PCIe v5 lane should be plenty for a 10 Gbit Ethernet NIC, yet most of the time you sacrifice 4 lanes on a PCIe v2 chip: pathetic!

Slots became popular because they were cheap, yet mainboard PCB traces for v5 speeds on x16 slots might actually be a bigger challenge than 4 ribbon cables with 4 lanes each, which you can route in 3D in your chassis: I believe a lot of server designs escape to cables because PCBs become too limiting and tough.
 
MLID’s track record is verifiable. They have been correct about a lot of stuff and have publicly acknowledged when they got stuff wrong.
Both are not mutually exclusive, even if I can't remember laughing much.

And if influencers cannot entertain, they do worse.

So far, I like Wendel best in terms of presentation, competence and humor, but de gustibus non est disputandum.
 
I could see a 12 core CCD on the horizon, but my concern becomes the cost of entry factor. The lowest available consumer chips have launched at relatively high prices since Zen 3 and now those core counts would be higher. While I'm sure there's plenty of margin on all of these parts it would mean AMD eating into them a bit to keep current pricing given a newer node and larger die.

The other concern comes in at the top end with the question of memory bandwidth. Currently they're supporting 16 cores on 5200/5600 and this would go up to 24 cores on 6400 most likely (though with clock drivers having better availability maybe 7200). That means a 50% core count increase while adding 14-29% bandwidth. It's not guaranteed that this will be a problem, but it was starting to be an issue with Zen 3 (resolved with the move to DDR5 with Zen 4+).

Overall moving the 8 core parts to 12 would be a pretty nice MT benefit, and the lower dual CCD part would no longer be hampered by 6 core CCDs so that's good as well. It's just at the low and high end where questions remain.

Of course without more PCIe lanes or at least a PCIe 5.0 link to the chipset(s) the productivity improvements brought with higher core counts is still lacking.
I agree that bandwidth is important for large data-set workloads, but a lot of workloads can fit inside local cache, so some multithreaded tasks will see an improvement proportional to the increased core counts. And right now the highest JEDEC DDR5 bin is 8800 MT/s, which is 57% greater than 5600.
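For a rough sense of scale, here is the per-core peak bandwidth for the configurations discussed above (a naive dual-channel calculation, 8 bytes per channel per transfer; it deliberately ignores caches, which is exactly the point about cache-resident workloads):

```python
# Peak dual-channel DDR5 bandwidth divided by core count, using the
# speed grades and core counts mentioned in the thread.
def peak_gb_s(mt_s, channels=2, bytes_per_transfer=8):
    return mt_s * channels * bytes_per_transfer / 1000

print(round(peak_gb_s(5600) / 16, 1))   # 16 cores on DDR5-5600 -> ~5.6 GB/s per core
print(round(peak_gb_s(6400) / 24, 1))   # 24 cores on DDR5-6400 -> ~4.3 GB/s per core
print(round(peak_gb_s(8800) / 24, 1))   # 24 cores on DDR5-8800 -> ~5.9 GB/s per core
```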
 
Agreed; moreover, since AMD has dense cores now, it also makes sense having a dual Zen 6c CCD CPU, along with a hybrid design of Z6 and Z6c. These sort of high-core-count, non-server-class CPUs make for great cost-effective servers in several use-cases, such as gaming servers.
I will believe in 32-core Zen 6c on AM5 when I see it, but it could be a great, relatively cheap option for those niche use cases like home servers.

AMD could be distancing itself from hybrid. Zen 6 APUs using a 12-core CCD shared with the desktop CPUs would be a repudiation of hybrid in mobile chips. AMD has an easier time with scheduling non-C/C than Intel's P/E/LPE cores, but it's still more complicated when different clusters have different amounts of cores and cache (similar to dual-CCD with single-CCD 3D V-Cache). Strix Point's 4+8 monolithic design could be a single generation anomaly necessitated by low confidence in TSMC's 3nm node. 12 cores in a single CCX sharing the same cache will be great.

If the second leak is correct, APU performance cores would effectively have triple the L3 cache. Strix Point has 16 MB for the four fast cores, that would go to 48 MB shared by twelve. Really good for laptop gaming.
 
There have been plenty of other IP blocks, e.g. the GNA bits on Intel Tiger Lake, that never did anything useful for any consumer as far as I know, yet were marketed as an indispensable Intel advantage back in the day.
Intel said GNA was about ambient AI, like presence detection. It was low power, low-throughput. I don't know if it could handle video tasks, like background removal from a video conferencing session.

Plenty of transistors go into IPUs and DSPs on AMD, too, which might see use on their console brethren but never see any use on PCs.
Really? I haven't noticed them on the annotated die shots I've seen. Can you refer us to one?

As long as NPUs can be disabled and I don't have to pay extra, I don't mind them being there for the extra cooling surface.
What if AI upscaling is offloaded to it, from the GPU?

NPUs can be used concurrently with GPUs, for inferencing. The reason they exist is because they offer better performance/W and performance/mm^2 at inferencing than GPUs.
 
Agreed; moreover, since AMD has dense cores now, it also makes sense having a dual Zen 6c CCD CPU, along with a hybrid design of Z6 and Z6c. These sort of high-core-count, non-server-class CPUs make for great cost-effective servers in several use-cases, such as gaming servers.
Have you heard that AMD is now selling EPYC-branded desktop processors? They are for just such applications.
 
"Tray" SKUs are never sold directly to the public. If you see those for sale, it's because someone bought them in bulk and is just packaging & selling them individually, on their own. This is how I got the Xeon I used to build my workstation, a dozen years ago - some Intel reseller had just decided to offer that CPU to end users. I bought one and it arrived in whatever ad hoc packaging they decided to use.
I've seen tons of Intel's, but never AMD's, which could be down to me not checking places like eBay, and Intel's sheer volume.
Oh man I remember that article, but somehow totally forgot they ran it through the performance testing.
At work, we have a variety of machines that are running 24/7 medium-duty workloads. I found one sad, old machine (Dell R710 server that was 10 years old, at the time), which actually had only one CPU installed, but two out of either four or six ECC DIMMs that were flagging memory errors - and one of them even had uncorrectable 2-bit errors! I forget if they were UDIMMs or RDIMMs, as those machines actually supported both.

Sadly, not all of these machines are using ECC memory, for reasons I can't go into (but related to cost). So, like I said, I've had the experience of trying to debug software crashes and, upon examining the backtraces (which were fairly nonsensical), decided to test the memory and confirmed that it was bad. These machines all were stable when deployed, which means the memory must have degraded. It's rare, but it happens. I can't put an exact percentage number on it, but probably in the lower single-digits.

DRAM is known to degrade with use. You can find some information on its wear rates, but probably nothing current. DRAM technology (i.e. circuits, cell designs, etc.) hasn't fundamentally changed in a long time, AFAIK.
Oh that sounds like a troubleshooting nightmare. My server boxes have tended to last the longest and of course be on 24/7. My last two were ~8 years (PIII-S) and ~12 years (SNB), but there also isn't a ton that hits the memory hard on those which I imagine minimizes wear.