News Intel Roadmap Leaks: Raptor Lake Refresh, HEDT Replacement in 2023

Feb 4, 2022
A leak, or the start of a planned flood? Everywhere we go, we hear of a leak here and a leak there; it's like the Watergate scandal. New product leaks are normal, and my cup has been filled to the brim one drop at a time. Is it news or just advertising? Sorry, I am not picking on Intel; they are doing the picking. This is not a leak, but the way it is.
 

bit_user

Titan
Ambassador
This told me what I needed to know, which is that the W680 will not be replaced. That's a little unfortunate, but at least I don't have to wait for another generation of motherboards to launch.

Now, there's just the issue of ECC DDR5 UDIMMs: price & availability. ...or, I guess that's backwards, as you have to be able to find it in stock, before you can even think about pricing.
 
Is it me, or are they looking to compete with AMD's upper workstation class instead of bringing back "the middle ground"?

Still, if they're able to put even just a tiny bit of pressure on AMD, they may lower prices? Maybe?

Regards.
 

TJ Hooker

Titan
Ambassador
A little surprised that Intel's first desktop chips based on Intel 4 (Meteor Lake-S) are seemingly delayed, despite Intel bragging only a few days ago about Intel 4 being ready and on track for high volume in 2023.
 
  • Like
Reactions: bit_user

truerock

Distinguished
Jul 28, 2006
329
48
18,820
Intel's Raptor Lake is set to stay around for a long time as Intel kisses the HEDT platform goodbye.

Intel Roadmap Leaks: Raptor Lake Refresh, HEDT Replacement in 2023 : Read more

This does not surprise me.
There were rumors that Meteor Lake would have 6 power-cores, which is fewer than Raptor Lake's 8 power-cores.
There were definitely some issues going on with Meteor Lake.
I'm guessing Intel decided to skip the 6-power-core Meteor Lake in 2023 and start with an 8-power-core Meteor Lake in 2024.

So, what is the deal? Is there some problem with getting 8 power-cores on Meteor Lake?
Maybe there was a heat/thermal issue with the new smaller cores being close together?

Also, maybe the issue of efficiency-cores for desktop PCs will finally just go away?
Sure, efficiency-cores are needed on smartphones, where battery power is a primary issue. Even some categories of notebook computers treat battery power as a primary design issue.

E-cores and integrated graphics on a CPU chip for desktop PCs are not a particularly logical thing (except for very low-end desktop PCs, which can be built with notebook PC tech).

High-power RISC cores to complement high-power complex CPU cores might be advantageous for desktop PCs.
Low-power, energy-efficient cores only make sense for mobile computing on battery power.
 
Last edited:

bit_user

Titan
Ambassador
So, what is the deal? Is there some problem with getting 8 power-cores on Meteor Lake?
Meteor Lake is their first mass market tile-based CPU. Could be something to do with that?

Also, maybe the issue of efficiency-cores for desktop PCs will finally just go away?
Not a chance.

E-cores and integrated graphics on a CPU chip for desktop PCs are not a particularly logical thing (except for very low-end desktop PCs, which can be built with notebook PC tech).
E-cores aren't only about power-efficiency (which, don't kid yourself, is definitely a limiting factor for desktop), but also area-efficiency. And perf/mm^2 also determines perf/$. So, there's not a chance Intel is going to withdraw hybrid CPUs from the desktop.

The way I read their decision to reduce the P-cores of Meteor Lake is that the next-gen P-cores are probably a lot bigger and the future E-cores are probably even closer in performance to the P-cores. If the E-cores are fast enough, then you only need enough P-cores to get good performance on lightly-threaded workloads. For everything else, a sea of E-cores would be the main horsepower of the CPU.
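To put rough numbers on the area argument, here's a back-of-the-envelope sketch. The figures are illustrative assumptions, not Intel specs: I'm only taking the roughly 2-E-cores-per-half-P-core area ratio discussed in this thread, plus a guessed ~60% per-core performance for an E-core:

```python
# Back-of-the-envelope perf-per-area comparison of P-cores vs. E-cores.
# All figures are illustrative assumptions, not measured Intel specs:
#   - an E-core occupies ~1/4 of a P-core's die area
#     (i.e. 2 E-cores ~ half a P-core, per the discussion above)
#   - one E-core delivers ~60% of a P-core's single-thread performance

P_CORE_AREA, P_CORE_PERF = 1.00, 1.00   # normalized area and throughput
E_CORE_AREA, E_CORE_PERF = 0.25, 0.60

print(f"P-core: {P_CORE_PERF / P_CORE_AREA:.2f} perf per unit area")
print(f"E-core: {E_CORE_PERF / E_CORE_AREA:.2f} perf per unit area")

# In one P-core's worth of area, you could instead fit 4 E-cores:
print(f"4 E-cores in 1 P-core's area: {4 * E_CORE_PERF:.2f}x throughput "
      f"vs. 1.00x for the lone P-core")
```

Under those assumptions, the same silicon delivers roughly 2.4x the multithreaded throughput as E-cores, which is the whole perf/mm^2 (and thus perf/$) argument in one number.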
 

bit_user

Titan
Ambassador
Only good thing I'd want out of an INTEL HEDT....is AMD to compete at it and re-release non pro TR again.
Yeah, I'd like to see them release a smaller socket with 4 memory channels and 64 PCIe lanes, because platform cost/affordability is also an issue. It'd be fine if it supported only 32 or 48 cores.

We ought to get full ECC support, though. AMD needs to quit playing games around ECC, in general.
 
  • Like
Reactions: hotaru251
AMD has always supported ECC in all of its CPUs and chipsets. It's always been up to the motherboard vendors to include it. In fact, there are already a few X570 and X670 boards with full ECC support out there. They've never made it "workstation" exclusive.

Regards.
 

bit_user

Titan
Ambassador
AMD has always supported ECC in all of its CPUs and chipsets.
Nope. Their APUs did not, unless you got a Ryzen Pro version.

Furthermore, there have been rumors about limitations on non-Pro Ryzens' ability to report ECC errors. What's certain is that AMD itself doesn't validate the ECC functionality of non-Pro Ryzens. I think there are a few more caveats on non-Pro Threadrippers, but I haven't followed that situation too closely.

Basically, AMD has been playing too many games with ECC.

It's always been up to the motherboard vendors to include it. In fact, there are already a few X570 and X670 boards with full ECC support out there.
Try actually looking at the motherboard docs for an X570 board, and they will tell you that non-Pro Ryzen APUs don't support it.
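For anyone who wants to check what their own platform actually reports, here's a minimal sketch. It assumes a Linux system, where the kernel's EDAC subsystem exposes ECC error counters under /sys/devices/system/edac once a driver has claimed the memory controller:

```python
# Check whether the Linux EDAC subsystem sees an ECC-capable memory
# controller, and how many corrected/uncorrected errors it has logged.
# If no mc* entries exist, the kernel is not reporting ECC events, which
# could mean ECC is unsupported, disabled, or simply lacks a driver.
from pathlib import Path

EDAC_ROOT = Path("/sys/devices/system/edac/mc")

def read(path: Path) -> str:
    """Read a sysfs attribute, tolerating its absence."""
    return path.read_text().strip() if path.exists() else "n/a"

controllers = sorted(EDAC_ROOT.glob("mc[0-9]*")) if EDAC_ROOT.exists() else []
if not controllers:
    print("No EDAC memory controllers found; ECC reporting is not active.")
for mc in controllers:
    print(f"{mc.name}: driver={read(mc / 'mc_name')}, "
          f"corrected={read(mc / 'ce_count')}, "
          f"uncorrected={read(mc / 'ue_count')}")
```

Note that an absence of counters doesn't prove the DIMMs aren't ECC; it only means errors aren't being reported, which is exactly the distinction at issue here.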
 
Fair. I took it as meaning regular CPUs, not including the APUs. I've never considered that APUs really need ECC, personally, but you're technically not wrong.

I will cast some doubt on how important it is for an APU to have full ECC support, though. I'd say it's a non-issue as long as the CPUs on the same platform support it in full.

Other than that, I could not find anything regarding ECC and Threadripper that reads like an asterisk on its support. Care to share links?

Regards.
 

bit_user

Titan
Ambassador
I will cast some doubt on how important it is for an APU to have full ECC support, though.
For most business uses, there's no need for a dGPU. So, they can often get away with using APUs just fine.

ECC is important whenever you're dealing with data where errors or downtime are costly. That's why Ryzen Pro APUs have it (and the Pro's have other features that make them seem very enterprise-oriented). I wouldn't even take issue with it being limited to the Pro models, if they simply offered Ryzen Pro APUs to non-OEM customers.

I'd say it's a non-issue as long as the CPUs on the same platform support it in full.
Not sure what you mean by that.

Other than that, I could not find anything regarding ECC and Threadripper that reads like an asterisk on its support. Care to share links?
Sorry, I've just heard people grumbling about the status of ECC support on the non-Pro Threadrippers. I didn't pay attention, because I'm not in the market for one. I have no further details to share.
 
I mean, I'm pretty sure whoever needs ECC knows exactly why and what they need it for. I have no clue how much of a market a "pro" APU with ECC has, but keep in mind Intel does not have any vPro mobile CPUs that support ECC.

Here's the proof:

https://ark.intel.com/content/www/u...pe=873&0_StatusCodeText1=3,4&0_ECCMemory=True

Ironically, Atom CPUs did support ECC. Big shrug there.

Not sure what you mean by that.
As I mentioned above: how big would that market be anyway? As I said, you're technically right, but it's moot if neither AMD nor Intel ships mobile parts with ECC. And that seems to be the case, so it's kind of beside the point to include it in the argument.

Sorry, I've just heard people grumbling about the status of ECC support on the non-Pro Threadrippers. I didn't pay attention, because I'm not in the market for one. I have no further details to share.
Hm... It's unfair to mention it, then. As far as I know, there have been no issues with the support itself. That said, it's been a pain to make all memory kits work with Zen-based CPUs, so I'll assume those complaints referred to that instead. Maybe.

Regards.
 

emike09

Distinguished
Jun 8, 2011
193
190
18,760
I'm just looking for quad+ channel memory and 48 PCI-e lanes at a minimum, as well as fast clock speeds and overclocking of both the CPU and RAM. Clock speed matters more to me ATM than massive core counts. 16 P cores running around 5GHz+ would be just fine.
 

bit_user

Titan
Ambassador
You shrug because you haven't noticed what they've done with Atom branding. If you look closer, the Atom-branded CPUs are all for embedded use cases, rather than consumer platforms. That explains why they have ECC.

I guess they had more success with early Atoms in embedded applications, which is why they kept the branding for that market. In consumer markets, Atom quickly became synonymous with bottom-tier.

As I mentioned above: how big would that market be anyway? As I said, you're technically right, but it's moot if neither AMD nor Intel ships mobile parts with ECC. And that seems to be the case, so it's kind of beside the point to include it in the argument.
I've seen laptops offering Ryzen Pro APUs. Not sure if any also support ECC memory, however.

Hm... It's unfair to mention it, then.
Not unfair to mention; just shouldn't be taken as a supporting argument.

As far as I know, there have been no issues with the support itself. That said, it's been a pain to make all memory kits work with Zen-based CPUs, so I'll assume those complaints referred to that instead. Maybe.
The comments I've seen didn't reference kit compatibility. They definitely had to do with platform support. Anyway, I mention it so that anyone interested in the topic would know to investigate for themselves.
 

truerock

Distinguished
Jul 28, 2006
329
48
18,820
Meteor Lake is their first mass market tile-based CPU. Could be something to do with that?

Not a chance.

E-cores aren't only about power-efficiency (which, don't kid yourself, is definitely a limiting factor for desktop), but also area-efficiency. And perf/mm^2 also determines perf/$. So, there's not a chance Intel is going to withdraw hybrid CPUs from the desktop.

The way I read their decision to reduce the P-cores of Meteor Lake is that the next-gen P-cores are probably a lot bigger and the future E-cores are probably even closer in performance to the P-cores. If the E-cores are fast enough, then you only need enough P-cores to get good performance on lightly-threaded workloads. For everything else, a sea of E-cores would be the main horsepower of the CPU.

Good point... the size of P-cores may be the issue. I hadn't thought about that.

Regardless, low-power RISC cores (E-cores) seem like a dumb idea for a desktop PC. RISC cores can be very effective in certain applications. I see no reason to use RISC cores as a power-saving thing in a CPU for a desktop PC that doesn't need to worry about battery power. High-power RISC cores have all the advantages of low-power RISC cores, but with better performance.
 

bit_user

Titan
Ambassador
Regardless, low-power RISC cores (E-cores) seem like a dumb idea for a desktop PC. RISC cores can be very effective in certain applications.
I'm not sure why you're calling them RISC cores... I think that's not very accurate and more likely to cause confusion than shed insight. Although they internally use micro-ops, I've read that those micro-ops don't strictly meet the definition of a RISC instruction set.

I see no reason to use RISC cores as a power-saving thing in a CPU for a desktop PC that doesn't need to worry about battery power. High-power RISC cores have all the advantages of low-power RISC cores, but with better performance.
First, did you know that putting 2 threads on 2 E-cores results in more performance than having them share a single P-core? The E-cores in Alder/Raptor Lake are probably faster than you think.

Next, consider that the 2 E-cores use only about half the area of that P-core, and you will quickly see that the most area-effective way to scale up performance (and therefore also the most cost-effective) is by adding E-cores, not P-cores.

Finally, have you heard of "all-core boost speed", and do you know why it's lower than single-core boost speed? That's right: power & thermal limits. Modern, high-end desktop CPUs are limited by how much heat the thermal solution can dissipate, which is proportional to the power consumed. So, even though they don't have to worry about batteries, desktop CPUs are still constrained by power. The E-cores are much more power-efficient than the P-cores, making them the more efficient way to scale performance.
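Extending the earlier area sketch to the power side: the per-core wattage and performance numbers below are again invented purely for illustration, not measurements of any real part:

```python
# Toy model: multithreaded throughput achievable under a fixed sustained
# package power limit, filling the budget with one core type at a time.
# All per-core figures are invented for illustration, not measurements.
POWER_BUDGET_W = 125.0                    # assumed sustained power limit

P_CORE = {"perf": 1.00, "watts": 12.0}    # per core, at all-core clocks
E_CORE = {"perf": 0.60, "watts": 4.0}

def best_config(core: dict) -> tuple[int, float]:
    """How many cores fit in the budget, and the resulting throughput."""
    n = int(POWER_BUDGET_W // core["watts"])
    return n, n * core["perf"]

for name, core in [("P-cores only", P_CORE), ("E-cores only", E_CORE)]:
    n, perf = best_config(core)
    print(f"{name}: {n} cores fit in {POWER_BUDGET_W:.0f} W "
          f"-> {perf:.1f}x throughput")
```

Same power budget, noticeably more aggregate throughput from the efficient cores. That's why a power-limited desktop part still benefits from E-cores, batteries or not.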
 

truerock

Distinguished
Jul 28, 2006
329
48
18,820
Yes, you make valid points.

I get it... CPU design is a complex issue of balancing core designs of different instruction sets and power utilization.

Designing a CPU involves assumptions regarding what types of applications it will be used to run. That's why - for example - Intel sells HEDT CPUs without integrated graphics processors. Every desktop PC CPU I've ever purchased from Intel in the last 8 years (or so) has an integrated graphics processor that has never been used. I assume almost all of the Intel HEDT CPUs with integrated graphics are never used. Why does Intel sell HEDT CPUs with integrated graphics?

You make valid arguments, but based on my understanding of the issue, low power cores do not belong on Intel's HEDT CPUs - just like integrated graphics do not belong on Intel's HEDT CPUs.

As an FYI... I'm using terms like RISC and HEDT very loosely to convey ideas in an inexact but efficient way. I don't really want to engage in pedantic discussions of the exact meaning of RISC and HEDT.
 
Last edited:

bit_user

Titan
Ambassador
I get it... CPU design is a complex issue of balancing core designs of different instruction sets and power utilization.
In this case, they're all x86 cores. Or, perhaps you're referring to which ISA extensions (like AVX-512) they support?
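If it helps, the extensions each core advertises are easy to inspect directly. A quick sketch (Linux-only, since it parses /proc/cpuinfo; the sampled flag list is just an arbitrary selection). On hybrid Alder/Raptor Lake parts, every core should report the same flags, because features the E-cores lack, like AVX-512, are disabled chip-wide:

```python
# List a few ISA-extension flags per logical CPU by parsing /proc/cpuinfo
# (Linux only). On hybrid Alder/Raptor Lake parts, P-cores and E-cores
# expose an identical flag set, because extensions the E-cores lack
# (notably AVX-512) are disabled across the whole chip.
SAMPLED_FLAGS = {"sse4_2", "avx2", "avx512f", "sha_ni", "vaes"}

cpu_flags: dict[int, set[str]] = {}
cpu = None
with open("/proc/cpuinfo") as f:
    for line in f:
        key, _, value = line.partition(":")
        key = key.strip()
        if key == "processor":
            cpu = int(value)
        elif key == "flags" and cpu is not None:
            cpu_flags[cpu] = SAMPLED_FLAGS & set(value.split())

for cpu_id in sorted(cpu_flags):
    found = " ".join(sorted(cpu_flags[cpu_id])) or "(none of the sampled flags)"
    print(f"cpu{cpu_id}: {found}")
```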

Designing a CPU involves assumptions regarding what types of applications it will be used to run.
Yes, also lightly-threaded vs. heavily-threaded, I/O-heavy vs. compute-heavy, integer vs. floating point, scalar vs. vector, strong locality vs. weak. The points about threading & locality impact decisions about cache size, memory bandwidth, and NUMA topology.

Every desktop PC CPU I've ever purchased from Intel in the last 8 years (or so) has an integrated graphics processor that has never been used.
If you buy a KF-series CPU, the iGPU is disabled (presumably, some of them are defective), which gives you a tiny bit more clock-speed budget in the CPU cores. They also tend to be a little bit cheaper, but mostly to an extent that only OEMs would care about.

I assume almost all of the Intel HEDT CPUs with integrated graphics are never used. Why does Intel sell HEDT CPUs with integrated graphics?
The only Xeon models with an iGPU are the ones that are essentially the same as mainstream processors, but with a few "pro" features enabled. And not even all of those have it! Check all of them in Intel's ARK database, if you want, and you'll see they all have a mainstream socket like LGA1200 or LGA1700.
What most people mean by HEDT are the CPUs with a non-mainstream socket, and those never had an iGPU. You can see this in Intel's online database entries for them. If you adjust the ARK query to exclude processors with graphics, you'll find nearly all of the hits use an LGA2066, LGA3647, or LGA4189 socket.

based on my understanding of the issue, low power cores do not belong on Intel's HEDT CPUs - just like integrated graphics do not belong on Intel's HEDT CPUs.
Well, the E-cores aren't currently offered (or even about to be) in a HEDT CPU, so maybe that puts your mind at ease. However, I think there's an argument for doing it (if we overlook the lack of AVX-512), because the E-cores scale up better and HEDT is mostly about heavily-threaded applications.

As for the lack of iGPUs, that's mostly because the HEDT products started out as an offshoot of the server CPUs, which don't have an iGPU. Also, the cost of adding a dGPU is not a big deal for HEDT users, so Intel would rather use the entire CPU die for compute.

As an FYI... I'm using terms like RISC and HEDT very loosely to convey ideas in an inexact but efficient way. I don't really want to engage in pedantic discussions of the exact meaning of RISC and HEDT.
We need common definitions, if we're to have a meaningful discussion. I don't know what you're using loosely, or how loosely. As for RISC, you could just delete it from your previous post with no detriment. It wasn't adding anything, which made it seem all the more confusing.

As for HEDT, that's a market segment generally defined by socket, memory channels, and core count. The socket is bigger than a mainstream socket, the core counts tend to be higher, there are usually at least 4 memory channels (enabling at least 2x the amount of RAM, as well as more bandwidth) and they have more I/O lanes, as well. They also cost a lot more, usually starting near $1000 and going as high as about $5000 or so.
 
  • Like
Reactions: truerock

truerock

Distinguished
Jul 28, 2006
329
48
18,820
The only Xeon models with an iGPU are the ones that are essentially the same as mainstream processors, but with a few "pro" features enabled. And not even all of those have it! Check all of them in Intel's ARK database, if you want, and you'll see they all have a mainstream socket like LGA1200 or LGA1700.
Thank you for bringing up Xeon. I normally do not follow Xeon CPUs (although I once had a job delivering high-end Xeon workstations to scientists).

I read a Tom's article recently that covered the latest Intel Xeon CPUs and chipsets, and I had this odd thought that they were offering specs more in line with what I like than the Core CPUs and chipsets. I sort of just thought it was an unrealistic, over-the-top idea.

But your feedback has pushed me toward thinking more seriously about making Xeon my next PC build. Is that a realistic idea: to use Xeon for a home PC build where cost is not an important factor? I've got some serious googling to do on that idea.

Thanks - I really have enjoyed your feedback.
 

truerock

Distinguished
Jul 28, 2006
329
48
18,820
I have had time to do some reading on Intel Xeon processors today.

I want to thank you again for directing my attention to Xeon.

Understanding Xeon helps me better understand Intel's overall CPU strategy.

I think there is a possibility I will be replacing my 10-year-old Windows 10, Intel Ivy Bridge PC with a Windows 11, Intel Xeon "workstation". It will be interesting to see what Xeons Intel delivers next year and whether Xeon will be a good platform for Windows 11.