News: Intel Raptor Lake Refresh, Arrow Lake CPU Performance Projections Leaked

The trio of processors appear to share the same core configuration: 8+16+1
What's the last "+1"? Since it's in the leaked slides, it doesn't seem to be a typo.

The leaked charts reveal that Arrow Lake could be anywhere from 3% to 21% faster than Raptor Lake, with significant performance increases in multi-core operations.
Yes, and it's worth noting exactly where the results at the lower end of that range come from:
  • SPEC2017 int @ 1 thread: 4% to 8%
  • SPEC2017 fp @ 1 thread: 3% to 6%

So, that tells us the IPC gains should probably be less than that, because it'll presumably clock higher. Or, maybe it clocks lower, but with a bit more IPC than that. Either way, it's looking like IPC gains of Lion Cove (Arrow Lake's P-core) should be fairly modest.
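To put rough numbers on that (the clock deltas here are made up, just to illustrate the relationship performance = IPC × frequency):

```python
# Split a single-threaded performance gain into frequency and IPC components,
# using perf = IPC * frequency. The clock deltas are hypothetical, not leaked.
def implied_ipc_gain(perf_gain, freq_gain):
    return (1 + perf_gain) / (1 + freq_gain) - 1

# A 6% SPECint gain with a hypothetical 3% higher clock implies only ~2.9% more IPC,
# while the same gain with a 3% *lower* clock would imply ~9.3% more IPC.
print(f"{implied_ipc_gain(0.06,  0.03):.1%}")  # 2.9%
print(f"{implied_ipc_gain(0.06, -0.03):.1%}")  # 9.3%
```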

I think the interconnect and cache architecture were really holding back the E-cores on Raptor Lake. So, presumably, improvements there are behind some of the multithreaded gains. Another contributing factor could be the Skymont E-cores delivering a bigger generational improvement over Gracemont than Lion Cove did over Raptor Cove. That wouldn't show up in the single-threaded results above, since those tests should be running on a P-core.
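A crude throughput model shows why a big E-core uplift would move the multi-threaded totals without touching the single-threaded numbers (the per-core figures below are purely invented for illustration, not from the leak):

```python
# Toy hybrid-CPU model: aggregate throughput = P-cores * P-perf + E-cores * E-perf.
# (Ignores SMT and clock/power interactions; all per-core numbers are invented.)
def mt_throughput(p_cores, p_perf, e_cores, e_perf):
    return p_cores * p_perf + e_cores * e_perf

raptor = mt_throughput(8, 1.00, 16, 0.55)          # baseline, arbitrary units
arrow  = mt_throughput(8, 1.05, 16, 0.55 * 1.30)   # hypothetical +5% P-core, +30% E-core
print(f"ST gain: 5%, MT gain: {arrow / raptor - 1:.0%}")  # MT gain ~18%
```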


...and if we turn our attention to those same SPEC2017 results on Raptor Refresh, it's telling that they scale almost identically between the single-threaded and multi-threaded cases. That suggests Raptor Refresh is almost purely a frequency-scaling exercise.
 
What's the last "+1"? Since it's in the leaked slides, it doesn't seem to be a typo.
I came here to ask the same question. What's the +1 on the 8+16+1?
It's idiotic to count it with the normal cores though...
The impending doom comes from the Intel Management Engine (ME), a subsystem that uses a 32-bit Argonaut RISC Core (ARC) microprocessor that's physically located inside the x86 chipset.
 
What's the last "+1"? Since it's in the leaked slides, it doesn't seem to be a typo.

It refers to the "Graphics Tile" (GT), which contains the first-iteration Xe1 Alchemist iGPU. So they are referring to it as GT1, where the 1 denotes the Xe1 Alchemist GPU.
  • Xe1 - Alchemist GPU
  • Xe2 - Battlemage GPU
  • Xe3 - Celestial GPU
They shouldn't have mentioned it with the core count config though. This is just a temporary way to represent it in this preliminary slide from Intel.
 
Intel desktop has such a long tail compared to mobile for updates once again: we were on Skylake to Comet Lake forever, got a sidegrade in Rocket Lake, and we are on 2-3 generations of stagnation from Alder to Raptor Lake. Gluing their chips together😀 should fix that, so hopefully we see a better cadence of actual desktop CPU updates. I guess it is unrealistic to expect the past to repeat, but Intel used to be on track with tick-tock designs (Conroe/Core all the way to Skylake, pumping out innovation and improvements on an 18-month or so cadence); now the release cadence is very muddy. 5 nodes in 4 years, sure... but what does that actually translate to for the consumer market?

Mobile has been a tad bit better, although the chips themselves are underwhelming: Ice Lake, Tiger Lake... We'll see about Meteor Lake. Rumors suggest it will follow the prior two as a mobile-only release; maybe we see a few embedded desktop designs.
 
Pretty meh, IMHO. But that's to be expected when beating a 'dead horse' like Alder/Raptor Lake, as it's clear Intel needs to move on to a new architecture at this point. Though, to be fair, it was impressive at launch. Just not so much now for a 'new' chip.
 
1851 pins on the next socket up. At this rate, we're one more socket generation away from mainstream platforms out-pinning quad-channel HEDT platforms from a decade ago, without the extra PCIe and DRAM channels.
 
That would seem to be the next step, but I doubt it can be justified as long as mobile and desktop parts share an architecture.

AMD could easily do it, but it would get confusing as I think any single CCD chip would be limited to dual channel. And then you would either raise the minimum motherboard price or have to limit quad channel to X boards or something.
 
Given that Intel lies at product launches, imagine how much more they would lie in rumors.

Anyway, I'm done with them and glad that AMD is trashing them left and right.
 
AMD could easily do it, but it would get confusing as I think any single CCD chip would be limited to dual channel. And then you would either raise the minimum motherboard price or have to limit quad channel to X boards or something.
Dual-channel DDR4 was enough for earlier 16-core CPUs; dual-channel DDR5 at now more affordable 6400+MT/s speeds combined with larger caches is likely going to be good enough until mainstream platforms hit 32 cores. I'm not foreseeing a need for quad-channel to become mainstream any time soon, and cost concerns make this idea a no-go.
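As a rough sanity check on that (peak theoretical numbers, assuming the full 128-bit bus is usable):

```python
# Peak theoretical bandwidth for a dual-channel (128-bit) DDR5 setup, divided per core.
def peak_bandwidth_gbps(mts, bus_width_bits=128):
    return mts * bus_width_bits / 8 / 1000   # transfers/s * bytes/transfer -> GB/s

bw = peak_bandwidth_gbps(6400)               # ~102 GB/s for dual-channel DDR5-6400
for cores in (16, 24, 32):
    print(f"{cores} cores: {bw / cores:.1f} GB/s per core")
```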

My money is on on-package DRAM becoming the norm for primary system memory before mainstream needs more than dual-channel bandwidth, especially if back-side power really does become a thing: once you can put millions of TSVs into chips for power, 2000 more for direct-stacked HBM-like memory becomes trivial.
 
I still kind of want a GDDR motherboard or HBM. Intel's brief cooperation with AMD on NUCs was promising. Wouldn't have minded a laptop with that chip in it.

I understand you can get the crippled PS5/Xbox CPUs on a motherboard with GDDR, but the CPUs aren't that impressive for desktop use.
 
we are on 2-3 generations of stagnation from Alder to Raptor Lake. Gluing their chips together😀 should fix that, so hopefully we see a better cadence of actual desktop CPU updates.
Except the evidence is to the contrary. Their first "glued-together" chip was Meteor Lake, and that was supposed to be the Gen 14 desktop CPU - not Raptor Refresh.

Intel used to be on track with tick-tock designs (Conroe/Core all the way to Skylake, pumping out innovation and improvements on an 18-month or so cadence);
You conveniently forgot about how we got a Haswell Refresh instead of Broadwell on the desktop.

now the release cadence is very muddy. 5 nodes in 4 years, sure... but what does that actually translate to for the consumer market?
I wonder how much of it is simply a matter of naming. Like, in the old days, would Intel 3 actually just have been called Intel 4+? And would 18A just have been 20A+?
 
Need triple or quad channel :)
Technically, 128-bit DDR5 already is quad-channel, since each DDR5 DIMM splits its bus into two independent 32-bit channels.
; )

AMD could easily do it, but it would get confusing as I think any single CCD chip would be limited to dual channel. And then you would either raise the minimum motherboard price or have to limit quad channel to X boards or something.
Why do you think that? The single CCD <-> IOD bandwidth is well more than enough to support 256-bit DDR5, if there were a good reason to.
 
1851 pins on the next socket up. At this rate, we're one more socket generation away from mainstream platforms out-pinning quad-channel HEDT platforms from a decade ago, without the extra PCIe and DRAM channels.
Intel did actually add some PCIe lanes relative to Sandy Bridge. They're up to 28 CPU lanes (including DMI), 16 of which are PCIe 5.0 and the rest are 4.0. It's actually not bad. The low point was Haswell/Skylake, which had only 20 CPU lanes (including DMI).

dual-channel DDR5 at now more affordable 6400+MT/s speeds combined with larger caches is likely going to be good enough until mainstream platforms hit 32 cores.
In the recent TechPowerUp interview with AMD's VP of client computing, he said they haven't moved past 16 cores for memory bandwidth reasons.


That same logic is clearly at play in their decision to go up to 12-channel for the 96-core EPYC Genoa. It's the same ratio: 8 cores per memory channel.
 
1851 pins on the next socket up. At this rate, we're one more socket generation away from mainstream platforms out-pinning quad-channel HEDT platforms from a decade ago, without the extra PCIe and DRAM channels.
Now you understand why I'm an advocate for OMI & a serialized connection to the memory controller.

You'd take back a LOT of that pin count and use it for PCIe and other stuff.
 
In the recent TechPowerUp interview with AMD's VP of client computing, he said they haven't moved past 16 cores for memory bandwidth reasons.
Yeah, it definitely has nothing to do with the fact that 16 cores already need 230W and can't be cooled with anything less than the biggest AIOs, or that they wouldn't be able to ask twice the price of a 16-core CPU for it, especially since they already have to sell the 7950X at a discount.

Memory bandwidth is a factor for sure, but it's neither the only one nor even the biggest one. Even destroying their server business by releasing a desktop-priced alternative is a bigger reason than memory.
 
And I'm a fan of putting enough on-package memory that we can do away with external interfaces dedicated to memory altogether, even more pins saved.
At that point, you'd be forced to buy memory from the CPU/SoC vendor, who might upcharge on the price of DRAM.

Look at Apple, nVIDIA, MS, Intel, etc.

Whenever they have DRAM baked into their products, they upcharge the price by a LOT.

WAY more than the actual cost of the DRAM.

Versus letting the consumer buy from the market at a reasonable cost by purchasing DIMMs.

But hey, I know you mean well and all, but I don't think they would ever sell you on-package DRAM at a reasonable price, with the margins the average consumer thinks are fair, which is barely anything over the BoM cost.

And you see how the market is reacting; they're not going to accept a huge upcharge on DRAM prices.
 
Why do you think that? The single CCD <-> IOD bandwidth is well more than enough to support 256-bit DDR5, if there were a good reason to.
I think I had a misunderstanding of how the CCDs connect to the I/O die. From the block diagrams, yes, it would just be a matter of changing out the I/O die and making a compatible motherboard.
 
I still kind of want a GDDR motherboard or HBM. Intel's brief cooperation with AMD on NUCs was promising. Wouldn't have minded a laptop with that chip in it.

I understand you can get the crippled PS5/Xbox CPUs on a motherboard with GDDR, but the CPUs aren't that impressive for desktop use.

I don't think that you really do. GDDR is bandwidth-optimized at the cost of latency. CPUs care much more about latency, whereas GPUs can mask high latency to an extent.

Just look at the 4700S review. The latency was much higher, and if you look at the benchmarks in the following pages, it doesn't bode well.

Except the evidence is to the contrary. Their first "glued-together" chip was Meteor Lake, and that was supposed to be the Gen 14 desktop CPU - not Raptor Refresh.


You conveniently forgot about how we got a Haswell Refresh instead of Broadwell on the desktop.


I wonder how much of it is simply a matter of naming. Like, in the old days, would Intel 3 actually just have been called Intel 4+? And would 18A just have been 20A+?

There were a few nearly non-existent desktop Broadwells. Only two were even LGA, IIRC, and they were quickly replaced by Skylake.
 
Yeah, it definitely has nothing to do with the fact that 16 cores already need 230W and can't be cooled with anything less than the biggest AIOs
Correct. Those are not the actual limitations. It's just like how you can burn 1900W with a Xeon W, if you're willing to clock it high enough:


If they had more cores, and could feed them with enough memory bandwidth, they could simply shave a couple hundred MHz off the all-core clock speed and it'd be fine.
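To ballpark how much headroom that buys (dynamic power scales roughly with f * V^2, and voltage has to climb toward the top of the V/F curve; the voltage/frequency pairs below are made-up examples, not measured figures):

```python
# Illustrative only: relative dynamic power ~ (f/f0) * (V/V0)^2.
# The voltage/frequency pairs are invented to show the shape of the curve.
def rel_dynamic_power(freq_ghz, volts, base_freq=5.2, base_volts=1.30):
    return (freq_ghz / base_freq) * (volts / base_volts) ** 2

print(f"{rel_dynamic_power(5.0, 1.22):.0%} of baseline")  # ~200 MHz lower -> ~85%
print(f"{rel_dynamic_power(4.8, 1.15):.0%} of baseline")  # ~400 MHz lower -> ~72%
```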

As for the AIOs part of your comment, you know very well that the power isn't the issue, but rather their heatspreader. You can still cool it fine with a lesser heatsink and not lose much.

Memory bandwidth is a factor for sure, but it's neither the only one nor even the biggest one.
Yes, actually it is. DDR5 is the main reason why the 7950X is 45.5% faster than the 5950X, at multithreaded workloads:

[Chart: Ryzen 9 7950X vs. Ryzen 9 5950X multithreaded performance comparison]

Source: https://www.tomshardware.com/reviews/amd-ryzen-9-7950x-ryzen-5-7600x-cpu-review/7

Since you like Intel so much, perhaps you'll find the DDR5 advantage on Alder Lake more persuasive:

That's a 31.3% and 37.4% advantage for DDR5 on multithreaded int and float workloads, respectively. Yes, the DDR4 is just running at stock speeds, but my point is merely to show how bandwidth-hungry these CPUs get when all the cores & threads are really cranking. And that CPU has only 24 threads.
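If both were at Alder Lake's officially supported stock speeds (DDR4-3200 vs. DDR5-4800, 128-bit bus either way), the raw bandwidth gap alone would be about 50%; this is a hypothetical comparison, not necessarily the exact tested configuration:

```python
# Peak theoretical bandwidth at Alder Lake's officially supported stock memory speeds,
# assuming a 128-bit (dual-channel) bus in both cases.
def peak_bandwidth_gbps(mts, bus_width_bits=128):
    return mts * bus_width_bits / 8 / 1000

ddr4 = peak_bandwidth_gbps(3200)   # ~51 GB/s
ddr5 = peak_bandwidth_gbps(4800)   # ~77 GB/s
print(f"DDR5 stock bandwidth advantage: {ddr5 / ddr4 - 1:.0%}")   # ~50%
```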
 
PL1/PL2 locked at only 250 watts, for marginal gains in synthetic tests. Forget 420 AIOs, these are gonna need 280x280 AIOs with outdoor fountain pumps to drive the coolant through the loop for fully unlocked (PL3?) TVB (thermonuclear velocity boost). Remember the 10980XE reviews that showed socket draw at 500+ watts? This is the new standard from our very environmentally conscientious friends at Intel.

My 5950X literally refuses to break 160-170 watts, yet outperforms any Intel platform I've used in the past five years by several orders of magnitude. And my 7950 doesn't want to go further than 220 watts, and is yet another order of magnitude more powerful. I hate sounding like a flag waver, but AMD is just crushing it on the CPU front the past few years. The GPU front... not so much.

Because I don't do stupid things like cramming all the guts in a tower and then closing it like a tin can, and I know how to actually route cables, the 4090 (MSI Suprim Liquid X) is just absolutely crushing it in like EVERYTHING. AI workloads on my 3080 Ti would be like blink..blink..blink..blink..blink..blink..blink..blink..finished. The 4090 is like "wut work load?" and smashes through AI workloads at least four orders of magnitude more rapidly. And it doesn't get hot at all, because I don't have temper tantrums about fan noise. The most torturous freely available RT loop I found was the Bright Memory benchmark torture loop. After a half hour of sucking 600 watts plus overhead (UPS reported total system draw of almost 900 watts), the card never went over 70C, and that's for three simple reasons:
1: Replaced the sissy "quiet" fans with Noctua industrial iPPC 3000 RPM 120mm fans (feel the hurricane) and manually ran cooling full tilt.
2: Have one of those backplate riders for cooling the stray thermals from the components under the plate.
3: Mounted the radiator OUTSIDE and to the side of the front of the frame (not even a true case anymore, more a bare metal frame with no panels or any other air obstructions), so no sucking recycled, crammed-in, closed air.