News Raptor Lake Refresh CPUs Reportedly Launch In October

From what little I've read about Raptor Lake, it didn't address some of the main weak spots in Alder Lake. For instance:

I hope (but I'm not optimistic) that the Refresh is more than a mere clock-boosting exercise, and that they actually tackle some of the bottlenecks not addressed in Raptor Lake.
 
I just hope Intel can bring support for the Digital Linear Voltage Regulator (DLVR), if need be. It was a power-delivery mechanism meant to be featured in Intel's 13th Gen Raptor Lake CPUs, but it was never enabled, I suppose.

Will it be worth adding to the REFRESH family, though? According to the Intel patent application that revealed these plans, it could save as much as 25% power.

The approach was supposed to be less complex while allowing cost-efficient control of power levels and better heat dissipation, with the outcome being an efficiency and power-management improvement of around 20%.

They claim D-LVR "drastically increases the CPU performance" at just a small added silicon cost and with some easy tuning. This is in part because, by placing a D-LVR in parallel with a primary VR, processor cores draw less power, and the effective voltage the CPU operates at is thereby lowered.
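To put rough numbers on why a lower effective voltage saves so much: dynamic CPU power scales roughly with the square of voltage at a fixed frequency. A quick back-of-the-envelope sketch (the voltages below are made-up illustrative figures, not measured DLVR numbers):

```python
# Rough sketch: dynamic power scales ~ C * V^2 * f, so at a fixed
# frequency even a modest cut in effective core voltage saves power
# quadratically. All voltages here are illustrative assumptions,
# not actual DLVR figures.

def dynamic_power_ratio(v_new, v_old):
    """Ratio of dynamic power after/before a voltage change (same frequency)."""
    return (v_new / v_old) ** 2

v_old = 1.30  # assumed effective core voltage without DLVR (volts)
v_new = 1.13  # assumed effective voltage with DLVR trimming the guardband

saving = 1 - dynamic_power_ratio(v_new, v_old)
print(f"~{saving:.0%} dynamic power saved")  # ~24% for these made-up voltages
```

With those assumed voltages a ~13% voltage cut already lands in the ballpark of the ~25% savings the patent application talks about.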

FWIW, ASUS actually had this option on its ROG Z790/Z690 motherboard series, only because they anticipated that future CPUs might utilize DLVR. The BIOS option was labeled "CPU DLVR Bypass Mode Enable", but it did not seem to bring any improvement, lol.

Two other board vendors also said that DLVR was indeed on its way to the desktop segment, before being cut/fused off early in the development of the Raptor Lake series.

But I presume this tech would be more beneficial for MOBILE chips instead, and that's where we might see it launch first before coming to the desktop space - maybe the MTL mobile series?




Post edited to fix several typos!
 
Apart from that, I don't expect "miracles" from this so-called RPL-Refresh family, since the architecture remains the same. Guesstimate:
  • Same architecture as Raptor Lake (Raptor Cove P-cores + Gracemont E-cores)
  • Same process node as Raptor Lake (Intel 7, aka 10nm++)
  • Higher clock speeds, maybe beyond 6.0/6.2 GHz? Power-hungry chips cometh soon?
  • Support for faster DDR5 memory DIMMs
  • Much, much higher power consumption (at minimum 300-350W, or maybe more?)
  • Compatibility with existing LGA 1700/1800 socket motherboards, which is obvious.
 
I'm rocking the 12700T here...
I will wait for a new 14500T FAST edition.

A low-power CPU with an Nvidia 128-bit edition would be great! (PS: the Wii U needs 34W to work; that 12700T uses 32W to play at 30 fps 1080p on the UHD 770.)
 
I just hope Intel can bring support for the Digital Linear Voltage Regulator (DLVR), if need be. It was a power-delivery mechanism meant to be featured in Intel's 13th Gen Raptor Lake CPUs, but it was never enabled, I suppose.

Will it be worth adding to the REFRESH family, though? According to the Intel patent application that revealed these plans, it could save as much as 25% power.

The approach was supposed to be less complex while allowing cost-efficient control of power levels and better heat dissipation, with the outcome being an efficiency and power-management improvement of around 20%.
They will first milk every bit of power the industry will tolerate out of their CPUs before implementing that, so it makes the biggest impact.
Apart from that, I don't expect "miracles" from this so-called RPL-Refresh family, since the architecture remains the same. Guesstimate:
  • Same architecture as Raptor Lake (Raptor Cove P-cores + Gracemont E-cores)
  • Same process node as Raptor Lake (Intel 7, aka 10nm++)
  • Higher clock speeds, maybe beyond 6.0/6.2 GHz? Power-hungry chips cometh soon?
  • Support for faster DDR5 memory DIMMs
  • Much, much higher power consumption (at minimum 300-350W, or maybe more?)
  • Compatibility with existing LGA 1700/1800 socket motherboards, which is obvious.
Why are you calling it so-called??
That's what every refresh has been in recent years: better bins due to better yields, resulting in better clocks and not much of anything more.
The 13900K and KS already use 300-350W, depending on how much cooling you have and on the mobo's default settings. If you overclock, you can exceed 400W on good "normal" (non-exotic) cooling with both the K and the KS.
Unless reviewers start using the warranted max turbo power as a limit, every CPU will just reach whatever limit the system's cooling imposes.
 
From what little I've read about Raptor Lake, it didn't address some of the main weak spots in Alder Lake. For instance:

I hope (but I'm not optimistic) that the Refresh is more than a mere clock-boosting exercise, and that they actually tackle some of the bottlenecks not addressed in Raptor Lake.
I'm at work all day so I can't post a HWiNFO screenshot until tonight, but I'm seeing my ring run at the BIOS auto 5.0 GHz, with E-cores active or not, on my 13900KF. P-to-E-core latency has also dropped into the P-to-P-core range.
Both the P- and E-cores on RPL run significantly faster, or at the same speed with less power.
I also have a 13600K in a living-room ITX build, and all of the P-cores (on the i5 and the i9) can do 5.5 GHz while all of the E-cores do 4.4 GHz - at the same time, at stock volts. My 12700K could barely do 5.0/3.8 all-core with extra volts. RPL was a big step up all around, but I don't think it's reasonable to expect the same from the refresh.

Maybe +200 MHz sounds more reasonable.
 
Why are you calling it so-called??
That's what every refresh has been in recent years: better bins due to better yields, resulting in better clocks and not much of anything more.
The 13900K and KS already use 300-350W, depending on how much cooling you have and on the mobo's default settings. If you overclock, you can exceed 400W on good "normal" (non-exotic) cooling with both the K and the KS.
Unless reviewers start using the warranted max turbo power as a limit, every CPU will just reach whatever limit the system's cooling imposes.

Yes, I know that.

Don't get me wrong though, since I'm NOT saying any refresh CPU is a bad release, be it AMD or Intel, but at what cost would we be getting slightly higher performance? And is the gain in performance really worth the extra price and the higher clocks/TDP?

Don't tell me these refresh SKUs will have the exact same power consumption. I mean, they can be considered to have "normal" operating temps, but still nowhere close to the original RPL chips, imo. I have not read the K and KS chip reviews yet, though. Will do it in a jiffy.

What I'm trying to say is that there are no "major" architectural changes in any CPU refresh, apart from Intel using some better-binned chips. Unlike the Meteor Lake series, which will be featuring a full architecture overhaul.

PS:

Since I'm not getting ANY notifications here on the forums, nor via email, I'm now revisiting all the previous threads in which I posted replies to check for updates, lol. Kind of frustrating, though.

I will post more details later on.
 
but at what cost would we be getting slightly higher performance? And is the gain in performance really worth the extra price and the higher clocks/TDP?
Again, the cost of full-blown overclocking is never a good bargain, and sadly that's the only thing people look at in benchmarks.
Raptor Lake had a 40% increase in efficiency at the same power, and nobody reported on that - or it got completely lost in the headlines of 'OMG molten lava'.
Intel's claims:
Similar performance at ~25% of the power.

Actual numbers:
The 13900K is at 117% at 65W and the 12900K is at 121% at 241W.

Multi-core performance per TDP class, normalized to the Core i7-13700K at 65 W (100 %).

TDP classes tested: 45 W (69 %) · 65 W (100 %) · 88 W (135 %) · 125 W (192 %) · 142 W (218 %) · 181 W (278 %) · 230 W (354 %) · 241/253 W (385 %) · Unlimited

CPU (cores): results in ascending order of TDP class; each CPU was run only at the classes applicable to it:
Core i9-13900K (8+16): 92 % · 117 % · 135 % · 153 % · 159 % · 184 % · 186 %
Core i7-13700K (8+8): 80 % · 100 % · 114 % · 128 % · 133 % · 145 %
Core i5-13600K (6+8): 79 % · 94 % · 105 % · 114 % · 115 % · 117 %
Core i9-12900K (8+8): 86 % · 98 % · 113 % · 121 % · 125 %
Ryzen 9 7950X (16): 84 % · 130 % · 154 % · 180 % · 189 %
Ryzen 9 7900X (12): 83 % · 114 % · 131 % · 146 % · 149 %
Ryzen 7 7700X (8): 82 % · 98 % · 104 % · 108 %
Ryzen 5 7600X (6): 69 % · 79 % · 83 % · 83 %
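As a quick sanity check of Intel's "similar performance at ~25% of the power" claim, just doing the arithmetic on the two data points quoted above:

```python
# Comparing the two quoted data points: 13900K at 65 W scoring 117%,
# 12900K at 241 W scoring 121% (both normalized to a 13700K at 65 W).

p_13900k, score_13900k = 65, 117
p_12900k, score_12900k = 241, 121

power_fraction = p_13900k / p_12900k          # ~0.27, close to Intel's ~25%
perf_fraction = score_13900k / score_12900k   # ~0.97, i.e. "similar performance"

print(f"power: {power_fraction:.0%} of the 12900K's")  # power: 27% of the 12900K's
print(f"performance: {perf_fraction:.0%}")             # performance: 97%
```

So 97% of the performance at 27% of the power - roughly in line with the marketing slide.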
 
Intel's claims:
Similar performance at ~25% of the power.

Actual numbers:
The 13900K is at 117% at 65W and the 12900K is at 121% at 241W.
A lot of that is from doubling the number of E-cores, and yet they get so much hate.

Much of the rest of its additional multithreaded performance comes from beefing up the caches and increasing DDR5 speeds. Trying to feed that many threads over a 128-bit DDR5 bus is obviously going to bottleneck.
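To put rough numbers on the bandwidth point (DDR5-5600 and a fully loaded 32-thread chip are assumptions chosen for the example):

```python
# Back-of-the-envelope: a 128-bit (dual-channel) DDR5 bus transfers
# 16 bytes per beat, so peak bandwidth = data rate * 16 bytes, shared
# by every active thread. Numbers are illustrative assumptions.

transfers_per_sec = 5600e6   # assumed DDR5-5600
bus_bytes = 128 // 8         # 128-bit bus -> 16 bytes per transfer
threads = 32                 # e.g. a 13900K running all 32 threads

peak_gbs = transfers_per_sec * bus_bytes / 1e9
print(f"peak: {peak_gbs:.1f} GB/s")                  # peak: 89.6 GB/s
print(f"per thread: {peak_gbs / threads:.1f} GB/s")  # per thread: 2.8 GB/s
```

Under 3 GB/s per thread at full load, and that's the theoretical peak before any contention - which is why cache and memory-speed bumps matter so much here.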
 
A lot of that is from doubling the number of e-cores,
I would love to see newer and just more numbers, but with the little data we have, we have to say that it's despite doubling the number of E-cores, since the E-cores are about 20-30% less energy-efficient in multithreaded workloads - at least in Cinebench.
 
I would love to see newer and just more numbers, but with the little data we have, we have to say that it's despite doubling the number of E-cores, since the E-cores are about 20-30% less energy-efficient in multithreaded workloads - at least in Cinebench.
That's measuring system power over the duration of a fixed workload. So it makes some sense why the P-cores have an advantage: the sooner you complete the task, the sooner you stop measuring energy usage - not just by the cores, but by the rest of the system too. That doesn't negate what I said.
 
Yes, that's what efficiency means.
Not really, because "system power" includes things like RAM, dGPU, SSD, etc. If you simply add a dGPU with high idle power, like an A770, it will bias the results toward the faster CPU or cores, without that having anything to do with the CPU or the workload. The same goes for using 4 DIMMs instead of 2, using a more power-hungry SSD, etc.

The best way to measure this is simply to look at package power, not system power.
 
Not really, because "system power" includes things like RAM, dGPU, SSD, etc. If you simply add a dGPU with high idle power, like an A770, it will bias the results toward the faster CPU or cores, without that having anything to do with the CPU or the workload. The same goes for using 4 DIMMs instead of 2, using a more power-hungry SSD, etc.

The best way to measure this is simply to look at package power, not system power.
These aren't benches from two different systems.
Everything is the same except which cores run the workload.
The same system uses 20-30% more power when only the E-cores are running.
Since the only thing that changes is the cores, all of the difference is because of the cores.
 
These aren't benches from two different systems.
Everything is the same except which cores run the workload.
Yes, I understand that. Depending on how much power the rest of the system is burning, you can bias such benchmarks more in favor of the E-cores or the P-cores.

The same system uses 20-30% more power when only the E-cores are running.
No, that's misleading. What you mean is that, when using only the E-cores, the energy used by the system over the fixed workload is that much greater. That's different from saying the system draws more power when only the E-cores are running - it certainly doesn't.
 
Yes, I understand that. Depending on how much power the rest of the system is burning, you can bias such benchmarks more in favor of the E-cores or P-cores.
How do you change the amount of power the rest of the system is burning, to create bias, if you don't change anything in the system?
Again, it's not different systems....
The rest of the system will use the same power; the only difference is the power that the cores draw, and for how long.
We are not arguing about E-cores in general here, like E-cores on a Pi equivalent or something.
No, that's misleading. What you mean is that, when using only the E-cores, the energy used by the system over the fixed workload is that much greater. That's different from saying the system draws more power when only the E-cores are running - it certainly doesn't.
So the energy used by the system over the fixed workload is that much greater, but that's not because the system - the E-cores, basically - is using more power, due to needing more time to finish that fixed workload?

The point is the same.
Either the E-cores make the rest of the system run for longer, using up more energy overall in the end,
or the efficiency of the E-cores is worse than the efficiency of the P-cores.

Either way, the E-cores cause more energy to be used to finish a fixed workload - at least in a full system that an average person would build around an xx900K - so it's still despite, and not because.
 
How do you change the amount of power the rest of the system is burning, to create bias, if you don't change anything in the system?
Again, it's not different systems....
The way it works is: if your system idle power is 100 W, then a configuration where the CPU adds another 10 W draws 110 W for the duration of the test. If the CPU uses 120 W instead, the system draws 220 W for the duration of the test. All the higher-power config has to do is complete the test in half the time or less, and the total Wh used will be no higher than the lower-power run's.

On the other hand, if your system idles at 50 W, then the two runs will average 60 W and 170 W, respectively. In this case, the higher-power option needs to be at least 2.83x as fast to break even with the lower-power option.

So, using system power makes this test very sensitive to the system configuration and its idle power draw. A higher-spec system (better GPU, more RAM, faster SSD) will make the P-cores seem more efficient than they are, because they "turn off the taps" on that high idle draw sooner.
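The scenario above can be sketched directly - the idle and CPU power figures are the illustrative ones from this post, not measurements:

```python
# Energy (Wh) for a fixed workload at two CPU power levels, showing how
# system idle power biases "system energy" comparisons. All figures are
# the illustrative ones used in the post above.

def system_energy(idle_w, cpu_w, hours):
    """Total system energy in Wh for a run of the given duration."""
    return (idle_w + cpu_w) * hours

# 100 W idle: the 120 W config breaks even at exactly half the runtime.
slow = system_energy(idle_w=100, cpu_w=10, hours=1.0)   # 110 Wh
fast = system_energy(idle_w=100, cpu_w=120, hours=0.5)  # 110 Wh

# 50 W idle: the faster config now needs to be ~2.83x as fast to break even.
slow_50 = system_energy(idle_w=50, cpu_w=10, hours=1.0)        # 60 Wh
fast_50 = system_energy(idle_w=50, cpu_w=120, hours=1 / 2.83)  # ~60 Wh

print(slow, fast)                   # 110.0 110.0
print(slow_50, round(fast_50, 1))   # 60.0 60.1
```

Same cores, same workload - only the idle floor changed, and the break-even speedup jumped from 2x to nearly 3x.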

Either the E-cores make the rest of the system run for longer, using up more energy overall in the end,
or the efficiency of the E-cores is worse than the efficiency of the P-cores.
The test is artificial, because in the real world nobody uses just the E-cores. The way people use them is to complement the P-cores and make the total workload time even shorter. That's why the run with all 16 cores used the least energy.