News Intel Core i9-14900K, i7-14700K and i5-14600K Review: Raptor Lake Refresh


bit_user

Titan
Ambassador
We'll need Meteor Lake or Arrow Lake to hopefully improve those aspects. Those future CPUs could actually be faster and more efficient than RPL-R, rather than just incrementally faster... or maybe just slightly faster but half the power? I guess we'll see when those launch.
From what I can find, Intel has only posted efficiency projections for Intel 4 and Intel 3, but there's no path to 2x in sight.

[Intel Accelerated briefing slide: process node performance/efficiency projections]

Source: https://www.anandtech.com/show/1744...il-2x-density-scaling-20-improved-performance
Based on that, we can work out that the same design @ ISO-performance would use 70% of the power on Intel 3 vs. Intel 7. Although that sounds like we're getting close to your "half-power" figure, Intel 20A would need a 41% perf/W jump to get there. And don't forget, that's @ ISO-performance. If you want more performance, some of those power-savings will go out the window!
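To show the arithmetic (the ~20% and ~18% per-node perf/W gains are my reading of those Accelerated slides, so treat them as assumptions rather than exact figures):

```python
# Back-of-the-envelope check of the ISO-performance power math above.
# The per-node perf/W gains are assumptions from my reading of Intel's
# Accelerated slides, not measurements of any single design.
intel4_gain = 1.20  # perf/W, Intel 4 vs. Intel 7 (~20%, assumed)
intel3_gain = 1.18  # perf/W, Intel 3 vs. Intel 4 (~18%, assumed)

# At ISO-performance, power scales as the inverse of the perf/W gain.
power_on_intel3 = 1 / (intel4_gain * intel3_gain)
print(f"Power on Intel 3 vs. Intel 7 @ ISO-perf: {power_on_intel3:.1%}")  # ~70%

# Extra perf/W that Intel 20A would need, to reach half the Intel 7 power:
needed_20a_gain = power_on_intel3 / 0.50
print(f"Additional perf/W needed at 20A: {needed_20a_gain - 1:.1%}")  # ~41%
```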

However, a leak claims Arrow Lake will reduce PL2 to 177 W. So, you should indeed see some improvement.


Finally, leaked benchmarks show performance projections putting Arrow Lake up 7% to 20%, with the median falling around 11% or so, relative to the i9-13900K.


In fact, if you look at the middle slide of that article, the single-threaded SPEC2017 tests show only a meager 3% to 8% improvement over the i9-13900K, within the same power limits. That suggests most of the improvement is in energy-efficiency: via some combination of improving throughput of the E-cores and improving efficiency of the P-cores.

What's worrying about those projections is that they supposedly used a PL2=253 W. So, maybe the actual multithreaded performance will be less, if the 177 W rumor is true.
 
Last edited:

bit_user

Titan
Ambassador
Since TomsHardware is supposed to be a tech site and not just a gamer site, I wish these reviews would be more detailed, like Techpowerup did in their review. They included machine learning, physics, chemistry and genome analysis, among other non-gaming programs, while TH relied mostly on just standard benchmarks like Cinebench.
Anandtech still runs SPEC2017, but they no longer report the aggregate scores. So, you're just stuck looking at the sub-scores. I do like that they included the two prior-gen Intel flagships, so you can see the progression.

That test suite includes a broad range of real applications, from rendering to compiling, database, scientific, video compression, etc. - 22 different apps, in all. They're subdivided into fp-heavy and int-heavy.

I think the annoying part for reviewers is that you have to compile the test suite, yourself. However, this enabled Anandtech to run it on a variety of platforms, back when they still did server & phone reviews.
 

bit_user

Titan
Ambassador
The original Alder Lake was impressive in its time but it seems the 13th and 14th versions were mostly clock speed increases
Raptor Lake increased L2 cache sizes, doubled the E-cores, used a refined process node, and increased PL2. Those changes really did add up - especially on multithreaded benchmarks. Even on single-threaded performance, it was enough for them to take back the crown from Zen 4.

I sort of like refreshes, because I consider Raptor Lake to be the CPU that Alder Lake wanted to be. However, this latest refresh simply doesn't bring enough to the table to be very interesting.

The 14xxx series is just refined 13xxx and Intel did the new name because that's what they do for OEMs all the time. Everyone knew this is all it was before the launch yet some people feel the need to sensationalize it.
Oh, I guarantee that not everyone knew it! At work, the guy sitting next to me had never even heard of P-cores and E-cores, when we got new PCs, earlier this year. I happen to know this same guy built a PC about 5 years ago, and I know some of his (teenage & older) kids are gamers.

I'm sure a lot of people watch videos like GN as their only/main source of tech news & reviews. They won't have been following the lead-up to this launch like we have.
 
Last edited:

bit_user

Titan
Ambassador
I upgraded from an 8th gen I7 (not a bad chip but a little long in the tooth), and my monthly power bill dropped $25. It's literally paying for itself. 😺
Unless you have some crazy-expensive power, that's on the order of like 100 kWh! That's an average difference of about 135 W, over every hour, the entire month! What on earth are you doing with your PC?
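Just to show the math (assuming a fairly typical rate of around $0.25/kWh, which is my guess; your rate may differ):

```python
# Rough check of the $25/month figure, assuming ~$0.25/kWh (my guess at a
# typical rate) and a PC drawing the extra power around the clock.
savings_per_month = 25.00   # dollars
rate = 0.25                 # dollars per kWh (assumed)
hours_per_month = 730       # average hours in a month

kwh_saved = savings_per_month / rate             # ~100 kWh
avg_watts = kwh_saved * 1000 / hours_per_month   # ~137 W average difference
print(f"{kwh_saved:.0f} kWh/month -> ~{avg_watts:.0f} W average")
```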
 
Oh, I guarantee that not everyone knew it! At work, the guy sitting next to me had never even heard of P-cores and E-cores, when we got new PCs, earlier this year. I happen to know this same guy built a PC about 5 years ago, and I know some of his (teenage & older) kids are gamers.

I'm sure a lot of people watch videos like GN as their only/main source of tech news & reviews. They won't have been following the lead-up to this launch like we have.
I'm referring to those who are spewing the sensationalist nonsense, because they know better. By making the ridiculous statements they're doing a disservice to their audience in the name of views. There are far too many people regurgitating talking points like this because they don't understand the how and why.
 

bit_user

Titan
Ambassador
no premium motherboards on the market run them at their stock PL2.
If Toms' review shows what the typical DIYer is going to experience, then I'd have to agree with reviewing them that way. And in that case, the power is indeed pretty horrendous:

[Tom's Hardware power-consumption charts]


TPU's review shows peak stock 14900K power consumption within 5W of the 13900K
The difference is actually 6 W, in Blender, which is 2.2% more power. At single-threaded MP3 Encoding, it used 6.1% more.

In their energy-efficiency tests, the i9-13900K does 7.8% better @ single-threaded. The rare win the Refresh got is that it beat the i9-13900K by 6.5% @ multi-threaded efficiency.


in the application tests they were 1W apart all while maintaining a slight performance bump.
In Toms' multithreaded benchmark suite, the Refresh only scored 1.2% better

[Tom's Hardware multithreaded benchmark chart]

 
Last edited:

bit_user

Titan
Ambassador
I looked at the "conclusion" page from the above link.

Notice the "Peak Power" chart near the bottom.

The 14600K "draws" 69 fewer watts than the 13600K.

169 versus 238. A reduction of 29%.

Why is that?
That's a fair question. Assuming the data is accurate, it's worthy of an explanation, though I haven't paid enough attention to that model to offer one. However, I'd caution people against reading too much into that chart, because it's really more like a chart of PL4 behavior. Anandtech should really provide average power figures to go along with the peak figures.

[AnandTech peak power chart]

It's weird that the article doesn't even seem to mention it.
 
Oh absolutely, but they need to help their OEM partners and the precedent is long set.
They are helping themselves first and foremost. How many sales do you think AMD is currently losing because people who want to buy AMD are holding off, waiting for Zen 5 to drop?
This makes sure that people who want to buy a new Intel CPU have something new to buy.
Is it that they don't care, or is it that they encourage this, because it gives them that minuscule benefit over running within the limits?

This way, they can claim that extra performance "out of the box" when they want to compare themselves to AMD.
Very, extremely doubtful; the bad press they are getting from the high power draw is much worse than the 1-2% more performance they are getting.
And in quite a few cases they are even losing performance because of this, because reviewers, either through stupidity or malice, use really bad mobo/cooler combinations that make the CPU use more power but also throttle and perform worse than it would at actual stock settings. Just look at Hardware Unboxed.
If Toms' review shows what the typical DIYer is going to experience, then I'd have to agree with reviewing them that way. And in that case, the power is indeed pretty horrendous:
But a review is supposed to show you the capabilities of the hardware and not the capabilities, or lack thereof, of users.
It's a hardware review not a user review.
If they want to show overclocking results as well then that is a welcome addition but it should be an addition and not the only result.

Also, what typical DIY'er will ever run y-cruncher, Blender, or Prime other than for a benchmark, or run H.265 encodes through the CPU when doing it through your GPU is many times faster?
 
  • Like
Reactions: thestryker

rambo919

Great
Sep 21, 2023
55
30
60
If Toms' review shows what the typical DIYer is going to experience, then I'd have to agree with reviewing them that way. And in that case, the power is indeed pretty horrendous:
[Tom's Hardware power-consumption charts]
You know, this right royally confuses me... AMD HW generally has higher TDPs than Intel HW, so you would expect it to draw more power... but at the same time, results like this exist that prove the exact opposite.

Though I suspect the general lack of non-K and non-X etc models does skew the results a bit.
 
If Toms' review shows what the typical DIYer is going to experience, then I'd have to agree with reviewing them that way. And in that case, the power is indeed pretty horrendous:
This isn't a review of the product, then; it's a review of the cooling system, power delivery, TIM application and binning of the CPU. By not limiting to the spec of the product being tested (remember, Intel didn't make the default BIOS settings circumvent their product specs), it adds variables to the end results that simply don't need to be there. By not putting in a baseline, it's impossible to tell what the comparative uplift of removing power limits is.
The difference is actually 6 W, in Blender, which is 2.2% more power. At single-threaded MP3 Encoding, it used 6.1% more.

In their energy-efficiency tests, the i9-13900K does 7.8% better @ single-threaded. The rare win the Refresh got is that it beat the i9-13900K by 6.5% @ multi-threaded efficiency.
6 W, 5 W, it doesn't matter; it's still margin of error when you're talking about such a small percentage.

As for the testing it's pretty interesting and speaks volumes about run to run variance:
  • The multi-threaded efficiency test has the 14900K pulling ~3.5 W less than the Blender MT test, and the 13900K pulling ~9.5 W more.
  • In the single-threaded efficiency test, the 14900K uses the same amount of power in Cinebench 1T as in MP3 encoding, but the 13900K uses ~1.5 W less in the same comparison.
In Toms' multithreaded benchmark suite, the Refresh only scored 1.2% better
[Tom's Hardware multithreaded benchmark chart]
As explained above these tests are borderline worthless without a baseline due to the additional variables.
 

bit_user

Titan
Ambassador
You know, this right royally confuses me... AMD HW generally has higher TDPs than Intel HW, so you would expect it to draw more power... but at the same time, results like this exist that prove the exact opposite.
IIRC, there were a few cases where the Ryzen 7950X did use more power. I was just highlighting the cases where the i9-14900K really went overboard with its power consumption (i.e. as tested).

Though I suspect the general lack of non-K and non-X etc models does skew the results a bit.
That's a whole different ball game.

I would love for Toms to do a shootout between the non-K Intels and non-X Ryzens. Tagging @PaulAlcorn on that idea.
 

sitehostplus

Honorable
Jan 6, 2018
400
161
10,870
Unless you have some crazy-expensive power, that's on the order of like 100 kWh! That's an average difference of about 135 W, over every hour, the entire month! What on earth are you doing with your PC?
Well, it could be the overall cost of electricity declining, but what are the odds of that happening? 😺
 

sitehostplus

Honorable
Jan 6, 2018
400
161
10,870
Lol, so you spend like $1000 on a new system to save $25 per month, which means you break even in 3.3 years... you know you could have just gone into the BIOS and saved $25 a month without spending a dime, right?!

Also, the 9th gen i9 drew like 95 W at the max, as shown by Gamers Nexus; the 8th gen i7 would have drawn even less, and current Zens draw more than that. Tom's shows the 7950X3D drawing 121 W in Blender, so God only knows how you are saving any money on your power bill at all...
View: https://youtu.be/2MvvCr-thM8?t=354
Hey, it's more fun doing it this way. 😺
 

bit_user

Titan
Ambassador
This isn't a review of the product, then; it's a review of the cooling system, power delivery, TIM application and binning of the CPU. By not limiting to the spec of the product being tested (remember, Intel didn't make the default BIOS settings circumvent their product specs), it adds variables to the end results that simply don't need to be there.
I do understand that! However, I also understand that Paul just installed the new CPUs the same way your typical DIY'er would. That makes the review valid, IMO.

By not putting in a baseline it's impossible to tell what the comparative uplift of removing power limits is.
It's important to test these products as they function out-of-the-box. As a supplemental test, it would be highly informative to test them as per the official Intel spec.

I agree with @King_V that Intel is basically trying to have it both ways. By letting their board partners use non-standard specs, reviewers get the best performance benchmarks. However, because it's non-standard, Intel can claim the power figures are invalid (without mentioning the performance is also invalidated). It's a clever tactic, but I don't think it works. All people see are the countless reviews showing the out-of-the-box behavior - both the performance and the heat & power.

As for the testing it's pretty interesting and speaks volumes about run to run variance:
It's likely due to variance, but we can't say for sure. You raise a good point: they probably didn't run these tests more than once per CPU, which is what you'd need to do in order to actually measure the variance.
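Even something as simple as this would be a start - a handful of repeat runs per CPU, reported as a mean with a margin of error instead of a single number (the scores below are made up, purely to illustrate):

```python
# Toy illustration of reporting run-to-run variance: repeat a benchmark a
# few times and quote mean +/- a rough 95% interval, rather than one number.
# These scores are invented for the example.
from statistics import mean, stdev

runs = [38512, 38390, 38655, 38470, 38601]  # hypothetical benchmark scores

m = mean(runs)
s = stdev(runs)                      # sample standard deviation
ci95 = 1.96 * s / len(runs) ** 0.5   # normal-approximation 95% interval

print(f"score = {m:.0f} +/- {ci95:.0f} (n={len(runs)})")
```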

Heh, I keep thinking of what I'd do to test such products... establishing the variability of a test + platform combination is one of those essential things. We ought to be told the significance of the measurements! Another thing is to buy your own review samples, which Toms had a policy of doing... once upon a time. I'm virtually certain that's no longer done.
: (

Ideally, they would not only buy their own review samples, but then trade those samples with other like-minded reviewers, in order to get data on more than n=1. I fully expect many manufacturers employ some degree of "cherry-picking", when feeding samples to reviewers.
 
  • Like
Reactions: helper800

bit_user

Titan
Ambassador
Well, it could be the overall cost of electricity declining, but what are the odds of that happening? 😺
Seriously, since you're the bill payer, I'd encourage you to look at your actual month-to-month difference in kWh and then divide that by the hours of PC usage you do. There could be other factors at play. I think you'll probably find it can't be entirely due to that PC upgrade. Perhaps not even the majority of it.

My electricity bill gives me a 13-month history of my power usage, so I can compare against the same month of the previous year. That's a better comparison than month-to-month, unless you happen to live somewhere with extremely little temperature variation.

BTW, I checked and I'm paying $0.288 per kWh. The average month has 730.5 hours. So, a device which I leave on 24/7 should use $0.21 per Watt. For me to see a $25 monthly change, that works out to 119 W. For a machine that's on only 8 hours/day (7 days/week), the difference would be 356.5 W.

Also, if we consider there are 8765.8 hours/year, then I'm paying $2.52/year for every 1 W a device uses that's plugged in 24/7. I consider that when buying things like cable modems and wifi routers. Since my cable modem uses about 6 W, that's about $15.15/year. If I keep it for 5 years, that's a cumulative cost of $75.74, which IIRC was more than half of the up-front purchase price. Also, that's without taking into account the impact on heating/cooling costs.
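In case anyone wants to plug in their own rate, the arithmetic above boils down to this:

```python
# The arithmetic from the post above - plug in your own electricity rate.
rate = 0.288              # dollars per kWh (what I'm paying)
hours_per_month = 730.5   # average hours in a month
hours_per_year = 8765.8   # average hours in a year

cost_per_watt_month = rate * hours_per_month / 1000  # ~$0.21 per W per month, 24/7
cost_per_watt_year = rate * hours_per_year / 1000    # ~$2.52 per W per year, 24/7

print(f"$25/month, 24/7:     {25 / cost_per_watt_month:.0f} W")        # ~119 W
print(f"$25/month, 8 h/day:  {25 / (cost_per_watt_month / 3):.1f} W")  # ~356.5 W
print(f"6 W modem, per year: ${6 * cost_per_watt_year:.2f}")           # ~$15.15
```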
 
Last edited:
Ideally, they would not only buy their own review samples, but then trade those samples with other like-minded reviewers, in order to get data on more than n=1. I fully expect many manufacturers employ some degree of "cherry-picking", when feeding samples to reviewers.
Like, perhaps giving them a super 14600K that uses so much less power that you don't know what's going on with the test. Is it test variance? Is it a golden chip? Is it really an intended reduction of power consumption on the chip? Who knows?
 
  • Like
Reactions: bit_user

King_V

Illustrious
Ambassador
Very, extremely doubtful; the bad press they are getting from the high power draw is much worse than the 1-2% more performance they are getting.
And in quite a few cases they are even losing performance because of this, because reviewers, either through stupidity or malice, use really bad mobo/cooler combinations that make the CPU use more power but also throttle and perform worse than it would at actual stock settings. Just look at Hardware Unboxed.
Intel is very specifically NOT doing anything to stop the MB makers from doing this. Ergo, Intel does not want them to stop.

I don't think your statement here does anything to invalidate my assumption, which I admit is an assumption. But logically, what other reason could there be?

This is not malice on the part of MB makers or reviewers. This is in Intel's lap. WHY do they allow it? Because they get something from it.
 