Intel Core i9-14900KS Review: The Last Core i9 Hits Record 6.2 GHz at Stock Settings

And in only about 12 months after that they will release yet another something... you have to pull the trigger at some point.
Also, especially for gamers, it doesn't even matter; if you buy a CPU with decent performance today, it will last you for many years.
Exactly. An i7 14somethingwhatever, or an AMD Ryzen 7 Xsomethingelse. Anything more is just wasting money better spent on more GPU, or extra storage.
 
If you match wattages, Intel wins in most segments, and not by a small margin: R5 7600X vs. i5 13600, 7700X vs. 13700, etc. I mean, Intel's i5 lineup even beats AMD's Ryzen 7.

The only segment where AMD has a small (literally single-digit) win is the high end; the 7950X is indeed 5-10% more efficient than the 14900K. It's really not that big, but people get confused because reviewers are removing power limits, and then, yeah, obviously the CPU that draws more power will be a lot less efficient.

TPU has done some testing with power limits: a 14900K @ 200 watts is in fact more efficient than a stock 7950X. At 125 W it's more efficient than the 7950X3D as well.

So in MT and ST, Intel is more efficient in general. It's only in gaming that AMD has the advantage with the 3D CPUs, and even that's not as big as it looks. People don't play games with a 4090 at 1080p or 720p, and for sure they shouldn't buy a 14900K to play games, especially out of the box. The i9 is terrible for gaming out of the box, since the power draw is absurd. I've managed to drop power draw from 200 watts down to 95 W with zero performance loss.

That's in TLOU, the game with by far the highest power draw of any I've tested; it never goes above 95 watts. Yes, I dropped clocks and turned off HT, but even with those settings it's faster in MT than a 7950X3D, for example.
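
To put the iso-wattage argument in concrete terms, efficiency here is just score divided by watts. A minimal sketch in Python, with placeholder scores and power figures that are purely illustrative (substitute measured numbers from whichever review you trust):

# Points-per-watt comparison at different power limits.
# All scores/powers below are illustrative placeholders, NOT measurements.
def efficiency(score, watts):
    return score / watts  # points per watt; higher is better

configs = {
    "14900K stock (~253 W)":  (40000, 253),
    "14900K at 125 W":        (32000, 125),
    "7950X stock (~230 W)":   (38000, 230),
    "7950X3D stock (~140 W)": (34000, 140),
}
for name, (score, watts) in configs.items():
    print(f"{name:24s} {efficiency(score, watts):6.1f} pts/W")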

Steve from GN came to very different conclusions from their testing.

https://m.youtube.com/watch?v=9WRF2bDl-u8
 
Something I would like to see is a review with power limits removed on BOTH chips, AMD and Intel.
With power limits removed and PBO set to manual or board limits, my 3600 @ 1.29 V pulls 80 or so watts at 4.4 GHz all-core. I reduced the voltage to achieve this.
My 5600X with limits removed will do a 4.5-4.65 GHz all-core boost depending on the benchmark, at 85-89 watts mostly, with peaks to 93 watts.
Multithreaded performance is similar to a 5800X, or between a 12500 and a 12600; single-threaded is in 5800X/11700 territory.
With 6-core chips set free, they will fly.
I'm sure my chips are average. The 5600X is unstable on 2 cores above the default 4.65 GHz on my Asus board.
 
These comparisons seem incorrect in terms of pricing, just throwing that out there. The 7600X is $215 to the 13600K's $297; the 7700X is $287 vs. the 13700K at $369.

Obviously there are other costs incurred, and I'm not arguing what's better or worse here, just that the distinction should be made.
Sure, I'm not in disagreement; AMD dropped the prices, but it's obvious that the R5 7600X is a competitor to the i5 13600K. That's the whole point of AMD copying Intel's naming scheme. If AMD drops the 7950X's price to $300 tomorrow, that doesn't mean "AMD has more efficient CPUs" just because the 7950X is more efficient than the 13600K that also costs $300.

Still, even adjusted for current prices, the 7700X stands no chance in efficiency against the 13600K at iso wattage, and neither does the X3D against the 13700K / 14700K.
 
Steve from GN came to very different conclusions from their testing.

https://m.youtube.com/watch?v=9WRF2bDl-u8
No, he didn't. He only tested a single MT benchmark at iso wattage, Blender (which, btw, works absurdly well on AMD), and even then, with that heavily biased workload, the 14700K was both faster and more efficient than the 7800X3D. Heck, it topped the entire chart in efficiency, losing only to the $4,999 Threadripper, lol. If people see this video and still don't realize how stupidly more efficient Intel is in MT workloads, I don't know what else can convince them.

If he had actually tested Corona, Cinebench, Handbrake, etc., the gap would be absurdly huge. I mean, going by CB R23 at 90 watts, the 14700K should achieve a score of 24-25K. The X3D can barely crack 18K.
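
Taking those claimed figures at face value (neither score is verified here; both are the poster's numbers), the points-per-watt gap works out like this:

# Points per watt from the claimed CB R23 scores at a 90 W limit.
watts = 90
score_14700k = 24500   # claimed 14700K score (midpoint of "24-25K")
score_7800x3d = 18000  # claimed 7800X3D score ("barely cracks 18K")
print(score_14700k / watts)   # ~272 pts/W
print(score_7800x3d / watts)  # 200 pts/W
print(f"gap: {score_14700k / score_7800x3d - 1:.0%}")  # ~36%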
 
Sure, I'm not in disagreement; AMD dropped the prices, but it's obvious that the R5 7600X is a competitor to the i5 13600K. That's the whole point of AMD copying Intel's naming scheme. If AMD drops the 7950X's price to $300 tomorrow, that doesn't mean "AMD has more efficient CPUs" just because the 7950X is more efficient than the 13600K that also costs $300.
Again, I wasn't arguing which is better, just that going by cost the competition is different. AMD reduced prices because they understood that the 7700X didn't compete with the 13700K and the 7600X didn't compete with the 13600K. They may have initially expected they would, but I'd say it's clear they no longer think those processors go head-to-head, which I think is the more valid position to take (and in fairness, they released a month before Intel). What a chip "should" compete with and what it "does" compete with should always be dictated by cost, not naming convention.
 
Sounds like you did about everything you could to make this sound appealing to enthusiasts, when it really is just the fourth release of the same part: the i9-13900K. Literally exactly the same silicon, except for a 400 MHz higher boost clock and a 200 MHz higher base clock. But hey, let's compare using ridiculous-speed RAM against AMD processors that don't have OC RAM, so we can make it look like it trades blows. Why not add the Intel Xeon Platinum 8558U for comparison? I'm sure it blows AMD's 7800X3D out of the water at multithreading.

As noted in the article, using an EXPO RAM kit for the 7000X3D chips gives you less than a 1% performance improvement -- if anything, that is great, as you can use dirt cheap RAM and get great performance. There is no need to test them with faster RAM.

Yes, faster RAM is an option that improves things for the KS, and that was tested separately from the stock settings and explicitly called out as an even more expensive solution.
 
At no point in this review do you say what the power profile is (the implication is that limits are removed), and it's pretty well known that unlimited power causes voltage spikes. Assuming you're running unlimited, you're doing a disservice to readers by leaving it at that, especially since it's been identified as causing stability issues with certain games. Sure, it's on Intel to actually enforce a rational baseline, but even in the review guide they cited the performance and extreme power profiles, which nobody seems to be adhering to or even adding as a secondary testing point.
 
As noted in the article, using an EXPO RAM kit for the 7000X3D chips gives you less than a 1% performance improvement -- if anything, that is great, as you can use dirt cheap RAM and get great performance. There is no need to test them with faster RAM.
Funny how this contradicts your review of the 8600G, where everything related to pricing was a con.

By the way, where's the score for this one?
 
Intel just flat out sux now.
Also, it seems like the x86 architecture is getting close to its performance limit.
Besides the hundreds of watts it consumes to run this fast, the heat output is getting out of control.
We aren't at the limits of x86 yet.

It will continue to improve from node shrinks (Arrow Lake will use TSMC N3 instead of the Intel 7 node used for Raptor Lake, a big improvement). Intel hasn't started using L4 Adamantine cache yet, or any 2.5D/3D architecture on desktop. They have plans to simplify the x86 ISA with X86S and bring back AVX-512 support with AVX10. They are getting rid of hyperthreading and replacing it with "rentable units".

On the AMD side, they have yet to bring "C" cores to desktop even though those could clearly boost efficiency and performance-per-area. Zen 6 will probably break with the design used from Zen 2 to Zen 5 and use far superior packaging and interconnects.

Some of these changes could have been made earlier, but what we have works fine. Zen 2/3/4 desktop chips use a cheap and effective design that allows AMD to use the same chiplets in Epyc server chips. The ultimate desktop chips of the 2030s/2040s will probably be some monstrous 3D mega APUs that can be made of chiplets but perform like monolithic designs.

Out of the box power consumption of a 14900KS can be comically bad, but the same chips can also act as a 14900T with most of the performance and far lower power consumption. That's probably not what the buyers want though, and there won't be many of them since it's a limited edition chip.
 
I like my i5 just fine. Intel can wave whatever flag they want. This KS thing is exactly as remarkable as an air scoop on an '87 Sierra: a thing that exists, that a few people know about and can have a meaningful, in-depth argument about, and that has no bearing on anything practical.

As far as I'm concerned, the i9 K or even the i7 are the flagship CPUs. They're the ones people actually buy, and the ones with performance worth paying for. The i5 is worth the money too, but I would not call it a flagship.

Monitor resolution, monitor response, game bugs, Windows bugs, maybe drivers, internet connection, the GPU itself, a comfortable chair: those are some things I can think of that make a bigger difference when it comes to gameplay, and that are worth spending money on (except the bugs/drivers, of course).

I'm not even sure why Intel bothered with this. It's not turning heads for the right reasons, and it's not like Intel has market share to claw back. Plus, the LGA 1700 chips are done and done. What exactly does this add?
An exclamation point at the end of the story, one that no one asked for or needed.
 
If you match wattages, Intel wins in most segments, and not by a small margin: R5 7600X vs. i5 13600, 7700X vs. 13700, etc. I mean, Intel's i5 lineup even beats AMD's Ryzen 7.

The only segment where AMD has a small (literally single-digit) win is the high end; the 7950X is indeed 5-10% more efficient than the 14900K. It's really not that big, but people get confused because reviewers are removing power limits, and then, yeah, obviously the CPU that draws more power will be a lot less efficient.

TPU has done some testing with power limits: a 14900K @ 200 watts is in fact more efficient than a stock 7950X. At 125 W it's more efficient than the 7950X3D as well.

So in MT and ST, Intel is more efficient in general. It's only in gaming that AMD has the advantage with the 3D CPUs, and even that's not as big as it looks. People don't play games with a 4090 at 1080p or 720p, and for sure they shouldn't buy a 14900K to play games, especially out of the box. The i9 is terrible for gaming out of the box, since the power draw is absurd. I've managed to drop power draw from 200 watts down to 95 W with zero performance loss.

That's in TLOU, the game with by far the highest power draw of any I've tested; it never goes above 95 watts. Yes, I dropped clocks and turned off HT, but even with those settings it's faster in MT than a 7950X3D, for example.

I don't think you understand comparing apples to apples.

You just went into a long dissertation about how, if you undervolt your i9, turn off hyperthreading, disable your E-cores, and who knows what else, you can clamp the power draw back down to 95 W, which then makes it more efficient frame-to-frame than an R9 7950X3D that is stock out of the box.

And none of the crazy shit you just said in your oranges-to-apples comparison is borne out by any testing done by any major reputable testing group. We just gotta "believe you bro" on your power draw numbers, which I somehow doubt are legit, considering what reviewers have to do to get accurate power numbers.

Listen, your post sounds identical to the AMD fanboys back in the Piledriver days trying to convince everyone their FX-8320 was as good a purchase as an i5-3570K: that the power numbers weren't legit, or that they needed those 8 cores for streaming or something. It's cool, I get it. I owned an FX-8320, and I got huge overclocks at low power out of it (golden chip), but I was never under the impression my chip was better than an i5-3570K, nor did I try to convince others of it. Nor did I make a "believe me bro" post like this one...

Rein it in a bit. There are reasons to go Intel over AMD today. You don't have to do weird apples-to-oranges testing in one game to prove it.
 
They can't. Going from 400 watts down to 200 watts reduces performance by like 5%. Just because it's pushed to go as fast as possible doesn't mean that AMD can do the same at a fraction of the power. It can't.
Scaling of performance vs. power depends on a lot of factors. So, it probably shouldn't come as a surprise that Raptor Lake and Zen 4 scale somewhat differently.

[Chart: multithreaded performance vs. power limit for Raptor Lake and Zen 4 CPUs]


The 7950X in CB R24 already needs 65% more power than the 7950X3D for 5% more performance. The 14900KS is 15% faster than the 7950X. Doing some simple arithmetic, for the 7950X to hit the same CB R24 score as the 14900KS it would need, I don't know, gazillions of power.
A big problem with that approach is that you can't just do simple extrapolations, because those curves aren't straight lines. Furthermore, not all performance problems are easily solvable by throwing more power at them.
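
To make the extrapolation problem concrete: near the top of the curve, multithreaded performance tends to grow roughly logarithmically with package power, so a straight line drawn through two high-power points overstates what extra watts buy. A toy model with invented coefficients, not fitted to any real CPU:

import math

# Toy saturating model: perf(P) = a * ln(P) + b. Coefficients are made up.
a, b = 600, -1000

def perf(watts):
    return a * math.log(watts) + b

p150, p220 = perf(150), perf(220)
slope = (p220 - p150) / (220 - 150)      # local slope between two points
linear_400 = p220 + slope * (400 - 220)  # straight-line extrapolation
print(f"linear guess at 400 W: {linear_400:.0f}")  # ~2827
print(f"model value at 400 W:  {perf(400):.0f}")   # ~2595, about 9% lower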
 
If you match wattages, Intel wins in most segments, and not by a small margin: R5 7600X vs. i5 13600, 7700X vs. 13700, etc. I mean, Intel's i5 lineup even beats AMD's Ryzen 7.
Wattages and product segments are two different things. AMD sells an R9 7900 that runs at 65 W (88 W PPT). You can clearly see from my plot above that its performance comes very close to the i9-13900K when both are restricted to those power levels!

TPU has done some testing with power limits: a 14900K @ 200 watts is in fact more efficient than a stock 7950X. At 125 W it's more efficient than the 7950X3D as well.
It depends a lot on the workload. TechPowerUp's reviews typically feature one single-threaded and one multi-threaded benchmark, for measuring efficiency. That gives a very narrow and potentially misleading impression. If we could really draw such sweeping conclusions from one or two benchmarks, then hardware reviewers' jobs sure would be a lot easier!

So in MT and ST, Intel is more efficient in general.
So, here's the article where I got the data for the above plot:


I'd encourage you to click around in their charts and review the data they provide for the different CPUs and workloads. My plot only covers their multithreaded aggregate score.
 
If you really want to go Intel, I'm not sure why you wouldn't just sit tight at the moment. Arrow Lake is coming down the pipe in 6 to 9 months. Might as well see what that has to offer before pulling the trigger.
I think closer to 6 months, at this point.

And in only about 12 months after that they will release yet another something... you have to pull the trigger at some point.
Also, especially for gamers, it doesn't even matter; if you buy a CPU with decent performance today, it will last you for many years.
That would be at least 18 months, which is 3x as long.

It's generally hard to argue with the advice to just wait until you need something newer/faster, then just buy the best thing in your budget at the time. However, you can typically do a little better by waiting for a new product launch, if one is just around the corner.

Furthermore, the difference between Raptor Lake and Arrow Lake is supposedly like two major nodes' worth of process improvement, plus some actual microarchitecture changes. There's every reason to believe Arrow Lake will deliver a bigger performance improvement over Gen 14 than Gen 14 did over Gen 13. I think James' advice is pretty sound.
 
On the AMD side, they have yet to bring "C" cores to desktop even though those could clearly boost efficiency and performance-per-area.
Oh, but they did! The Ryzen 8500G and 8300G are their hybrid laptop SoCs in an AM5 socket.

Be warned that the early reviews of these suffered from a weird thermal throttling bug.

Be sure to look for reviews with this fix applied.
 
A big problem with that approach is that you can't just do simple extrapolations, because those curves aren't straight lines. Furthermore, not all performance problems are easily solvable by throwing more power at them.
Correct, but making it a straight line gives an advantage to AMD. You can clearly see from your plot that the 7950X has basically flatlined already at 220 watts. It's pretty clear that in order to match the 14900K/KS it would draw a LOT of power. As much as the KS? I have no clue, but it's not outlandish.
 
It will continue to improve from node shrinks (Arrow Lake will use TSMC N3 instead of the Intel 7 node used for Raptor Lake, a big improvement).
No they won't, for the simple fact that TSMC wouldn't be able, or willing, to supply them with enough chips.
It depends a lot on the workload. TechPowerUp's reviews typically feature one single-threaded and one multi-threaded benchmark, for measuring efficiency. That gives a very narrow and potentially misleading impression. If we could really draw such sweeping conclusions from one or two benchmarks, then hardware reviewers' jobs sure would be a lot easier!
They also typically have a chart with a 47-app average (47 apps, not one or two; far more than anybody else), and in the review he's talking about, it shows an average power draw of 170 W for a stock 14900K and 128 W for a stock 7950X.
With the 14900K limited to 200 W, it draws 141 W compared to the 128 W of the 7950X and comes out slightly ahead in performance, 97.6% vs. 95.1%.
AMD is still more efficient, but the margin is so small that it's laughable.
You have to overjuice the Intel CPU to hell to make it look bad.
[Chart: TechPowerUp 47-app averages, relative performance and average power draw]
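
For what it's worth, the margin in that chart reduces to one division, using the relative-performance and average-power figures quoted above:

# Relative efficiency from the quoted TPU averages (perf % per watt).
eff_7950x = 95.1 / 128        # stock 7950X
eff_14900k_200w = 97.6 / 141  # 14900K with a 200 W limit
print(f"7950X advantage: {eff_7950x / eff_14900k_200w - 1:.1%}")  # ~7.3%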

Furthermore, the difference between Raptor Lake and Arrow Lake is supposedly like two major nodes' worth of process improvement, plus some actual microarchitecture changes. There's every reason to believe Arrow Lake will deliver a bigger performance improvement over Gen 14 than Gen 14 did over Gen 13. I think James' advice is pretty sound.
That is going to make the newer CPUs faster, maybe, but it's not going to make older CPUs any slower, at least not for many years.
Also, delays are always a possibility; you don't know if Intel will even release them at all, let alone this year.
Sure, if somebody already has a good system, then there is no reason to buy a PC right now, and they can wait for whatever might be coming.
 
At no point in this review do you say what the power profile is (the implication is that limits are removed), and it's pretty well known that unlimited power causes voltage spikes. Assuming you're running unlimited, you're doing a disservice to readers by leaving it at that, especially since it's been identified as causing stability issues with certain games. Sure, it's on Intel to actually enforce a rational baseline, but even in the review guide they cited the performance and extreme power profiles, which nobody seems to be adhering to or even adding as a secondary testing point.
That is incorrect. The power settings used are listed in the article in the first testing section (games benchmarks).
 
Funny how this contradicts your review of the 8600G, where everything related to pricing was a con.

By the way, where's the score for this one?
We did this article as a single-pager due to time constraints (not enough up-front content to really support a first page, etc.), and our single-page format doesn't support a score/pro-con verdict box. I would give it a three. The article also clearly states that there is no value proposition and that it is a niche product.

The expensive memory used for our XMP testing is noted as being expensive and is definitely not mentioned as a "pro" for the chip. Folks spending $700 on the chip and all of the other money for supporting components are probably going to buy expensive RAM anyway.

Regardless, the target markets for the 8000G couldn't possibly be more different — these are complete opposite ends of the gaming PC spectrum. With the 8600G, we're talking about the lowest-end budget gaming systems that don't even use a GPU — there is complete and total price sensitivity in that market that simply cannot tolerate expensive RAM.
 
Most people would throw more juice at their processor for performance if it could handle it. Overclockers are notorious for employing extremes for moderate gains. Is there really anything to complain about here? It's just a niche product that Intel "could" make happen. We are already being warned that future processors will require more electricity and better cooling solutions, with what appears to be little regard for energy savings. Unfortunately, we aren't engineering architectures to run faster with less. I think the AI bubble will certainly prevent this in the short term.
 