Intel Core i9-13900KS Review: The World's First 6 GHz 320W CPU

Less 'overclocking' and more 'underclocking' articles, please.

That would be helpful for the ever-growing segment that does NOT need the testosterone rush of having the 'fastest' ... and wants more info on logical underclocking to... well, do things like get the most out of a CPU without the burden of water cooling, its maintenance, and the chance of screwing up their expensive systems.

These CPUs and new systems would be flying off the shelves much faster than they are if people didn't have to take such measures for all the heat they generate. It's practically gone from being able to 'fry an egg' on a CPU to 'roasting a pig'. 🙁
 
Seems like the article got a new comment thread, somehow. The original thread was:

I'm guessing it's because it had previously been classified as a News article and is now tagged as a Review.
 
Thanks for the thorough review, @PaulAlcorn!

Some of the benchmarks are so oddly lopsided in Intel's favor that I think it'd be interesting to run them in a VM and trap the CPUID instruction. Then, have it mis-report the CPU as a GenuineIntel of some Skylake-X vintage (since Skylake-X also had AVX-512) and see if you get better performance than the default behavior.

For the benchmarks that favor AMD, you could try disabling AVX-512, to see if that's why.

Whatever the reason, it would be really interesting to know why some benchmarks so heavily favor one CPU family or another. I'd bet AMD and Intel are both doing this sort of competitive analysis, in their respective labs.
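For anyone curious, the value that would be spoofed here is the 12-byte vendor string returned by CPUID leaf 0, which is what vendor-aware code paths typically check. A minimal C sketch of reading it, assuming GCC or Clang's <cpuid.h> on x86:

#include <stdio.h>
#include <string.h>
#include <cpuid.h>   /* GCC/Clang x86 intrinsics */

int main(void) {
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};

    /* CPUID leaf 0: EBX, EDX, ECX spell out the 12-byte vendor string */
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 1;
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);

    printf("vendor: %s\n", vendor);  /* "GenuineIntel" or "AuthenticAMD" */
    return 0;
}

A hypervisor that traps CPUID can return whatever it likes here, which is what makes the experiment possible.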
 
  • Like
Reactions: Avro Arrow
Thanks for the thorough review, @PaulAlcorn!

Some of the benchmarks are so oddly lopsided in Intel's favor that I think it'd be interesting to run them in a VM and trap the CPUID instruction. Then, have it mis-report the CPU as a GenuineIntel of some Skylake-X vintage (since Skylake-X also had AVX-512) and see if you get better performance than the default behavior.

For the benchmarks that favor AMD, you could try disabling AVX-512, to see if that's why.

Whatever the reason, it would be really interesting to know why some benchmarks so heavily favor one CPU family or another. I'd bet AMD and Intel are both doing this sort of competitive analysis, in their respective labs.
Well, we've seen the video card manufacturers code drivers to give inflated benchmark results in the past. Is it so outlandish to think Intel or AMD might make alterations in their microcode or architecture in favor of high benchmark scores vs being overall faster?
 
  • Like
Reactions: Avro Arrow
Is it so outlandish to think Intel or AMD might make alterations in their microcode or architecture in favor of high benchmark scores vs being overall faster?
Optimizing the microcode for specific benchmarks is risky, because you don't know that it won't blow up in your face with some other workload that becomes popular in the next year.

That said, I was wondering whether AMD tuned its branch predictor on things like 7-zip's decompression algorithm, or if it just happens to work especially well on it.

To be clear, what I'm most concerned about is that some software is rigged to work well on Intel CPUs (or AMD, though less likely). Intel has done this before, in some of their first-party libraries (Math Kernel Library, IIRC). And yes, we've seen games use libraries that effectively do the same thing for GPUs (who can forget when Nvidia had a big lead in tessellation performance?).
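To illustrate the pattern (a hypothetical C sketch, not actual MKL code): the complaint is about dispatchers that gate the fast path on the vendor string instead of on the feature flags the CPU actually reports.

#include <stdio.h>
#include <string.h>

static void kernel_fast(void)    { puts("hand-tuned SIMD path"); }
static void kernel_generic(void) { puts("plain baseline path"); }

typedef void (*kernel_fn)(void);

/* Hypothetical vendor-gated dispatcher: a non-Intel CPU with the very
 * same ISA extensions still gets routed to the slow path. */
static kernel_fn select_kernel(const char *vendor, int has_avx2) {
    if (has_avx2 && strcmp(vendor, "GenuineIntel") == 0)
        return kernel_fast;
    return kernel_generic;   /* "AuthenticAMD" lands here regardless */
}

int main(void) {
    select_kernel("GenuineIntel", 1)();  /* fast path */
    select_kernel("AuthenticAMD", 1)();  /* slow path, same capabilities */
    return 0;
}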
 
  • Like
Reactions: Avro Arrow
Intel: "We need a faster chip"
eng 1: what if we make it hotter & uncontrollably force power into it?
eng 2: what if we try something else that doesn't involve guzzling power as the answer?

intel: [tosses eng 2 out window] eng 1, you're a genius!
 
  • Like
Reactions: Avro Arrow
Intel: "We need a faster chip"
eng 1: what if we make it hotter & uncontrollably force power into it?
eng 2: what if we try something else that doesn't involve guzzling power as the answer?

intel: [tosses eng 2 out window] eng 1, you're a genius!
Part of the problem might be Intel's manufacturing node. That could limit the solution space for delivering competitive performance, especially when it also needs to be profitable. Recall that Intel 7 doesn't use EUV, while TSMC has been using EUV since N7+.
 
  • Like
Reactions: Avro Arrow
Less 'overclocking' and more 'underclocking' articles, please.

That would be helpful for the ever-growing segment that does NOT need the testosterone rush of having the 'fastest' ... and wants more info on logical underclocking to... well, do things like get the most out of a CPU without the burden of water cooling, its maintenance, and the chance of screwing up their expensive systems.

These CPUs and new systems would be flying off the shelves much faster than they are if people didn't have to take such measures for all the heat they generate. It's practically gone from being able to 'fry an egg' on a CPU to 'roasting a pig'. 🙁
Intel has at least once in the past disabled the ability to undervolt. Look up the "Plundervolt" vulnerability. Basically, around the 7th- and 8th-gen CPUs, it was discovered that, under very specific conditions most users would never encounter, undervolting enabled an exploit. The solution: push a Windows update preventing the CPU from being set below stock voltage. I have a Kaby Lake in a laptop that was undervolted a good 0.2 V, which knocked a good 10°C off temps. One day it started running hotter, and surprise! I can still overvolt it just fine, though; I guess that's what matters for laptops. Essentially, as useful as undervolting can be, Intel doesn't see it as worthwhile compared to "security."
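For reference, the knob that got locked out is the voltage-offset mailbox at MSR 0x150. Intel never officially documented it, but the community mapped it out (it's what ThrottleStop and intel-undervolt use). A Linux C sketch, assuming root, the msr kernel module, and firmware that still honors the write; be warned that too large an offset will crash the machine:

/* build: cc -O2 undervolt.c -o undervolt -lm */
#include <fcntl.h>
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int plane = 0;             /* 0 = core; other planes: iGPU, cache, ... */
    double offset_mv = -50.0;  /* negative = undervolt; start small */

    /* Community-documented encoding: offset in 1/1.024 mV units, placed
     * in bits 31:21, OR'd into the MSR 0x150 "write voltage" command. */
    int32_t units = (int32_t)lround(offset_mv * 1.024);
    uint64_t enc  = (uint32_t)units << 21;
    uint64_t msg  = 0x8000001100000000ULL | ((uint64_t)plane << 40) | enc;

    int fd = open("/dev/cpu/0/msr", O_WRONLY);  /* needs root + msr module */
    if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }
    if (pwrite(fd, &msg, sizeof msg, 0x150) != sizeof msg) {
        perror("write MSR 0x150");  /* ignored or blocked on patched setups */
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}

On machines that took the mitigation, a write like this simply has no effect below stock voltage, which matches what you saw.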
 
  • Like
Reactions: Avro Arrow
Part of the problem might be Intel's manufacturing node. That could limit the solution space for delivering competitive performance, especially when it also needs to be profitable. Recall that Intel 7 doesn't use EUV, while TSMC has been using EUV since N7+.
Being able to withstand higher extremes is a sign of better manufacturing, not worse.
Intel CPUs can take a huge amount of wattage and Vcore without blowing up; those are signs of quality.
TSMC getting better is how AMD was able to double the wattage this generation.
You don't have to push it just because it is pushable.
https://www.youtube.com/watch?v=jv3uZ5Vlnng
 
Being able to withstand higher extremes is a sign of better manufacturing, not worse.
LOL, you're missing the point. The problem is that they aren't competitive unless you're cranking a ton of watts through them.
[attached chart: performance at different power limits]
See, if Intel stopped at 125 W, then they would've been beaten, even if AMD stopped at only 105 W. That's why they went all the way to 253 W.

That's the dirty secret. Raptor Lake is noncompetitive, unless you juice it hard.
 
That's the dirty secret. Raptor Lake is noncompetitive, unless you juice it hard.
You would need a lot more data points between 125 W and 253 W to prove that point.
You'd also need a lot more, and more varied, apps.
Yes, if rendering is your main job, Ryzen is top dog, and nobody cares.

Intel at 35 W beats Ryzen even at 230 W...
No, wait!
Zen is noncompetitive, no matter how hard you juice it.
It's true, but it's just as useless a statement as yours.
[attached chart: performance at different power limits]
 
Intel has at least once in the past disabled the ability to undervolt. Look up the "Plundervolt" vulnerability. Basically, around the 7th- and 8th-gen CPUs, it was discovered that, under very specific conditions most users would never encounter, undervolting enabled an exploit. The solution: push a Windows update preventing the CPU from being set below stock voltage. I have a Kaby Lake in a laptop that was undervolted a good 0.2 V, which knocked a good 10°C off temps. One day it started running hotter, and surprise! I can still overvolt it just fine, though; I guess that's what matters for laptops. Essentially, as useful as undervolting can be, Intel doesn't see it as worthwhile compared to "security."
In the current power-draw and throttling situation, and given how well these chips bin, it may be possible to hold the original stock clocks at a lower voltage. That would mean lower power draw, lower temperatures, and less thermal throttling, which in turn can mean better performance. Hence, for many workloads, undervolting may reduce temperatures and power consumption and increase performance. Plundervolt worked by deliberately inducing instability in the processor, so setting an undervolt should require administrator privileges; but if the processor is fully stable, there is no vulnerability. Undervolting is also different from limiting the total power draw: to my knowledge, a power limit leaves the voltage/frequency curve unaltered, while undervolting shifts the whole curve down, so for a given power draw you get more computing power.
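Back-of-the-envelope illustration of that last point, using the classic dynamic-power approximation P ≈ C·V²·f and ignoring leakage (the voltages below are made up purely for illustration):

#include <stdio.h>

int main(void) {
    /* Same clock in both cases; only the voltage changes. */
    double v_stock = 1.30;  /* hypothetical stock voltage (V) */
    double v_uv    = 1.20;  /* 100 mV undervolt, assumed still stable */

    double ratio = (v_uv * v_uv) / (v_stock * v_stock);
    printf("dynamic power at the same frequency: %.0f%% of stock\n",
           ratio * 100.0);  /* prints ~85% */
    return 0;
}

So a stable 100 mV undervolt alone trims roughly 15% of dynamic power at unchanged performance, which is headroom a plain power cap can't give you without dropping clocks.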
 
I am happy with my CPU. It is only an 11400F, but it absolutely destroys my old CPU: it gets over 10,000 in Cinebench and my old one (an FX-8350) barely hit 3,800... I actually found something crazy that appears to be a flaw with AMD CPUs: I can now play the old version of Unreal Tournament, Prince of Persia, and Soldier of Fortune after not being able to play them on an AMD CPU for years. I know it is not video-card related, as I am running a 6700 XT, and even my old GTX 750 was unable to play these games...
 
Optimizing the microcode for specific benchmarks is risky, because you don't know that it won't blow up in your face with some other workload that becomes popular in the next year.
That's a fact. It's better to just let your product live or die on its own real merits. Both AMD and Intel make fine products, so it's not like there's much of a risk.
That said, I was wondering whether AMD tuned its branch predictor on things like 7-zip's decompression algorithm, or if it just happens to work especially well on it.
I seriously doubt that because (IIRC) while AMD does beat Intel in archive decompression, Intel usually beats AMD in compression. Then there's the fact that archive compression/decompression isn't that big of a deal. If I were AMD or Intel and wanted to tune my CPU's performance to a specific benchmark, it would be Cinebench, not 7-Zip. That's especially true because, sure, 7-Zip has become the de facto standard for archive compression these days, but we're not too far removed from the days when WinRAR was the test used by many reviewers.

To be clear, what I'm most concerned about is that some software is rigged to work well on Intel CPUs (or AMD, though less likely). Intel has done this before, in some of their first-party libraries (Math Kernel Library, IIRC). And yes, we've seen games use libraries that effectively do the same thing for GPUs (who can forget when Nvidia had a big lead in tessellation performance?).
I agree with you 100% because I want truth and reality, just like you do. Underhanded tactics by Intel (like their compiler) are the reason why I won't support them by purchasing their products; same with nVidia. I don't worry as much about things like this as I used to, however, because review sites have evolved over the years to drop the more absurd comparisons.

Hell, I remember when sites were doing stupid things like comparing the i7-920 to the Phenom II X4 940, a CPU that was meant to compete with the Core 2 Quad Q9400. The excuse given was "best vs. best" and I said, "Sure, but we're talking about a platform price difference of over $500!" Things have clearly gotten better than those insane comparisons of old.
 
  • Like
Reactions: bit_user
Intel has at least once in the past disabled the ability to undervolt. Look up the "Plundervolt" vulnerability. Basically, around the 7th- and 8th-gen CPUs, it was discovered that, under very specific conditions most users would never encounter, undervolting enabled an exploit. The solution: push a Windows update preventing the CPU from being set below stock voltage. I have a Kaby Lake in a laptop that was undervolted a good 0.2 V, which knocked a good 10°C off temps. One day it started running hotter, and surprise! I can still overvolt it just fine, though; I guess that's what matters for laptops. Essentially, as useful as undervolting can be, Intel doesn't see it as worthwhile compared to "security."
I wish I knew how to undervolt craptops. The problem is that I've never seen a craptop BIOS that allows that level of customisation. All of the ones I've owned were more or less locked into an "as-is" mode.
 
Paul's review is the first thorough and balanced one for the i9-13900KS chip. He has actually taken the time to research the features, power limits, etc. Most reviewers chuck it in a Z690/Z790 board, leave it at stock, and their observations amount to "cor blimey guv'nor, she's a hot one, and no mistake!" I have gotten some insane benchmarks from my i9-13900K and i9-13900KS systems. After benchmarking, I set the RAM and power limits to Intel spec and still whoop the pants off the benchmarks from the i9-12900K I owned previously, safe in the knowledge that if I had a workload that was struggling, I could open up the taps some more. Stability over performance every time, but as much performance as possible, please :)
 
  • Like
Reactions: bit_user
320 watts!
Intel didn't specifically comment on Tejas's power issue, but Otellini confirmed that "thermal considerations" were at the root of what he called "a hard right-hand turn" in the company's roadmap.

"If we had not started making this right-hand turn, we would have run into a power wall," Otellini said. "You just can't ship desktop processors at 150 W."
-Intel President and CEO Paul Otellini on the cancellation of the Pentium V Tejas project in 2004, which had been expected to scale to 10 GHz but ran into insurmountable power-draw problems long before that. This, of course, is what led to the multi-core Core 2.
 
  • Like
Reactions: bit_user
320 watts!

-Intel President and CEO Paul Otellini on the cancellation of the Pentium V Tejas project in 2004, which had been expected to scale to 10 GHz but ran into insurmountable power-draw problems long before that. This, of course, is what led to the multi-core Core 2.
Exactly: you can't ship a SINGLE-CORE CPU that uses 150 W for just one core.
The 13900 uses 32 W for one core if not overclocked, 10 W (about 25%) less than the 7950X.
320 W is the headroom you have for overclocking on that chip. You pay all of that money for an unlocked CPU so you can push it; the eco-mode CPUs from Intel end in T, not KS: the 13900T.
[attached chart: single-threaded power consumption]
 
Exactly: you can't ship a SINGLE-CORE CPU that uses 150 W for just one core.
That's a red herring, as it was still during the era of single-core CPUs. You're spinning like a top.

The 13900 uses 32 W for one core if not overclocked, 10 W (about 25%) less than the 7950X.
Depending on which benchmark you cherry-pick. The thing is, if you really care about single-threaded efficiency, then you'd probably get a smaller CPU, like the 7700X or the 7600X.
[attached chart: single-threaded efficiency]

And if you care about multi-thread efficiency, then Intel simply has no answer for AMD:
[attached chart: multi-threaded efficiency]
I guess that's why they're losing so much data-center market share to AMD.
 
  • Like
Reactions: Avro Arrow
That's a red herring, as it was still during the era of single-core CPUs. You're spinning like a top.
That's why I pointed it out so clearly. It's not me spinning; it's bfg-9000 who brought up a single-core thing as if it were relevant to the KS.
Depending on which benchmark you cherry-pick.
That's funny coming from you, since you always cherry-pick only multithreaded Cinebench and server workloads.
I was answering bfg, who was talking about single-thread, and you didn't skip a beat before pivoting to multi-threading and servers.
And if you look at the single-thread Cinebench chart that you posted, the 13900K, even with unlimited power, is still more efficient than the 7950X by a good amount; it's still above 25%.
 
it's not me spinning; it's bfg-9000 who brought up a single-core thing
Nope.

If multi-core x86 CPUs had existed back then, you might have a point.

And if you look at the single-thread Cinebench chart that you posted, the 13900K, even with unlimited power, is still more efficient than the 7950X by a good amount; it's still above 25%
In general, as you scale core counts, single-threaded efficiency drops. This is hardly news.

As I pointed out, someone primarily interested in optimizing the efficiency of a lightly threaded workload can't do better than the Ryzen 5 7600X. If you're primarily interested in highly threaded efficiency @ stock, you can't do better than the Ryzen 9 5950X, though only because the PPT on the stock 7950X is so high (230 W, vs. 142 W on the 5950X).