News Intel Launches $699 Core i9-13900KS, the World's First 6 GHz CPU: Available Now


RichardtST

Notable
May 17, 2022
222
233
960
I just respond to this with "P cores are big cores, Zen cores are middle cores and E cores are little cores. Why is the cutoff for good cores at middle cores and not big cores? Why is there a cutoff at all if they all contribute?"
Because "they all contribute" is only valid in a fully parallel workload, which 99% of the real world is not. Why not use a rack of cheap 2mhz cores instead? Because you still need reliable full-power cores to get the little jobs done in a reasonable amount of time. How much faith do you have that OS schedulers can guess which things need to get done quickly, and which do not? "None" is the only correct answer.
 

TJ Hooker

Titan
Ambassador
Also TechPowerUp says that: "Temperatures are based on delta T, normalized to 25°C." Is there anybody specialized in thermodynamics who can explain what this means?!
The way I understand it, they measured the temperature delta (CPU temp minus ambient), then added 25 ("normalized to 25°C") to that value to get the numbers in the chart. E.g. let's say they measured a 13900K temp of 110°C with an ambient of 18°C, for a delta of 92. Then they add 25 to get the published result of 117°C, which is supposed to represent the temperature you'd see if you were running the same test in a 25°C room.

I think it would have been better to just report the delta values, because the way they did it results in values that don't really make sense, e.g. the 13900K operating at 117°C (when in reality it would throttle, or probably shut down outright if necessary, before ever reaching that temp).
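Spelled out as a tiny sketch (this is just my reading of their method, using the hypothetical 110°C / 18°C numbers from above, not TechPowerUp's actual code):

```python
# Sketch of how "delta T, normalized to 25°C" appears to be computed.

def normalized_temp(measured_c: float, ambient_c: float, reference_c: float = 25.0) -> float:
    delta = measured_c - ambient_c   # CPU temperature minus room temperature
    return delta + reference_c       # re-based to a 25°C room

print(normalized_temp(110, 18))  # 117.0 - the kind of "impossible" value the chart shows
```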
 

rluker5

Distinguished
Jun 23, 2014
468
270
19,060
Because "they all contribute" is only valid in a fully parallel workload, which 99% of the real world is not. Why not use a rack of cheap 2mhz cores instead? Because you still need reliable full-power cores to get the little jobs done in a reasonable amount of time. How much faith do you have that OS schedulers can guess which things need to get done quickly, and which do not? "None" is the only correct answer.
W11 is pretty good with Intel.
And since Zen cores are less performant than P cores, does that mean they are worthless?
They are still better than FX cores and most older ones. Same with E cores, which are admittedly slower and smaller, and consume less power.
That argument against E cores applies to Zen cores as well.
 
  • Like
Reactions: KyaraM
Intel said it was coming, so no surprises there. I guess the amount of power it may use, depending on the load and the cooling available, is no surprise either.
The binning process helps (among other things) to set aside the best chips for the highest-performance parts, and it helps find which of those can go even above expectations. But as with everything in tech there's a point of diminishing returns: at least 26% more wattage than the i9-13900K, given the right workload, power delivery and cooling capacity.

Also, lol: what better time to announce it than after dozens of reviewers around the globe spent days talking about the upcoming Ryzen 7xxx X3D?

I guess next month will be very interesting in the CPU department.
 
Last edited:

truerock

Distinguished
Jul 28, 2006
292
38
18,820
It's still only 8 full-speed cores. The others are just garbage. I'll take AMD's 16 real cores any day. Go home Intel. You're toast.

I agree.
Low-performance cores in a desktop PC CPU are a very stupid idea.
Almost as stupid as an internal GPU in a top end desktop PC CPU.
I don't know... which is stupider?

But, I've noticed that Apple is putting a couple of low performance cores in their desktop PC CPUs. So, maybe a couple of low performance cores are OK - but, 16 low performance cores seems really stupid.

But, I'm still trying to figure the idea of mixed core types out.
 

cyrusfox

Distinguished
Sep 24, 2009
395
209
19,190
But, I've noticed that Apple is putting a couple of low performance cores in their desktop PC CPUs. So, maybe a couple of low performance cores are OK - but, 16 low performance cores seems really stupid.

But, I'm still trying to figure the idea of mixed core types out.
They are great for encoding. Windows does do a poor job scheduling them; Process Lasso fixes this. But the main task usually ends up on the P cores anyway.

Hardware Unboxed had to use the AC LF II 420 and it still throttled under full workload. The be quiet! 360 was actually unable to keep up at all.

View: https://youtu.be/UNDxKQP1_FQ
That's an engineering sample. Yes, maybe it is the same as final silicon, but there is no way to know... Need to wait for real reviews. It's a good way to get views, but I have low confidence in the results because the testing was done on an engineering sample.
 
  • Like
Reactions: KyaraM and truerock

DavidLejdar

Prominent
Sep 11, 2022
181
96
660
Much like the high-end AMD chips - this isn't where the market is and very few people need it. Nonetheless, the fanboys on both sides will gear up with narratives about how efficiency is now more important than capability, or power is more important than efficiency.

Meanwhile, millions of sensible gamers will buy a processor like the 13400/7600 that's £200, meets their needs, plays everything at a realistic refresh rate for the next few years, and never look at it again before doing the same thing in three years' time.

Instead of yapping about which £600+ processor with a gazillion cores can hit what clock speed, I'd much rather see articles on how millions of productivity users and gamers can get the best bang for the buck over the next three years, and with what hardware combos.
I have a 7600 here, and it sure runs fine. I do get 100% usage on it at times though (gaming at 1440p), so I appreciate that there are sources online which take a look under the hood, so to speak. The benchmark results in particular give a picture of the performance differences one can expect, even if they aren't always as extensive as I'd like. I'm now measuring some of that myself where I can, such as how much data a given game loads into system memory at once (and I'm making a video about that topic).

This isn't to say that there is no point in providing build suggestions; there was, e.g., an article on here recently about a budget build for 1440p gaming. But even leaving aside price fluctuations, when the question is "What is a good build for PC gaming?", the counter-question usually is "Which gaming in particular?". And the answers to that cover a wide range, going from
  • individual titles (where the minimum and recommended specs are listed on the game page),
  • genres (where someone playing a city builder may benefit more from putting emphasis on CPU performance),
  • "whichever game may be released this year" (about which it isn't really clear at this point which of these will utilise DirectStorage v1.1 already, and how much it will diminish the role of a CPU, if at all),
  • to which resolution and FPS are actually expected.

And at least personally, I would find it quite difficult to suggest a "build which fits every gaming scenario at the lowest cost", let alone for the next three years, as there are a number of variables, as I tried to point out. One can narrow it down a bit though - for instance, PC ports of Xbox and PS5 releases are not likely to have higher hardware requirements than those consoles do. But then again, perhaps someone releases a game primarily for PC which makes heavy use of the CPU and SDRAM, and a "console-level build" may not be good enough for such a game.
 

King_V

Illustrious
Ambassador
You can easily power limit it. If you power limit both 13900k and 13900ks to the same 253W, the ks chip will still outperform the k chip by a noticeable margin.
How so? Even if given the full power, it's only 0.2 GHz more.

That's only a 3.4% increase from the K model.

You think that giving it LESS headroom, power-wise, thus LESS than the 3.4% increase (and that's NOT on all cores) will give a NOTICEABLE increase?

How? Assuming you're COMPLETELY bound by the CPU, who do you know that can tell the difference between 100fps and 103fps?
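Checking the math (assuming the usual 5.8 GHz max boost for the K and 6.0 GHz for the KS):

```python
# Back-of-the-envelope check on the clock bump (5.8 vs 6.0 GHz boost clocks assumed).
k_boost, ks_boost = 5.8, 6.0
gain = (ks_boost - k_boost) / k_boost
print(f"{gain:.1%}")                   # ~3.4%
print(f"{100 * (1 + gain):.0f} fps")   # 100 fps becomes ~103 fps in a purely CPU-bound case
```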
 
  • Like
Reactions: Elusive Ruse

rluker5

Distinguished
Jun 23, 2014
468
270
19,060
I agree.
Low-performance cores in a desktop PC CPU are a very stupid idea.
Almost as stupid as an internal GPU in a top end desktop PC CPU.
I don't know... which is stupider?

But, I've noticed that Apple is putting a couple of low performance cores in their desktop PC CPUs. So, maybe a couple of low performance cores are OK - but, 16 low performance cores seems really stupid.

But, I'm still trying to figure the idea of mixed core types out.
AFAIK : P cores main thread > Zen cores main thread > E cores > Zen cores SMT thread > P cores HT thread.
Should we disable SMT/HT?
 

TJ Hooker

Titan
Ambassador
I agree.
Low-performance cores in a desktop PC CPU are a very stupid idea.
Almost as stupid as an internal GPU in a top end desktop PC CPU.
I don't know... which is stupider?

But, I've noticed that Apple is putting a couple of low performance cores in their desktop PC CPUs. So, maybe a couple of low performance cores are OK - but, 16 low performance cores seems really stupid.

But, I'm still trying to figure the idea of mixed core types out.
E cores are more efficient in terms of performance per watt and performance per die area. So for a given power envelope (which does still matter for desktops) and a given die size, E cores offer greater total performance than P cores. You just need enough P cores to handle whatever latency-sensitive processes you're running. Once you have those to handle lightly threaded loads, adding more E cores really seems to be the best way to add more performance.

Of course, you need a scheduler smart enough to assign the right processes to the right cores. I have no experience with how well that's currently working for Alder/Raptor Lake.
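As a rough back-of-the-envelope illustration - the area and throughput ratios below are ballpark assumptions for this class of cores, not exact figures:

```python
# Illustrative only: assume ~4 E-cores fit in the die area of 1 P-core,
# and each E-core delivers roughly half the throughput of a P-core.
P_CORE_AREA = 1.0
E_CORE_AREA = 0.25
E_CORE_THROUGHPUT = 0.5   # relative to one P-core = 1.0

area_budget = 4.0  # same die area as four P-cores

p_only = (area_budget / P_CORE_AREA) * 1.0                 # 4 P-cores -> 4.0 units
e_only = (area_budget / E_CORE_AREA) * E_CORE_THROUGHPUT   # 16 E-cores -> 8.0 units

print(p_only, e_only)  # the E-core pool wins on total throughput, but it has
                       # nothing fast left for latency-sensitive threads
```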
 

truerock

Distinguished
Jul 28, 2006
292
38
18,820
E cores are more efficient in terms of performance per watt and performance per die area. So for a given power envelope (which does still matter for desktops) and a given die size, E cores offer greater total performance than P cores. You just need enough P cores to handle whatever latency-sensitive processes you're running. Once you have those to handle lightly threaded loads, adding more E cores really seems to be the best way to add more performance.

Of course, you need a scheduler smart enough to assign the right processes to the right cores. I have no experience with how well that's currently working for Alder/Raptor Lake.

So, I pay $0.10 to $0.12 per kWh for electricity for my desktop PC. I really, really, really don't care how much electricity it uses or how much heat it generates.
 

truerock

Distinguished
Jul 28, 2006
292
38
18,820
AFAIK : P cores main thread > Zen cores main thread > E cores > Zen cores SMT thread > P cores HT thread.
Should we disable SMT/HT?

I've never been a huge fan of SMT/HT.
I guess it works in some situations.

Intel doesn't use SMT/HT in its Xeon processors - correct?
 

BeedooX

Reputable
Apr 27, 2020
70
51
4,620
Most of the CPUs are really very decent these days, but I'm not particularly impressed by overclocking the crap out of them / upping the voltage to claim these highs. At least with V-Cache - whilst it is just bolting on more cache - it's a design that's outside of plain old overclocking.
 
  • Like
Reactions: cyrusfox
Because if nothing else, I can count on Windows to put my one timing-critical-speed thread on one of the slower cores. Every time. Without fail. Would you buy a computer with a thousand 200 MHz Z80s? Even if the total work done by all the threads combined is higher? How about a rack full of raspberry pies? No, of course not. I program in real-time, and I push my cores to their limits. Last thing I need is more cheap gimmicks.
https://www.techpowerup.com/review/amd-ryzen-9-7950x/26.html
If you have 16 real cores, you still have hugely different speeds between running a single core and having all cores loaded.
In the case of Ryzen you have the cross-CCD latency on top of that, and AMD is also forced to use a slightly lower-binned second CCD.
So at the end of the day, even with 16 real cores, you can still count on Windows to put your one timing-critical-speed thread on one of the slower cores. Every time. Without fail.

Also, if your thread is so critical, it would be launched with high priority, which would put it on the preferred core - that has been a thing for years now.
If you have any say in how that thread launches, you can also create an affinity mask for it so it will always run on the fastest core you have.
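For what it's worth, a minimal sketch of both ideas (priority plus an affinity mask) using Python's psutil - and assuming, purely for illustration, that core 0 is the preferred/fastest core on your particular chip:

```python
# Sketch: pin the current process to one core and raise its priority.
# Assumes core 0 is the "preferred" core, which varies from chip to chip.
import os
import sys
import psutil

proc = psutil.Process(os.getpid())
proc.cpu_affinity([0])                     # affinity mask: only run on core 0 (Linux/Windows)

if sys.platform == "win32":
    proc.nice(psutil.HIGH_PRIORITY_CLASS)  # Windows priority class
else:
    proc.nice(-5)                          # Unix niceness (needs elevated privileges)
```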
 
  • Like
Reactions: KyaraM

ottonis

Reputable
Jun 10, 2020
158
124
4,760
"That's good for all of us, even though it does increasingly come at the cost of prodigious power consumption from both players."

Not really.

The 4-core Sandy Bridge Core i7-2700K from 2011 had a TDP of 95W.
AMD's Ryzen 9 7900 (12 cores) has a TDP of 65W, despite having many times the MT performance.

AMD shows what rational chip design is.
On the other hand, Intel squeezing out some 5-10% more performance at astronomically higher power usage is just hilarious.
I don't see any target group for this KS CPU variant. Gamers don't really need it, professional rendering will rather take last-gen Threadrippers, and servers will like the Epycs and Xeons.

This KS, however, is only good for benchmarks and nothing else.
 
AMD shows what rational chip design is.
On the other hand, Intel squeezing out some 5-10% more performance at astronomically higher power usage is just hilarious.
Quite the opposite: AMD isn't rational, they just have a low-power server part that they are desperately trying to make work as a desktop part.

With an EKWB EK-AIO Elite 360 D-RGB 360mm,
the 7950X can't even reach the power limit set by AMD because it already hits the ~95-degree thermal limit.
The 13900K reaches 330W and stays well below the temp limit, at 86 degrees.
So which company pushes their CPUs more?
The one that tries to make their CPUs use more power than they can handle, or the one that leaves a good 30% of power headroom?
(If people would actually respect the limit that Intel sets.)
You are not supposed to use a desktop part for server workloads that would make your CPU run at 100% power all the time anyway.
(chart attachments: 130462.png, 130799.png)


I don't see any target group for this KS CPU variant. Gamers don't really need it, professional rendering will rather take last-gen Threadrippers, and servers will like the Epycs and Xeons.
Professionals like designers, architects, video editors and so on do need the grunt for the final renders, but they also have to deal with effects or calculations that only run single-threaded, so if those can go even a little bit faster, that's going to be a good reason for them to get a KS.
 

abbujan

Reputable
Mar 12, 2019
2
0
4,510
Because if nothing else, I can count on Windows to put my one timing-critical-speed thread on one of the slower cores. Every time. Without fail. Would you buy a computer with a thousand 200 MHz Z80s? Even if the total work done by all the threads combined is higher? How about a rack full of raspberry pies? No, of course not. I program in real-time, and I push my cores to their limits. Last thing I need is more cheap gimmicks.
If you don't mind me asking, what do you program in real time that pushes those kinds of loads? I'm a programmer myself, though of a different kind.
 
Quite the opposite: AMD isn't rational, they just have a low-power server part that they are desperately trying to make work as a desktop part.

With an EKWB EK-AIO Elite 360 D-RGB 360mm,
the 7950X can't even reach the power limit set by AMD because it already hits the ~95-degree thermal limit.
The 13900K reaches 330W and stays well below the temp limit, at 86 degrees.
So which company pushes their CPUs more?
The one that tries to make their CPUs use more power than they can handle, or the one that leaves a good 30% of power headroom?
(If people would actually respect the limit that Intel sets.)
You are not supposed to use a desktop part for server workloads that would make your CPU run at 100% power all the time anyway.
(chart attachments: 130462.png, 130799.png)



Professionals like designers, architects, video editors and so on do need the grunt for the final renders, but they also have to deal with effects or calculations that only run single-threaded, so if those can go even a little bit faster, that's going to be a good reason for them to get a KS.
You're not reading that graph correctly, as it clearly states 253W for the Intel 13900K.

Look at Hardware Unboxed's numbers on cooling it under all-core workloads using a 420mm (!) AIO:
View: https://youtu.be/UNDxKQP1_FQ?t=249


And it wasn't even drawing the full 320W, "only" 280W.

This being said, and as I said in another comment: AMD clearly has a problem with the IHS and they're not willing to accept it, which irks me.

EDIT: Really interesting in this context!
View: https://www.youtube.com/watch?v=OGp-XQJ-JW8


Regards.
 
Last edited:
  • Like
Reactions: Roland Of Gilead

salgado18

Distinguished
Feb 12, 2007
888
292
19,370
Quite the opposite: AMD isn't rational, they just have a low-power server part that they are desperately trying to make work as a desktop part.

With an EKWB EK-AIO Elite 360 D-RGB 360mm,
the 7950X can't even reach the power limit set by AMD because it already hits the ~95-degree thermal limit.
The 13900K reaches 330W and stays well below the temp limit, at 86 degrees.
So which company pushes their CPUs more?
The one that tries to make their CPUs use more power than they can handle, or the one that leaves a good 30% of power headroom?
(If people would actually respect the limit that Intel sets.)
You are not supposed to use a desktop part for server workloads that would make your CPU run at 100% power all the time anyway.
(chart attachments: 130462.png, 130799.png)
This graph says two things:
  1. AMD has a harder time sticking to the set TDP than Intel, and since it goes above that, it also produces more heat.
  2. Intel does output less heat, but what is the performance behind that heat/power? Is it using E-cores? Is it as fast as the 7950X? I highly doubt it, but in any case, without performance numbers at these TDP settings, this is partial information and not very useful.
Edit: found the original article, and in every single benchmark (one example below), the Ryzen is a lot faster than the i9 at every TDP except stock settings, which means the i9 needs to rely heavily on E-cores to keep power/heat low. The big.LITTLE architecture that Intel used is a mess, and their cores are just bad compared to AMD's, even if one of them (the P-cores) has very fast peak performance.

(chart attachment: 130507.png)


Professionals like designers, architects, video editors and so on do need the grunt for the final renders, but they also have to deal with effects or calculations that only run single-threaded, so if those can go even a little bit faster, that's going to be a good reason for them to get a KS.
Also, professionals aim for efficiency, since they use their computers for a long time and rendering uses up a lot of power. If I'm not mistaken, the Ryzens have better performance per watt on renders. And shaving 0.2s off a tool is not a good investment.

Those who need a lot of single-threaded power will benefit greatly from going Intel, while full-core efficiency is better served by AMD. That said, the only reason to get such a power-hungry chip for an extra 0.2GHz (that still needs exotic cooling to achieve) is bragging rights.
 
Last edited:
  • Like
Reactions: ottonis

jkflipflop98

Distinguished
Because if nothing else, I can count on Windows to put my one timing-critical-speed thread on one of the slower cores. Every time. Without fail. Would you buy a computer with a thousand 200 MHz Z80s? Even if the total work done by all the threads combined is higher? How about a rack full of raspberry pies? No, of course not. I program in real-time, and I push my cores to their limits. Last thing I need is more cheap gimmicks.

I mean, if you really are a programmer, much less a super programmer who writes cutting-edge code that pushes the most bleeding-edge hardware on the planet to its limits - one would think you'd know how to force your ohmygoshitssocritical thread onto the fastest core possible anyway. Maybe such a command doesn't exist. Maybe you should program it.
 
  • Like
Reactions: KyaraM

salgado18

Distinguished
Feb 12, 2007
888
292
19,370
Because if nothing else, I can count on Windows to put my one timing-critical-speed thread on one of the slower cores. Every time. Without fail. Would you buy a computer with a thousand 200 MHz Z80s? Even if the total work done by all the threads combined is higher? How about a rack full of raspberry pies? No, of course not. I program in real-time, and I push my cores to their limits. Last thing I need is more cheap gimmicks.
You can force thread affinity in .NET, at least. But you are probably hitting a power/heat wall - is your PC well cooled? Intel P-cores are very power-hungry; if you push it too hard, it just gives up on them and goes to the E-cores.
 
