Question: Why do high-end CPUs have lower frequencies?

Oct 13, 2019
This might be a dumb question and someone has probably asked it before, but I really wonder why those "high end" CPUs have lower frequencies. An example would be a $2K Threadripper with only a 2 GHz base speed that can only turbo to 3 GHz, while a $200 i7 has a 3.5 GHz base speed and can turbo to 5+ GHz.
 
Apples and oranges, you just can't compare CPUs like that. Not by frequency alone anyway.
 

High-end CPUs, e.g. Threadripper and Intel's Xeons, are generally targeted at professional markets rather than the enthusiast market. They have features to support that, with massive connectivity and memory capacity, which generate a lot more heat inside the processor, so they get somewhat more conservative clock specifications to go along with it.

And there's this too: professionals demand absolute reliability and stability. They will start processes, like a complex scientific or financial simulation, that run for weeks at a time on a computer installed in a chilled server room (to keep it cool) and on a UPS (to maintain stable power) for the duration of the task. So everything is designed around that drive for reliable stability.
 

InvalidError (Titan, Moderator)
High core count CPUs have lower base clocks for two main reasons: it is more difficult to produce silicon able to sustain high clocks across more cores, and it is also necessary to keep TDPs manageable. If you slapped four 3800X-class CCDs under the same IHS, you'd need a cooling solution able to handle a 400+ W TDP.
 
Are we perhaps looking at 32-core 2990X variants and attempting to compare them to mainstream 6- and 8-core variants? If we increase the core count by 4x, the power usage and heat would be roughly 4x as well, so lower base clocks control heat and keep TDP/power usage reasonable.

When Intel infamously ran a 28-core sample at 5 GHz, it not-so-cleverly semi-hid the small chiller used for sub-ambient liquid cooling, capable of handling 1000 watts of heat. Actual testing on some mainboards showed upwards of 500-650 watts through the CPU behemoth alone. If 8 cores at 5 GHz hit ~165 watts of power draw, you can count on 28 cores to scale roughly accordingly (about 575 watts peak).
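As a rough back-of-envelope sketch of that scaling (assuming package power scales roughly linearly with core count at a fixed clock, and using the rough figures quoted in this thread rather than measured specs):

```python
# Back-of-envelope estimate: package power scaling roughly linearly
# with core count at a fixed clock speed. Reference figures are the
# rough numbers quoted in this thread, not measured specifications.

def scaled_power(reference_watts: float, reference_cores: int, target_cores: int) -> float:
    """Estimate package power when the core count changes but clocks do not."""
    return reference_watts * target_cores / reference_cores

if __name__ == "__main__":
    # ~165 W for 8 cores at 5 GHz -> ~578 W for 28 cores (close to the ~575 W above)
    print(f"28 cores at 5 GHz: ~{scaled_power(165, 8, 28):.0f} W")
    # Four 3800X-class CCDs (8 cores / ~105 W each) under one IHS -> ~420 W
    print(f"32 cores from four CCDs: ~{scaled_power(105, 8, 32):.0f} W")
```

Real chips scale somewhat better than this because voltage can drop as clocks come down, but it illustrates why base clocks fall as core counts climb.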
 
If the Ryzen 3900X is any indication, with one CCD often clocking like a 3600/3600X and the other more like a 3800X to hit the max boosts, Threadrippers may be more along the lines of one 3800X+ CCD for max boost clocks and 3700X-bin CCDs for the others.
I'm not too sure a strategy that works for what is a comparatively low-cost desktop processor will work for Threadripper. TR's proper target users will do things that expect all cores to perform similarly. For instance, I have to think running multiple virtual machines could end up with one being an unpredictably gimped shadow of the others if the cores aren't fairly well matched. Being unpredictable, such performance scenarios can be extremely problematic.

AMD used to claim TR gets 'the best of the best' (binned) CCDs. Agreed, that's probably not so true now, as Rome would (mostly) get them, but I'm still pretty sure they'll try to balance the system as well as possible.

I'm also of a mind that TR's target market (high-cost HEDT) along with Rome's (enterprise servers) is where AMD is probably making the greatest progress and impact. These are rational buyers making huge purchase decisions, often involving thousands of machines at a time, based on logical performance and cost metrics. I'm very certain AMD is interested in making good decisions to influence these buyers.
 
Two big reasons:
  1. Different architectures, so 5 GHz here is not 5 GHz there. A Pentium 4 could run at 5 GHz (even 8 GHz), but even then it was weaker than a current i5.
  2. Thermal headroom. As you add more cores, they fight for power and heat dissipation. Compare a 32-core part to a 4/6-core part: it doesn't even have twice the TDP, so some cores get less power so that high-priority ones can boost as high as needed (rough per-core numbers sketched below).
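A quick sketch of that per-core power budget, using illustrative TDP figures (the 250 W and 105 W numbers below are assumptions for the sake of the example, not exact specs for any particular SKU):

```python
# Illustrative sustained power budget per core within a fixed TDP.
# The TDP and core-count figures are assumptions for illustration only.

def watts_per_core(tdp_watts: float, cores: int) -> float:
    """Average power budget per core if all cores are loaded at base clocks."""
    return tdp_watts / cores

if __name__ == "__main__":
    print(f"32-core part @ 250 W TDP: ~{watts_per_core(250, 32):.1f} W per core")  # ~7.8 W
    print(f" 8-core part @ 105 W TDP: ~{watts_per_core(105, 8):.1f} W per core")   # ~13.1 W
```

Less sustained power per core translates directly into a lower all-core base clock, even though a lightly loaded core can still boost much higher.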
 
And you'll get all cores performing similarly... at some lower-than-max-boost clock, once more than a handful of cores, all conveniently located in the best CCD, are simultaneously active.
Which begs the question: why put differently binned dies on the CPU in the first place? Looking at how they're spec'd, I don't think they'd need to play a specsmanship game with those CPUs.

Someone else, in other posts, has suggested TR and Rome will be binned for different things than desktop parts. They suggested that TR and Rome, being very densely packed, would get dies that run cool at low voltages. Whether or not that equates to a 3800X-class CCD (since it might leave the margin necessary for crazy-high boosting to 4.5 GHz) I can't say. But what if it does also describe a 3800X? All I can say is most 3800Xs run quite a bit cooler, and at lower voltage, than my 3700X.
 
