News Per-core thermal throttle reportedly coming to next-gen Intel CPUs — Arrow Lake leverages Fast Throttle for enhanced overclocking

King_V

Illustrious
Ambassador
So, is this sort of like PWM for CPU cores? It's either on at full speed or off?

The cynic in me wonders if this is a way for Intel to advertise that they never throttle back block speeds.
 
So, is this sort of like PWM for CPU cores? It's either on at full speed or off?

The cynic in me wonders if this is a way for Intel to advertise that they never throttle back block speeds.
What's a back block speed?!
Also, they would have to advertise that their CPUs basically just shut off; how is that going to be any good advertisement? At least low speeds are speeds, off is no speed at all.
 
  • Like
Reactions: KyaraM

NinoPino

Respectable
May 26, 2022
496
310
2,060
Terrible article. It seems to have been written by AI.
Starting with the fact that the article says nothing new, because (according to its own content) the described feature is already present, identically, in 14th-gen processors.
The fact that "Fast Throttle debuted on 13th Generation Raptor Lake processors" is repeated almost identically three times (as if readers cannot remember what they read a few seconds ago).
At the start of the article you say "It provides an alternative (more performant) method of temperature-related throttling..." and then contradict this in the conclusion, saying "...has not been proven to be decisively better than all other thermal throttling mechanisms".
In the functional description of Fast Throttle, there are a lot of false statements.
You wrote "Clock modulation is a technique that turns the physical CPU clock on or off to change a chip's performance and power consumption." The technique is "Fast Throttle" (not clock modulation), which uses a "clock modulation" signal to turn the clocks of individual cores (not the whole CPU) on and off. The physical CPU clock is not affected at all.
You wrote "Clock modulation performs the same capabilities as frequency/voltage changes". This is simply false, as you yourself say in the article.
 

JRStern

Distinguished
Mar 20, 2017
178
67
18,660
Yah, I couldn't make any sense out of this either.
How is it done? Little thermometers, or monitoring power consumption and inferring the temperature effects? And if it "turns off the clock", for how long? Is it going to hang programs, or what?
 

kjfatl

Reputable
Apr 15, 2020
216
157
4,760
This method of clock control makes a lot of sense in a system with multiple processor cores running at different effective speeds while utilizing common resources such as cache. Based on the description I read in the article, the base clock used by a group of processors, along with shared resources such as cache, can run at a common rate. Power reduction is obtained by using clock enables per core.

Instead of running a clock enable to each register in the design, the clock for the core is modified to have fewer edges, but the edges are all still synchronous with the common system clock. This is particularly useful if the processor nominally runs at a faster rate than the cache. Leaving out the details, it eliminates the need for most synchronization logic in the data path, since everything is running from a common clock. It also allows for very fine-grained speed reductions, as sketched below.
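To make that concrete, here is a toy simulation (my own sketch; the 8-slot window and the edge-spreading pattern are illustrative assumptions, not anything from the article). A per-core enable masks edges of the shared base clock, so every edge a core sees still coincides with a base-clock edge, and the effective frequency scales with the fraction of enabled edges:

Code:
# Toy model: a per-core clock enable masks edges of a shared base clock.
# Every surviving edge lines up with a base-clock edge, so no PLL relock
# or cross-domain synchronization logic is needed; only the duty changes.

def gated_core_clock(base_edges, duty_eighths):
    """Yield True for each base-clock edge actually delivered to the core."""
    # spread the enabled slots evenly across an 8-edge window
    pattern = [(slot * duty_eighths) % 8 < duty_eighths for slot in range(8)]
    for edge in range(base_edges):
        yield pattern[edge % 8]

BASE_GHZ = 4.0  # hypothetical shared base clock
for duty in (8, 6, 4, 2):
    enabled = sum(gated_core_clock(8, duty))
    print(f"{duty}/8 duty cycle -> effective {BASE_GHZ * enabled / 8:.1f} GHz")

With 8 slots you get speed steps of 1/8; real hardware can use a longer window for finer granularity, all without ever touching the PLL.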
 
  • Like
Reactions: KyaraM

abufrejoval

Reputable
Jun 19, 2020
615
454
5,260
My take on this is that Intel is simply copying what AMD is already doing, and all this obfuscation is about not quite naming and shaming Intel for being a copycat in all these things, like modular chips and extremely fine-grained clock turbos that fluidly adapt to all manner of operating constraints.
 

kjfatl

Reputable
Apr 15, 2020
216
157
4,760
My take on this is that Intel is simply copying what AMD is already doing, and all this obfuscation is about not quite naming and shaming Intel for being a copycat in all these things, like modular chips and extremely fine-grained clock turbos that fluidly adapt to all manner of operating constraints.
 

I don't see this as a negative. What Intel is doing is "standard design practice". First you get an architecture to work, then you refine it. I wouldn't call this extremely fine-grained, but rather brute-force and cheap; extremely fine-grained takes a lot more silicon. That will come too, when power is shut off to parts of a core based on the instruction being executed. It is highly likely that this is occurring as well. AMD, Intel, and now NVIDIA all play off each other and push the window.
A couple of decades ago, Apple was about to go bankrupt. Steve Jobs approached Bill Gates and asked for a handout. His pitch was, "You (Microsoft) need us as competition or your product will get stale and someone else will take over the market." Bill Gates agreed and purchased a large amount of non-voting stock, which kept Apple in business.
 

abufrejoval

Reputable
Jun 19, 2020
615
454
5,260
I don't see this as a negative. What Intel is doing is "standard design practice". First you get an architecture to work, then you refine it. I wouldn't call this extremely fine-grained, but rather brute-force and cheap; extremely fine-grained takes a lot more silicon. That will come too, when power is shut off to parts of a core based on the instruction being executed. It is highly likely that this is occurring as well. AMD, Intel, and now NVIDIA all play off each other and push the window.
A couple of decades ago, Apple was about to go bankrupt. Steve Jobs approached Bill Gates and asked for a handout. His pitch was, "You (Microsoft) need us as competition or your product will get stale and someone else will take over the market." Bill Gates agreed and purchased a large amount of non-voting stock, which kept Apple in business.
I fully agree, for the consumers it's a good thing that Intel is following AMD's lead here.

Now, if only they could just say so publicly, then writers like Mr. Klotz (klutz in German) wouldn't get into such a bind trying to explain that Intel is no longer doing its original type of throttling but what AMD has been doing for years, while making it sound like they invented something new themselves.

Clearly Intel wants writers to push a message, but also in a certain way, which results in writers not doing their very best.

And I guess we'll have similar stories about how Lunar Lake is doing something fantastic and new, even if they are simply following the fruity cult there.

Intel is no longer a leader but desperately wants to look like one, and above all, not risk looking as if they are actually behind.

Mr. Klotz is paying the price for those gyrations and getting roasted for it.

I guess it pays the rent...
 
My take on this is that Intel is simply copying what AMD is already doing, and all this obfuscation is about not quite naming and shaming Intel for being a copycat in all these things, like modular chips and extremely fine-grained clock turbos that fluidly adapt to all manner of operating constraints.
Your take is wrong: AMD does not use modulation to control clocks, and Intel isn't switching to this exclusively. Intel is adding more fine-grained clock controls to Lion Cove, and finer voltage controls, which is like AMD, but this is absolutely not that.
 
  • Like
Reactions: KyaraM

King_V

Illustrious
Ambassador
What's a back block speed?!
Also, they would have to advertise that their CPUs basically just shut off; how is that going to be any good advertisement? At least low speeds are speeds, off is no speed at all.
They would have to mention it somewhere, I imagine. But they wouldn't have to advertise the "shut off" part of it.
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,630
502
2,060
Intel is no longer a leader but desperately wants to look like one, and above all, not risk looking as if they are actually behind.

Mr. Klotz is paying the price for those gyrations and getting roasted for it.
I guess that's why AMD has literally been copying Intel's naming schemes to a T, for both motherboards and CPUs, for the last, what is it now, 7 years?
 

abufrejoval

Reputable
Jun 19, 2020
615
454
5,260
I guess that's why AMD has literally been copying Intel's naming schemes to a T, for both motherboards and CPUs, for the last, what is it now, 7 years?
Technology and market leadership tend to go together for a long time, mostly because being the technology leader allows you to push research harder, too. Intel was able to do that for a long time, until they fumbled the fab.

And during that time AMD (and there used to be other contestants) tried to follow Intel's naming schemes to establish a comparison baseline, which Intel then tried hard to escape, because at those points it typically lost out.

And by the way, that's been the case for a very long time: AMD produced an 8080 clone called the Am9080, the 386 and 486 battled alongside, etc., so this has been going on for forty years. If I remember correctly, my first AMD CPU was an Am486-DX2 80, introduced thirty years ago in 1994 (my first Intel was an 80286 in 1985).

I was exclusively AMD during the entire Socket 7 period and came back to Intel with their Haifa-based mobile designs and the Core architecture, e.g. the Q6600. I ran Intel (mostly) and AMD (a few) side by side until Intel just had nothing reasonable to offer any more after Zen 3.

Well, actually I have quite a few low-power Intels, Atoms or mobile chips up until Alder Lake, but with Strix Point, Ryzen and EPYC, Intel seems behind on just about every front.

What's changed is that Intel is not just momentarily behind, like when x86 went 64-bit, but has been behind for several generations. And it's no longer just clock speeds, but fundamental architectural ideas.

And once scale is no longer on their side, and with new competition flaring up outside x86, they need better ideas to remain in the game.
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,630
502
2,060
Technology and market leadership tend to go together for a long time, mostly because being the technology leader allows you to push research harder, too. Intel was able to do that for a long time, until they fumbled the fab.

Click to expand...
Yada yada, I keep hearing how far behind Intel is. So one would assume that their brand-new R7 9700X should blast the i7-14700K, right? But in reality it will be losing to the old 13700K. Heck, I'm not even sure it will be faster in MT than 2021's 12700K. But yep, Intel is so far behind, rofl.
 

abufrejoval

Reputable
Jun 19, 2020
615
454
5,260
Yada yada, I keep hearing how far behind Intel is. So one would assume that their brand-new R7 9700X should blast the i7-14700K, right? But in reality it will be losing to the old 13700K. Heck, I'm not even sure it will be faster in MT than 2021's 12700K. But yep, Intel is so far behind, rofl.
I am very happy with you and many others buying Intel.

If only because it keeps AMD on their toes and enables more choices.

So please don't let me distract you!
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,630
502
2,060
I am very happy with you and many others buying Intel.

If only because it keeps AMD on their toes and enables more choices.

So please don't let me distract you!
But that doesn't answer my question. Since Intel is so far behind, SURELY the brand-new shining R7 9700X will obliterate both the 13700K and the 14700K, right? There is no way it will have roughly the same MT performance as the 12700K, right?

How can your statement about Intel being far behind be true when AMD's latest CPUs compete with Intel's 3-4 year old ones? Enlighten me, please.
 

KyaraM

Admirable
I guess that's why AMD has literally been copying Intel's naming schemes to a T, for both motherboards and CPUs, for the last, what is it now, 7 years?
Don't forget hybrid big.LITTLE CPUs, which AMD is now using after Intel, too. Technically the concept comes from CPUs commonly used in phones, I guess, but IIRC 12th gen was a first for desktop x64 chips, and there is no denying that Intel was first here. Plus, their chiplets are different from AMD's. It's not 1:1 as people claim.
 

abufrejoval

Reputable
Jun 19, 2020
615
454
5,260
But that doesn't answer my question. Since Intel is so far behind, SURELY the brand-new shining R7 9700X will obliterate both the 13700K and the 14700K, right? There is no way it will have roughly the same MT performance as the 12700K, right?

How can your statement about Intel being far behind be true when AMD's latest CPUs compete with Intel's 3-4 year old ones? Enlighten me, please.
I have no responsibility to make you happy or to enlighten you.

AMD started chiplets years ago, only to be ridiculed by Intel, who then caught up and did something very similar, years late and so far without impressive results. Chiplets have allowed AMD to offer 16 P-core chips at desktop economy and thermal budgets, while delivering significant IPC gains generation over generation. They simply had the better fab, and the separation into CCDs and IODs allowed them to deliver more for less money. That's why I started replacing Xeons with Ryzens.

AMD started continuous clock adaptation in 25 MHz increments, with very short sampling intervals and an ample range of limit sensors, to enable maximum performance from each individual core of any given SoC without risking unsafe operation. AMD also made it a standard feature across the entire product line. Intel tried making this i9-exclusive, had people operate far beyond recommended and safe power levels, and is only now getting to a similar level of sophistication, again years later. They obliterated safe and sustained operations, not the competition. AMD's approach has allowed me to stay with affordable, quiet and safe air cooling, yet get the most performance out of it for several generations. Intel has failed to do better and lure me back so far.

Geekbench is an attempt to get meaningful and comparable results across a wide range of performance metrics within a very short amount of time.

It produces two numbers, which by no means can give a full picture of what modern SoCs can do and how they will perform across a broad range of real-life workloads.

One of the biggest shortcomings is that it measures each workload only for seconds, with cool-down intervals in between. That's great for measuring the absolute peak, especially on mobile systems, where it can help you estimate responsiveness.

But it offers little indication of how larger or even sustained workloads will behave, because you can't condense that into two numbers any more, when the main constraints are thermal and energy budgets and CMOS performance curves are far from linear.

If you believe that obliterating Geekbench scores by a few percent is sufficient to justify buying Intel, please do.

I'm in it for the money. I earn my living with my home lab and game only occasionally on the side. I've switched between Intel and AMD for more than 30 years now, and the most important incentive has always been performance for the money. I've measured my Xeons and Ryzens extensively and know that Zen has delivered far more performance for much less investment of money and power in the workstation bracket for multiple generations now. And even 5800U notebooks were already rather excellent in terms of efficiency, while Strix Point looks to be quite a lot better.

Against high-end EPYC servers, Intel just hasn't been able to compete for almost 10 years now.

E-cores were a smart move to compensate for P-cores that were too power-hungry for their performance. But if you can have P-cores on an E-core power and cost budget, that's still better. Again, AMD was much smarter to use C-cores to add extra cores in energy-constrained setups; Intel's big push to improve those formerly Atom-level E-cores to near-but-not-quite P-core levels confirms the wisdom of AMD's approach, even if they cannot match it yet.

But feel free to not believe me, ignore me and just buy Intel.
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,630
502
2,060
I have no responsibility to make you happy or to enlighten you.

Geekbench is an attempt to get meaningful and comparable results across a wide range of performance metrics within a very short amount of time.

Click to expand...
Who said anything about Geekbench? What are you freaking talking about?

I asked you something very specific: since Intel is so far behind, as you keep saying, surely the new R7 9700X will slap both the 13700K and the 14700K in performance metrics, right? Surely it won't barely match an i7 from 2021, right??