News Intel Tiger Lake Allegedly Beats AMD Ryzen 4000 In Single-Thread Workloads

Did I hear that they are back-porting the 10nm architecture to 14nm? Is Intel having a hard time with thermals or increased power consumption at smaller nodes?
Correct. Thermals and power consumption are apparently through the roof for 10nm.

Furthermore, yields are apparently so low on 10nm that you should expect this to be an entirely paper launch with limited to no availability.
 
  • Like
Reactions: gg83

bit_user

Titan
Ambassador
I'm pretty pissed that Intel is trying to argue that it can compete yet doesn't announce the TDP ratings.
This was a leak - not an Intel announcement. In fact, they haven't even announced the CPU at all.

What a load of ... ? So you want to compare a CPU (1165G7) with a 15W TDP
They haven't said that - you're inferring. Probably right, but still a guess.
 

bit_user

Titan
Ambassador
OK, so Intel should just make a single-core CPU, juice the maximum out of it and claim the best single-thread performance; Intel would be in the headlines again, beating AMD.
I realize you're not being totally serious, but it wouldn't help anything. When cores are idling, they use almost no power. This is the idea behind different turbo limits for different numbers of cores running. So, if you're just running a single-thread workload, then a multi-core chip should be able to reach about the same performance levels as a single-core chip.

Granted, there are some overheads for multi-core, such as interconnects, cache coherency, distributed L3, etc., but I think they're pretty small.
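To make that concrete, here is a minimal sketch with made-up turbo bins (they are not Intel's actual tables): the frequency limit is looked up from the number of active cores, which is why a single-thread workload on a multi-core chip still sees roughly the clocks a single-core chip would get.

```python
# Illustrative only: hypothetical per-core-count turbo bins, not real Intel data.
HYPOTHETICAL_TURBO_BINS_GHZ = {1: 4.7, 2: 4.6, 4: 4.3, 8: 4.2}

def turbo_limit_ghz(active_cores: int) -> float:
    """Return the turbo limit for the smallest bin that covers the active cores."""
    for cores in sorted(HYPOTHETICAL_TURBO_BINS_GHZ):
        if active_cores <= cores:
            return HYPOTHETICAL_TURBO_BINS_GHZ[cores]
    return min(HYPOTHETICAL_TURBO_BINS_GHZ.values())

print(turbo_limit_ghz(1))  # 4.7 -> a single busy core gets the top bin
print(turbo_limit_ghz(8))  # 4.2 -> an all-core load drops to the lowest bin
```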

Yeah. Make one giant 1-inch-by-1-inch core! Then they would hold the single-core record. Lol. I don't know if it works that way but it's funny.
Just for the sake of disagreeing with myself, I guess you could burn the extra die area on an enormous L3 cache.
 

PCWarrior

Distinguished
May 20, 2013
216
101
18,770
That depends. Is Intel rating the 1165G7 at 15W? If so, then that's base clock only, a misleading rating that Intel has been using for the past few generations in order to make their TDP numbers sound more reasonable than they really are.

But the specs are unconfirmed. So, how do you know the 1165G7 is a 15W part?
The TDP is given by the benchmark quoted in this very article (look here). Besides, there have been leaked slides (here), plus we know this is the successor of the 1065G7, which also has a TDP of 15W.

Unlike DIY motherboards, where vendors disable power limits by default, the vast majority of laptops stick to the limits. OEMs, and certainly the big three (Lenovo, Dell, HP), follow Intel's spec to the letter, so when it says 15W it means 15W sustained; when it turbos, it can go up to the PL2 limit (in laptops usually set 25% higher than TDP) for a period equal to τ (usually set to 8-28s).
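As a rough illustration of how those limits interact (example values only: PL1 = TDP = 15W, PL2 = 1.25 x TDP, τ = 28s; in practice PL1 is enforced as a running average over τ rather than a hard step):

```python
# Minimal sketch of the laptop power-limit behaviour described above.
# Example values only: PL1 = TDP = 15 W, PL2 = 1.25 x TDP, tau = 28 s.
TDP_W = 15.0               # PL1: the sustained limit
PL2_W = 1.25 * TDP_W       # short-term turbo limit, ~18.75 W
TAU_S = 28.0               # how long the turbo budget lasts under sustained load

def package_power_limit_w(seconds_under_load: float) -> float:
    """Power budget available at a given point into a sustained load."""
    return PL2_W if seconds_under_load < TAU_S else TDP_W

for t in (5, 27, 30, 300):
    print(f"t = {t:>3} s -> limit = {package_power_limit_w(t):.2f} W")
```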

What is actually misleading is to say that Intel's TDP is the power consumption at base clocks. Sure, it is technically true, but with what load? It is an Intel-defined load that flips virtually every transistor on the CPU. And it is defined not for any processor but for pretty much the worst CPU in the production line. So in reality, for virtually all workloads, including heavy ones like Blender and Handbrake, the CPU ends up running at a frequency higher than base while still being at or below TDP.

And yes, the 1165G7's TDP can be configured down to 12W and up to 28W. But the AMD CPU (4800H) that the guy above demanded for a comparison that is 'fair' in his view is a 45W part (nominal) with a configurable-down TDP of 35W and a configurable-up TDP of 54W.
 

bit_user

Titan
Ambassador
The TDP is given by the benchmark quoted in this very article (look here). Besides, there have been leaked slides (here), plus we know this is the successor of the 1065G7, which also has a TDP of 15W.
Very interesting. I wonder how they determine the TDP Down figure, and if it's reliable.

Actually, does that mean it's what the CPU is actually running at, or just what it's capable of being configured to run at?

What is actually misleading is to say that Intel's TDP is the power consumption at base clocks. Sure, it is technically true, but with what load? It is an Intel-defined load that flips virtually every transistor on the CPU. And it is defined not for any processor but for pretty much the worst CPU in the production line. So in reality, for virtually all workloads, including heavy ones like Blender and Handbrake, the CPU ends up running at a frequency higher than base while still being at or below TDP.
I'm pretty sure I've seen the opposite... Maybe it was with Prime95 or some such stress test, but I've definitely seen Intel CPUs burning far more than TDP, and I think the clocks were at base.
 

wownwow

Commendable
Aug 30, 2017
37
1
1,535
Got the point, but is the Core i7-1165G7 a 15W part? Otherwise the comparison is meaningless.

1.8GHz for AMD's 7nm part, but 2.8GHz for Intel's 14nm++, so Intel should just stay with 14nm++, no need for 10nm, 7nm, ...
 

bit_user

Titan
Ambassador
1.8GHz for AMD's 7nm part, but 2.8GHz for Intel's 14nm++, so Intel should just stay with 14nm++, no need for 10nm, 7nm, ...
The Intel part is on a 10nm-class node, not 14nm++.

And if AMD had gone with only 4 cores, they also could have used a higher base clock. Remember, the base clock is the minimum guaranteed clock for running all of the cores. So, in order to accommodate more cores, you need to lower the base.
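A toy power-budget calculation makes the point (the 3.5W-per-core figure and the cubic frequency scaling are illustrative assumptions, not measured values, and uncore/graphics power is ignored):

```python
# Toy arithmetic only: per-core wattage and f^3 scaling are assumptions.
def core_power_w(freq_ghz: float, watts_at_2p8_ghz: float = 3.5) -> float:
    """Crude dynamic-power model: P scales roughly with f^3 once voltage tracks f."""
    return watts_at_2p8_ghz * (freq_ghz / 2.8) ** 3

print(4 * core_power_w(2.8))  # ~14 W: four cores at a 2.8 GHz base fit a 15 W budget
print(8 * core_power_w(2.8))  # ~28 W: eight cores at the same base clock blow it
print(8 * core_power_w(1.8))  # ~7.4 W: dropping the base toward 1.8 GHz pulls it back in
```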
 

King_V

Illustrious
Ambassador
What is actually misleading is to say that Intel's TDP is the power consumption at base clocks. Sure, it is technically true, but with what load? It is an Intel-defined load that flips virtually every transistor on the CPU. And it is defined not for any processor but for pretty much the worst CPU in the production line. So in reality, for virtually all workloads, including heavy ones like Blender and Handbrake, the CPU ends up running at a frequency higher than base while still being at or below TDP.

So, what you're basically saying is that Intel's given TDP is meaningless? I mean, you're saying my statement is misleading, while simultaneously saying it's true.

Then you seem to say, without providing any evidence, that it's based on the worst CPU in the production line, AND that it comes from Intel holding the processor frequency to the base frequency while fully loading it at that frequency. This isn't anything close to an endorsement/confirmation of the validity of Intel's TDP claim.

But, again, these are the claims in question:
  • It's from max load at base frequency
  • It's from the worst CPU in the production line
  • The CPU in reality runs at or below TDP while running at a frequency higher than base*


*I am assuming you mean on all cores here. It would be foolish to try to claim this for a single core, then compare it to AMD's more realistic TDP.
 
Actually, you're incorrect.
Zen 2's IPC is higher than that of Intel's current CPUs on the market (but maybe not Tiger Lake's).
Intel only has a small advantage in games (roughly 10%), and that's mainly because they managed to get it to run at 5GHz and just over that out of the box, and because of a mesh design where there are no latencies between cores (Zen 2 greatly reduced those latencies... and Zen 3 will effectively eliminate them).

AMD's Zen 2 (thanks to its IPC) gets within 10% of Intel in games and SMASHES it in multicore performance while drawing LESS power (oh, and Zen 2 also has far fewer security flaws).
Oh and, in productivity software, Intel's lead in single core is much lower than 10%... that alone shows you just how much better AMD's Zen 2 is.

As for the above article... looking at the frequencies, you are missing a few things:
Intel's Tiger Lake is clocked about 55% higher on single core... and only beats AMD by roughly 24% in that metric (which is quite frankly pathetic) while operating at unknown TDP levels.

The only reason Intel can clock their Tiger Lake part higher than AMD is because they have 50% fewer cores.
Oh and... AMD's 4800U is an 8c/16t part usually running at a 15W TDP.
The Tiger Lake was configured to an unknown TDP - but previous leaks indicated 28W... (the 4800U can also go up to that cTDP, and the Lenovo Yoga Slim 7 will be using the 4800U configured to a 27W cTDP, which in testing proved to be 30% faster than the 15W variant).

So, yeah... Intel still cannot put an 8c/16t CPU into a low-power envelope, it seems (AMD can).
Oh and Intel's graphics are still pathetic compared to enhanced Vega in Renoir (Zen 2).

Oh and Tiger Lake's IPC is an unknown as of yet. Intel boasted some rather high claims on that front, but the preponderance of leaks thus far suggests they didn't get anywhere near what they were saying.
I was going to spell out how wrong you actually were, but since somebody else already did it, I'll just leave it at that. Maybe get your facts straight next time.
 

thGe17

Reputable
Sep 2, 2019
70
23
4,535
So with the Single Core Performance Metric:
AMD Ryzen 7 4800U @ Base/Boost = 1.8/4.2 GHz = Single Threaded [ . . . ]

Why or what in particular makes Tiger Lake U faster is mostly irrelevant ;-), because the package is simply faster (single-threaded).
Additionally, that's no surprise, because Ice Lake U was already as fast single-threaded as a 4700U, and the i7-1065G7 has a 200 MHz lower boost clock than the Ryzen.

Some corrections:
  • The boost clock is the only thing that is relevant here, not the base clock, because with such a small workload (limited to a single thread), both CPUs should run the tested core at its maximum frequency; therefore the effective difference is only 500 MHz, not 1 GHz.
  • Tiger Lake has already been spotted multiple times at 4.7 GHz, so it is not speculative.
  • Intel has used the same TDP ranges for the past few years, so there will be 15W and 25W models, additionally with a configurable cTDP. But here the TDP is also irrelevant, because a single-threaded workload will not max out these CPUs. The effective limit is simply the boost frequency of the single core.
  • Besides minor architectural advancements (compared to Ice Lake/Sunny Cove), Tiger Lake (aka the Willow Cove microarchitecture) increases the per-core L2 and the L3 cache.

Therefore, as you have already calculated: +24.4% more ST performance according to PassMark, but only +11.9% more clock speed; and as already noted, TDP is irrelevant in ST, so the difference simply comes from architectural features. Quite easy.
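Spelled out with the same figures (4.7 vs. 4.2 GHz boost, +24.4% PassMark ST score), the split between clock speed and per-clock gains works out roughly like this:

```python
# Splitting the quoted single-thread delta into clock and per-clock components.
clock_ratio = 4.7 / 4.2        # ~1.119 -> the +11.9 % frequency advantage
score_ratio = 1.244            # the +24.4 % PassMark single-thread advantage
per_clock_gain = score_ratio / clock_ratio - 1
print(f"Implied per-clock (architectural) gain: {per_clock_gain:.1%}")  # ~11.2 %
```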

Multithreaded performance is another question, because there the system cooling and the (effective c)TDP are relevant for a proper comparison.
 

PCWarrior

Distinguished
May 20, 2013
216
101
18,770
I'm pretty sure I've seen the opposite... Maybe it was with Prime95 or some such stress test, but I've definitely seen Intel CPUs burning far more than TDP, and I think the clocks were at base.
What you might be remembering is Linus' review here (watch from 5:12) of the 9980XE running AIDA64 with AVX-512. However, as you can see, although the CPU was running at 2.8GHz (which is lower than the 3GHz base), it was actually only consuming 132W (which is also lower than the TDP of 165W). So the CPU wasn't downclocking because of power or thermal limits, but probably because there was something wrong with the early motherboard BIOS and its boosting/power management algorithm (as was the case with the 2018 MacBook Pro; watch here), or because the nature of the AIDA AVX-512 workload didn't really need higher clocks.

So, what you're basically saying is that Intel's given TDP is meaningless? I mean, you're saying my statement is misleading, while simultaneously saying it's true.
Then you seem to say, without providing any evidence, that it's based on the worst CPU in the production line, AND that it comes from Intel holding the processor frequency to the base frequency while fully loading it at that frequency. This isn't anything close to an endorsement/confirmation of the validity of Intel's TDP claim.
What I am saying is that when Intel tests for power consumption and TDP, they test the CPUs operating at base speeds while running Intel's defined load, which is akin to a power virus. The vast majority of CPUs will need less power than the TDP to pass this test. Only the worst CPUs in the production line will draw as much power as the TDP, and those that exceed it simply don't pass the test. Realistically, as no real workload comes close to such a power virus, the power consumption in real workloads will be lower than TDP when the CPU runs at base clocks. The CPU will therefore need to turbo boost to a higher frequency in order to draw as much power as the TDP.

Here is an article from Gamers Nexus. Below is the relevant graph.
[Image: Gamers Nexus chart of the i9-9900K's frequency and power over a 23-minute Blender render with the 95W limit enforced]


It shows the 9900K with the 95W TDP limit enforced, running a full Blender AVX workload for 23 minutes. The blue line is frequency (on the left vertical axis) and the red line is power (on the right vertical axis). As you can see, at the beginning the CPU boosts to its all-core turbo of 4.7GHz, drawing around 140W. It does so for a period equal to τ=28s, but then the power limit of 95W kicks in, forcing the CPU to downclock. But look at the frequency it downclocks to: it is not the base of 3.6GHz but 4.2GHz. And power consumption is 94.9W, which not only is lower than 95W by itself, but is measured at the EPS12V cable, so it includes not just the CPU power but also the VRM losses (thus CPU power = 94.9W minus VRM losses).

The CPU in reality runs at or below TDP while running at a frequency higher than base. I am assuming you mean on all cores here. It would be foolish to try to claim this for a single core.
TDP is defined with all cores active. All-core power consumption is generally higher than single-core power consumption. See the chart below for the 9900K (although do note that it was taken with a fixed voltage, so lower clocks don't benefit from lower voltages, which would somewhat reduce the higher-core power consumption). It should however be noted that with such small TDPs as we have for the mobile parts, and with such a huge difference between base and maximum clocks, CPUs do get pushed close to TDP even for single-threaded workloads.

In the chart below, the 9900K needs 35W with only 1 core (2 threads) active, operating at 5GHz and running Blender AVX. The Tiger Lake part boosts to 4.7GHz (so a bit lower) and also uses the 10nm process (so roughly 0.4x the power consumption, but with probably 1.5x the transistors per core we get around 0.55x the power), thus around 20W when it boosts to 4.7GHz on 1 core/2 threads and 15W on 1 core/1 thread (Hyper-Threading adds around 1/3 more power).
[Image: per-core power consumption chart for the i9-9900K at fixed voltage]
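For what it's worth, that back-of-the-envelope estimate works out as follows (every scaling factor is an assumption stated in the post, not measured Tiger Lake data):

```python
# Reproducing the rough estimate above; all factors are the post's assumptions.
power_9900k_1c2t_w = 35.0    # 9900K, 1 core / 2 threads at 5 GHz, Blender AVX
process_scaling = 0.4        # assumed 14nm -> 10nm power scaling
transistor_factor = 1.5      # assumed extra transistors per Willow Cove core
net_scaling = process_scaling * transistor_factor   # 0.6; the post rounds this to ~0.55

est_1c2t_w = power_9900k_1c2t_w * net_scaling       # ~21 W at 4.7 GHz on 1c/2t (~20 W with 0.55)
est_1c1t_w = est_1c2t_w / (1 + 1 / 3)               # Hyper-Threading assumed to add ~1/3 power
print(f"~{est_1c2t_w:.0f} W (1c/2t), ~{est_1c1t_w:.0f} W (1c/1t)")
```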
 

King_V

Illustrious
Ambassador
Wait, no, that drop-off and increase in clock speed is after it completed; it even says so beneath it:
For reference, here’s what the frequency functionality looks like when operating under settings with MCE disabled via XMP II and the “No” prompt. This plot takes place over a 23-minute Blender render, so this is a real-world use case of the i9-9900K. Notice that there is a sharp fall-off after about 30 seconds, where average all-core frequency plummets from 4.7GHz to 4.2GHz. Notice that the power consumption remains at almost exactly 95W for the entire test, pegged to 94.9W and relatively flat. This corresponds to the right axis, with frequency on the left. This feels like an RTX flashback, where we’re completely power-limited. The difference is that this is under the specification, despite the CPU clearly being capable of more. You’ll see that the frequency picks up again when the workload ends, leaving us with unchallenged idle frequencies.

The article itself even states how there's better performance because of the violations of Intel's rules, and how performance is punished for following Intel's spec.

So, doesn't this mean that if Intel's spec (which Intel doesn't seem too enthusiastic about enforcing) is followed, then their performance, and their benchmark results, will suffer?

If their spec is followed, will they be able to hold on to the so-called (and quite overblown) "gaming performance" crown?

And that's with desktop systems. I would agree that laptops should be following things strictly, but, again, are they for this system? Or is single-thread the only thing they can do because keeping the rest of the cores minimally utilized is the only way they can take advantage of their high clocks and keep thermals at sane levels?


EDIT: Also, did you ever wonder why, around the 8th gen or so, Intel suddenly changed its methodology for determining TDP? If they applied the new standard to their older chips, would those have lower values than were advertised back in their day (Kaby Lake and earlier)? If they applied the same standard to Ryzen, would Ryzen's TDPs be lower as well?
 
And that graphic is not correct either. It doesn't even show full boost accurately, so those TDP readings are questionable at best, completely false at worst.

I'll trust Igor's findings LONG before anything I might see at Hardware Unboxed. Hell, even the 8700K used more power than that at full boost.

[Image: CPU power consumption chart]
 

spongiemaster

Honorable
Dec 12, 2019
2,364
1,350
13,560
And that graphic is not correct either. It doesn't even show full boost accurately, so those TDP readings are questionable at best, completely false at worst.

I'll trust Igor's findings LONG before anything I might see at Hardware Unboxed. Hell, even the 8700K used more power than that at full boost.

[Image: CPU power consumption chart]
I don't think you're following what PCWarrior is saying. The vast majority of enthusiast-targeted motherboards, and probably 100% of the boards that end up as test benches for new CPUs on websites, do not adhere properly to the Intel TDP guidelines, so you end up with jacked-up power measurements like the one you linked. A board that properly adheres to the standard will look like the first graph that PCWarrior posted from Gamers Nexus. The all-core boost of a 9900K is 4.7GHz, so if you were talking about the Gamers Nexus graphic, it shows the boost behaving exactly as it should: 28 seconds of 4.7GHz, then it drops to whatever clock speed it can to stay under 95W.
 

MasterMadBones

Distinguished
[...] AMD's more realistic TDP.
AMD's TDP in mobile works similarly to Intel's. There's a maximum power limit for a limited amount of time, called PPT-Short, and another one for a longer period, called PPT-Long. PPT-Long, unlike Intel's PL1, does not have to be equal to the TDP. In addition to that, mobile APUs have a feature called Skin Temperature Aware Power Management, or STAPM, which calculates the average power of all components in a device over a longer period of time. Once it reaches a limit that is configured by the manufacturer based on the chassis design, the APU will clock down to whatever clock speed it needs to stay under the STAPM limit. This means that a 15W APU with a 25W PPT-Long and a 15W STAPM limit will consume more power than an Intel CPU for a longer time, but eventually it will start to consume less power than its TDP, in my experience after 3-5 minutes or so.
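A very rough model of the STAPM part (the 300-second window and the clamp-to-the-limit behaviour are simplifying assumptions; the real implementation uses manufacturer-configured time constants and finer-grained frequency control):

```python
# Crude illustration: power is checked against a long rolling average, so a
# device can burst above its STAPM limit for minutes before pulling back.
from collections import deque

STAPM_LIMIT_W = 15.0
WINDOW_S = 300                      # assumed averaging window, one sample per second
history = deque([0.0] * WINDOW_S, maxlen=WINDOW_S)   # device starts from idle

power_w = 25.0                      # sustained draw at the 25 W PPT-Long (example)
throttle_at = None
for second in range(600):
    if sum(history) / WINDOW_S > STAPM_LIMIT_W:
        if throttle_at is None:
            throttle_at = second
        power_w = STAPM_LIMIT_W     # average exceeded: pull back to the limit
    history.append(power_w)

print(f"STAPM pulled power back after ~{throttle_at} s")  # ~181 s with these numbers
```

With these numbers the pull-back happens after roughly three minutes of sustained 25W draw, which is in the same ballpark as the 3-5 minutes described above.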

AMD's TDP on desktop is utterly meaningless by the way. PPT is what matters. At least it's clearly defined for each TDP level.
 
And all of this, this entire discussion almost, is EXACTLY why there needs to be a NEW standard for the maximum potential power draw, under ANY condition, regardless of test methodology, for a given CPU measured at full all-core steady-state load. Call it whatever you want; there needs to be something that represents the highest possible power draw of every CPU across the board for unquestionable direct comparison.

The reason you will never see that though, is because these companies allow their marketing and mouthpiece employees to run things rather than giving their project managers and engineers the ability to designate and adhere to common sense, universal standard methodologies and nomenclature that can't be twisted to make everything look like innovation.

It's fine to have these other profiles, designs and methods, like STAPM and PPT, but they can't and shouldn't take the place of a one-standard measurement system that is unaffected by conditional behaviors. If the most the CPU can EVER pull safely is 125W under the maximum possible standard steady-state load on all cores, then that is what it should say, and it should not incorporate any conditions that make it a subjective measurement. There SHOULD be an unchangeable standard that resists marketing interpretations.
 
  • Like
Reactions: King_V

MasterMadBones

Distinguished
Except that their draw in real-world use is much closer to the TDP stated than what Intel does. So, by your definition, Intel's TDP is worse than "utterly meaningless."
AMD will use up to PPT indefinitely, whereas Intel defines a time limit (Tau) for PL2, after which the CPU will drop down to exactly its TDP.

The main reason for the large discrepancy between TDP and actual power draw is not Intel's specification, but what motherboard manufacturers do with it. Many motherboards have unlimited power draw out of the box. Only Comet Lake has some CPUs with a fairly high PL2 and a long Tau.

If you want numbers consistent with Intel's spec, you should look at Gamers Nexus reviews. They measure power consumption under short (CB R20) and sustained (Blender) loads.