News: Intel Tiger Lake-U CPU Speeds Past AMD R7 4800U in Graphics Benchmark

InvalidError

Titan
Moderator
Enjoy it; even if you are an Intel fan, this is happening only because AMD is pressuring Intel.
A good chunk of this is only happening because Intel has had nothing but problems getting 10nm up to speed, and it still isn't up to par with where Intel wanted it to be three years ago.

Had everything gone according to Intel's plans when Ice Lake first hit roadmaps around the Broadwell era, Zen would have launched against Ice Lake instead of third-gen Skylake, and AMD would have been crushed, having made almost no net headway against Intel.

Intel deserves some credit for screwing up 10nm so AMD could survive long enough to make itself relevant again :)
 

Deicidium369

Permanently banned.
BANNED
Mar 4, 2020
390
61
290
A good chunk of this is only happening because Intel has had nothing but problems getting 10nm up to speed, and it still isn't up to par with where Intel wanted it to be three years ago.

Had everything gone according to Intel's plans when Ice Lake first hit roadmaps around the Broadwell era, Zen would have launched against Ice Lake instead of third-gen Skylake, and AMD would have been crushed, having made almost no net headway against Intel.

Intel deserves some credit for screwing up 10nm so AMD could survive long enough to make itself relevant again :)
Problems seem to be well behind them, and the window for AMD to make significant market inroads is diminishing. Intel needs to keep AMD around - otherwise the charge of monopoly starts rearing its head. It's starting to look like the time when Core/Conroe dropped, which is just short of 14 years ago.

Looks like the gains from TGL over ICL are significant. Really looking forward to Rocket Lake S - hoping for this year, but I will hold off a little depending on when the Ampere-based GeForce "3080Ti" is released - and on what a Rocket Lake S refresh will look like.

I am waiting for the 25/28W TGL that will go into the NUC. I will consider it a good year if I can replace all my various generations of NUCs with a Tiger Lake NUC11. 2021 will likely see wide availability of ICL SP - I will be looking at initially replacing my engineering workstations (dual-socket 3647 - single 8180) with a single-socket ICL SP. Servers can most likely wait until the end of 2021.
 
Problems seem to be well behind them, and the window for AMD to make significant market inroads is diminishing. Intel needs to keep AMD around - otherwise the charge of monopoly starts rearing its head. It's starting to look like the time when Core/Conroe dropped, which is just short of 14 years ago.

Looks like the gains from TGL over ICL are significant. Really looking forward to Rocket Lake S - hoping for this year, but I will hold off a little depending on when the Ampere-based GeForce "3080Ti" is released - and on what a Rocket Lake S refresh will look like.

I am waiting for the 25/28W TGL that will go into the NUC. I will consider it a good year if I can replace all my various generations of NUCs with a Tiger Lake NUC11. 2021 will likely see wide availability of ICL SP - I will be looking at initially replacing my engineering workstations (dual-socket 3647 - single 8180) with a single-socket ICL SP. Servers can most likely wait until the end of 2021.
I've seen nothing concrete (from Intel -- not fanboy wish lists) to suggest Tiger Lake will be a Conroe repeat. Conroe was a solid 25% [EDIT: more like 40-50%!] boost in performance over the best NetBurst architecture, while using something like half (or maybe 2/3) the power. It was an absolutely massive change. Nehalem was also a fairly sizeable jump -- mostly because it integrated the memory controller.

Tiger Lake and Willow Cove should be better than Ice Lake, but that's no big deal -- it would be horrible if they weren't. But so far, we're hearing about a larger L3 cache, PCIe 4.0 support, and some new instructions (that will almost never be used outside of certain HPC-type workloads or specially optimized benchmarks). The biggest change appears to be in graphics, which is mostly going to be a jump from 64 EUs to 96 EUs.

So based on that information, Tiger Lake will be more like the old Clarkdale -> Sandy Bridge or Arrandale -> Sandy Bridge transitions, where Intel integrated graphics went from garbage HD Graphics to slightly less garbage HD Graphics 3000. Or perhaps it will be like Haswell -> Broadwell, where the CPU was more power efficient and Intel went big on integrated graphics with Iris Pro.

Let me go on the record here and now and state that in general performance, if Intel can do better than a 15% IPC improvement, I will be impressed (and more than a bit surprised). [EDIT: I will also be very surprised if overall CPU performance for the same TDP and core count is more than 15% faster than Ice Lake.] I've seen claims of 30% better IPC, which I think is fantasy land. Guess we'll find out "this summer" if I'm right or wrong.
 

InvalidError

Titan
Moderator
Problems seem to be well behind them
10nm still isn't performing up to expectations, and Intel has given up on developing 10nm beyond higher-margin applications, so I wouldn't count that as problems being "behind" it. Rocket Lake is still 14nm++++, and unless Intel pulled an architectural miracle, its added Willow Cove complexity is going to cost a few hundred MHz in achievable and sustainable clocks, which will cancel out a good chunk of its IPC gains - just like how Sunny Cove's complexity cost Ice Lake 300-400MHz vs Coffee Lake on Intel's troublesome 10nm.
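To put rough numbers on that tradeoff, treat single-thread performance as IPC x frequency. A quick sanity check in C (the 18% IPC uplift and both clock speeds below are illustrative assumptions, not measurements):

Code:
/* Rough model: single-thread perf ~ IPC x frequency.
   All numbers below are illustrative assumptions, not measurements. */
#include <stdio.h>

int main(void) {
    double ipc_gain = 1.18; /* assume ~18% IPC uplift from the wider core */
    double clk_old  = 4.9;  /* GHz, hypothetical 14nm-class boost clock */
    double clk_new  = 4.5;  /* GHz, ~400MHz lower on the newer core */

    double net = ipc_gain * (clk_new / clk_old);
    printf("Net single-thread change: %+.1f%%\n", (net - 1.0) * 100.0);
    /* Prints about +8.4%: 400MHz eats roughly half of an 18% IPC gain. */
    return 0;
}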
 

DavidC1

Distinguished
May 18, 2006
493
67
18,860
I've seen nothing concrete (from Intel -- not fanboy wish lists) to suggest Tiger Lake will be a Conroe repeat. Conroe was a solid 25% boost in performance over the best NetBurst architecture, while using something like half (or maybe 2/3) the power. It was an absolutely massive change. Nehalem was also a fairly sizeable jump -- mostly because it integrated the memory controller.

Conroe was 35-45% faster, not 25%. It was 90% faster than Presler per clock.
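The gap between per-clock and outright speed is just clock speed. A quick sanity check in C using the 90% figure above (the clock speeds are from memory and approximate):

Code:
/* How a 90% per-clock advantage shrinks to ~40% outright once clock
   speeds differ. Clock figures are from memory and approximate. */
#include <stdio.h>

int main(void) {
    double per_clock   = 1.90; /* Conroe vs Presler, per the figure above */
    double clk_conroe  = 2.66; /* GHz, Core 2 Duo E6700 */
    double clk_presler = 3.60; /* GHz, Pentium D 960 */

    double outright = per_clock * (clk_conroe / clk_presler);
    printf("Outright speedup: %+.0f%%\n", (outright - 1.0) * 100.0);
    /* Prints about +40%, right in the 35-45% range. */
    return 0;
}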

Nehalem was underwhelming for client because the gains in single thread weren't that much, other than from the Turbo. In certain cases it actually regressed.

Let me go on the record here and now and state that in general performance, if Intel can do better than a 15% IPC improvement, I will be impressed (and a bit surprised). I've seen claims of 30% better IPC, which I think is fantasy land. Guess we'll find out "this summer" if I'm right or wrong.

They won't get 15% perf/clock (IPC is actually a misleading term to lots of people). Something like 5-7% is in line.

The big thing about Tiger Lake on the CPU side is that 10nm+ is a much improved version of the process, and engineers have done work on the layout to improve frequency.

Leaks show a 4.7GHz device, which is a 20% higher frequency.
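Put those two figures together and the implied generational gain comes mostly from clocks, not uarch. A sketch in C (the ~3.9GHz Ice Lake-U boost baseline is my inference from the 20% figure, not a quoted spec):

Code:
/* Combined effect of a modest per-clock gain and a big clock bump.
   Uses the 5-7% and 20% figures from the post above. */
#include <stdio.h>

int main(void) {
    double clk = 1.20;       /* leaked 4.7GHz vs a ~3.9GHz Ice Lake-U boost */
    double lo  = 1.05 * clk; /* 5% per-clock gain */
    double hi  = 1.07 * clk; /* 7% per-clock gain */

    printf("Overall gain: +%.0f%% to +%.0f%%\n",
           (lo - 1.0) * 100.0, (hi - 1.0) * 100.0);
    /* Prints +26% to +28% -- clocks, not uarch, do the heavy lifting. */
    return 0;
}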
 
Conroe was 35-45% faster, not 25%. It was 90% faster than Presler per clock.
I suppose I'm thinking more along the lines of the Core 2 Duo E6400 vs. the top Pentium D, because that's what I bought back in the C2D days. (I couldn't justify buying one of the extreme edition chips.) I do remember it was a massive jump, but I didn't remember it being quite so high. Still, looking back at AnandTech's review, it was indeed closer to 40-50% faster. It was basically like AMD's FX to Zen transition. Maybe I was thinking more of the Core 2 Quad Q6600 to Core i7-920 change, which is something else I did. I remember that being a relatively large 25% gain on average (give or take -- some tests saw much higher gains, others smaller).

Anyway, compared to recent architectural upgrades (SNB->IVB, IVB->HSW, HSW->SKL), the Conroe and Nehalem launches were way more impactful to your typical user. These days, you mostly see gains in highly threaded apps.
 

bit_user

Polypheme
Ambassador
I don't think so. As the article points out, Renoir still uses Vega. Hopefully, its successor will use RDNA or even RDNA2 (unlikely, if I'm honest).

Anyway, all this does is take away integrated graphics as a differentiator for AMD's APUs, leaving the two at near-parity. However, laptop buyers who really care about graphics performance are going to get a model with a dGPU regardless.

So, it's not good news for AMD, but I wouldn't say it's worrisome - especially as it doesn't even concern their latest tech.
 

bit_user

Polypheme
Ambassador
Intel deserves some credit for screwing up 10nm so AMD could survive long enough to make itself relevant again :)
Ever heard the saying, "Fortune favors the prepared mind"? It's kinda like that.

Yeah, Intel messed up, big time. But stuff happens. And in business, a lot comes down to whether and how well you can capitalize on competitors' missteps and weaknesses. If AMD hadn't turned their ship around and really gotten into the game with a decent uArch, even Intel's recent troubles wouldn't have saved them.

AMD hasn't been without its missteps, either. Let's not forget their (predictable) overshoot of the mining boom, for instance.

The real challenge will be for AMD to keep its momentum, and build on its good fortunes of late.
 

bit_user

Polypheme
Ambassador
Problems seem to be well behind them
If that were true, they wouldn't have just launched another 14 nm desktop generation.

2021 will likely see wide availability of ICL SP - I will be looking at initially replacing my engineering workstations (dual-socket 3647 - single 8180) with a single-socket ICL SP. Servers can most likely wait until the end of 2021.
Is that really worthwhile? What do you hope to gain - core count or single-thread perf? Based on what we've seen so far, I'm skeptical on the single-thread perf (higher IPC, but lower clocks), and I think core count may be down as well. And if you really needed more cores, why not just populate your second socket?

I'm used to a typical business upgrade cycle of 3-5 years, but I don't know how CPU-bound your workstation users are. Single-generation upgrades are rarely worthwhile.

If you really needed lots of multi-threaded performance, your best bet would be the Threadripper 3990X (or its successor). I know you said you evaluated previous-gen Threadrippers, but the current gen is a completely different beast.
 

mcgge1360

Reputable
Oct 3, 2017
116
3
4,685
I assume neither of them was running on battery when they got this score?
Why does this matter in a graphics benchmark? If someone is gaming intensely on it, it's going to be plugged in. And if you're trying to bring power draw into these comparisons, it's already accounted for: the cooling system and the configurable TDP are what limit these processors.
 

mcgge1360

Reputable
Oct 3, 2017
116
3
4,685
If that were true, they wouldn't have just launched another 14 nm desktop generation.
This is actually wrong.

They did that because 14nm++ performs better than 10nm. In desktops, power consumption isn't nearly as big of a problem as it is in laptops, which is why they use 10nm in laptop SKUs. If Intel used 10nm in desktops right now, you probably wouldn't be seeing an improvement in performance at all, only lower power draw. And nobody would buy a new processor JUST for that reason.
 

escksu

Reputable
BANNED
Aug 8, 2019
878
354
5,260
Hmm..... the problem is that those are laptop processors, so how they are tested plays a massive role. The 4800U has a nominal 15W TDP, but that can be increased to 25W or even reduced to 10W. From my experience with Intel NUCs, their U-series parts can do the same, and can even go beyond 25W for brief periods of time.

These low-power processors are limited by power rather than by raw performance.
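A toy model makes the point. Dynamic power scales roughly with frequency times voltage squared, and voltage rises with frequency, so power grows much faster than clock speed; under the common rule of thumb that power ~ f^3 (an assumption, not vendor data), sustained clocks scale with the cube root of the power cap:

Code:
/* Toy model: if power ~ f^3 (voltage scales with frequency), sustained
   clocks under a power cap scale as the cube root of the cap.
   A rough rule of thumb, not vendor data. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double caps[] = { 10.0, 15.0, 25.0 }; /* cTDP-down / nominal / cTDP-up */
    double nominal = 15.0;                /* watts, the 15W reference point */

    for (int i = 0; i < 3; i++) {
        double rel = cbrt(caps[i] / nominal); /* relative sustained clock */
        printf("%4.0fW cap -> ~%3.0f%% of nominal sustained clocks\n",
               caps[i], rel * 100.0);
    }
    /* ~87% at 10W, ~119% at 25W: a 2.5x power range buys only ~36% more
       sustained clock, which is why test configuration dominates. */
    return 0;
}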
 
This is actually wrong.

They did that because 14nm++ performs better than 10nm. In desktops, power consumption isn't nearly as big of a problem as it is in laptops, which is why they use 10nm in laptop SKUs. If Intel used 10nm in desktops right now, you probably wouldn't be seeing an improvement in performance at all, only lower power draw. And nobody would buy a new processor JUST for that reason.
If your 10nm process is worse than your 14nm process, then objectively that is a BIG problem. We're not talking about TSMC 10nm LP here; this is Intel's 10nm tech, intended for CPUs that are supposed to be good. So yeah, if Intel can't make 10nm work better than 14nm++, then the 10nm problems are still present. Of course, Ice Lake Xeon chips are in the works, so presumably there's at least some benefit to be had (density?).
 

bit_user

Polypheme
Ambassador
They did that because 14nm++ performs better than 10nm.
As Jarred said, this is part of what we mean when we talk about Intel's problems. Intel didn't intend this to be the case; it's just what they managed to salvage from their 10 nm debacle.

In desktops, power consumption isn't nearly as big of a problem as it is in laptops, which is why they use 10nm in laptop SKUs.
Cool story, but it completely misses the fact that Comet Lake (yes, 14 nm) first launched in laptops, and it did so after 10 nm Ice Lake laptop chips had been on the market for a decent amount of time.

If Intel used 10nm in desktops right now, you probably wouldn't be seeing an improvement in performance at all, only lower power draw.
AMD saw both benefits when they moved to 7 nm (which is probably more like 10 nm, if we're being honest). You don't consider it a problem that Intel couldn't?

And even if you're not concerned about desktop power consumption, do you not agree that it's an issue that Intel's upper-end Comet Lake CPUs require expensive liquid cooling just to max out their stock settings? That adds cost and complexity, which steers some customers down-market or to AMD solutions.
 

bit_user

Polypheme
Ambassador
Conroe was 35-45% faster, not 25%. It was 90% faster than Presler per clock.
If you're not talking per-clock and you aren't specifying benchmarks or specific model numbers, then specific numbers become rather meaningless.

Even per-clock performance comparisons are highly benchmark-specific, since any SSE-heavy workload got a huge boost from the doubling of the internal vector ALUs from 64 to 128 bits. So, an SSE-centric benchmark could easily show near-2x IPC gains. But that wasn't all workloads.
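To illustrate what changed: one 128-bit SSE instruction operates on four packed floats, and Conroe could execute such an op in one shot where its predecessors cracked it into two 64-bit halves. A minimal sketch (an illustration of SSE width, not a benchmark):

Code:
/* One 128-bit SSE instruction operates on four packed floats. Conroe
   executed each such op in one shot; earlier Intel cores cracked it
   into two 64-bit halves, at half the throughput. */
#include <stdio.h>
#include <xmmintrin.h> /* SSE intrinsics */

int main(void) {
    float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float b[4] = { 10.0f, 20.0f, 30.0f, 40.0f };
    float out[4];

    __m128 va = _mm_loadu_ps(a);            /* load 4 floats at once */
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb)); /* 4 adds, 1 instruction */

    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}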

Unless you really want to dredge up ancient history, let's just all agree that Conroe was fast, and a real turning point for Intel.

Nehalem was underwhelming for client because the gains in single thread weren't that much, other than from the Turbo. In certain cases it actually regressed.
That's not how I remember it. Again, somewhat pointless to litigate.
 

DavidC1

Distinguished
May 18, 2006
493
67
18,860
If you're not talking per-clock and you aren't specifying benchmarks or specific model numbers, then specific numbers become rather meaningless.

Even per-clock performance comparisons are highly benchmark-specific, since any SSE-heavy workload got a huge boost from the doubling of the internal vector ALUs from 64 to 128 bits. So, an SSE-centric benchmark could easily show near-2x IPC gains. But that wasn't all workloads.

Only a handful had to do with SSE; most couldn't care less. Go back and look at the benchmarks. The gains are pretty uniform. Actually, the Anandtech bench shows smaller gains for encoding benchmarks than it does in something like Sysmark 2014, which is general-purpose.

There was a really good site back then that did all the comparisons. It came out to 90% per clock on average over a suite of 30-something applications. Of course, the Pentium D clocked higher and had HT, hence the advantage was smaller.

That's not how I remember it. Again, somewhat pointless to litigate.

It was fine if you were after productivity or something, but back then games barely took advantage of dual cores. Most usages weren't that multi-threaded either.

When I look for uarch advantages, I do everything to isolate it so it's truly a uarch test. One site turned HT and Turbo off, and the average was only 5-10% in ST. Very few workloads gained 15%+, probably due to the increased memory bandwidth. The RealWorldTech guys did testing and even found regressions for ST workloads.

The surprise for Nehalem in client was that the integrated memory controller brought only small gains. For the Athlon 64 the gains were massive - something like over 2/3rds of its 30% perf/clock gain was due to the IMC.

It's probably because the Core uarch hid memory latencies so well that there wasn't much to gain by moving to the IMC. Indeed, the memory latency tests showed just that - noticeable, but not big, gains.

That was the disappointment. A lot of us took the figure from the Athlon 64 and thought we'd immediately get 20-30% gains in ST. No, we got a fraction of that. We COULD get that, but only with disclaimers and fine print.

In servers the workloads are much larger, so they're harder to fool and they hit the memory subsystem; hence it was fantastic there.

Conroe was for client, Nehalem was for servers.

Another fantastic client chip was Sandy Bridge: 15-20% faster per clock (uarch), lower power, increased clocks, a vastly improved Turbo (especially in laptops), and it overclocked super well. The much improved graphics, finally usable, were icing on the cake. And while Nehalem regressed on battery life, Sandy Bridge more than made up for it, and even made high-end laptops look efficient.
 
