News Intel Tiger Lake-U CPU Speeds Past AMD R7 4800U in Graphics Benchmark

When I look for uarch advantages, I do everything to isolate it so it's truly a uarch test. One site turned HT and Turbo off and the average gain was only 5-10% in ST. Very few tests gained 15%+, probably due to the increased memory bandwidth. The Realworldtech guys did testing and even found regressions for ST workloads.
Regarding this, I want to point out that it's sort of impossible to truly focus in on the microarchitecture of a CPU if you enable/disable features or change clocks -- all of the stuff like Turbo Boost, Hyper-Threading, AVX, SSE, and even clock speed is part of the architecture. I get what people are trying to do when they 'normalize' clocks so that everything runs at the same speed (e.g., 3.0GHz), but that often penalizes architectures that are specifically designed to scale to higher clocks.

It's fine to look at for curiosity's sake, but it's not the main thing. The main thing ultimately is performance and efficiency in the workloads that someone actually plans to use -- which of course varies by individual. It's very easy to get lost in the weeds and write thousands of words on what does/doesn't happen, when in practice most people run at the default settings -- and that's what they should do, ideally. If you're overclocking, or trying to set a record for whatever benchmark, that's all fine. But trying to compare NetBurst at 2.4GHz to Conroe at 2.4GHz to Nehalem at 2.4GHz very much muddies the waters if you're not careful.
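To put rough numbers on the clock-normalization point, here's a quick back-of-the-envelope sketch. The IPC and clock figures are invented purely for illustration -- think of a deep-pipeline, NetBurst-style design versus a wider, Conroe-style one:

```python
# Toy comparison: performance = IPC x clock.
# All figures below are hypothetical, chosen only to illustrate the point.
designs = {
    "deep pipeline (built for clocks)": {"ipc": 1.0, "stock_ghz": 3.8},
    "wide core (built for IPC)":        {"ipc": 1.6, "stock_ghz": 2.4},
}

for name, d in designs.items():
    normalized = d["ipc"] * 2.4              # both forced to run at 2.4GHz
    at_stock   = d["ipc"] * d["stock_ghz"]   # each at the clock it was designed for
    print(f"{name}: {normalized:.2f} @ 2.4GHz vs {at_stock:.2f} at stock (arbitrary units)")

# Clock-normalized, the wide core looks ~60% faster; at the clocks each design
# actually ships at, the two come out nearly even.
```

That's the trap with 'IPC testing' at a fixed frequency: it throws away the clock headroom that the deep-pipeline design traded its IPC for.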

There are plenty of apps (games in particular) where disabling Hyper-Threading or SMT will improve performance. However, virtually no one is going to reboot, enter the BIOS, disable SMT, save and exit, and restart Windows just to get a small boost in performance for one game ... only to have to repeat the process to reenable the feature. I remember all the people complaining about how Turbo Boost was 'cheating' because it was impossible to say for sure how fast a CPU would clock on a consistent basis (which wasn't entirely true). Ten years later, everything uses boost behavior because it's just the smart way of optimizing performance and power.
 
  • Like
Reactions: bit_user

escksu

Reputable
BANNED
Aug 8, 2019
If your 10nm process is worse than your 14nm process, then objectively that is a BIG problem. We're not talking about TSMC 10nm LP here, this is Intel's 10nm tech, intended for CPUs that are supposed to be good. So yeah, if Intel can't make 10nm work better than 14nm++, then the 10nm problems are still present. Of course Ice Lake Xeon chips are in the works, so presumably there's at least some benefit to be had (density?)

Not really, because smaller isn't necessarily better. Although there are benefits to going smaller, there are problems too, and those problems become even more apparent as you shrink. That's why different technologies are needed. Now we have pretty much reached what could be the final level: GAA (gate-all-around).
 

bit_user

Polypheme
Ambassador
The surprise for Nehalem on the client side was that the integrated memory controller brought only small gains. For Athlon 64 the gains were massive -- something like over two-thirds of its ~30% perf/clock gain was due to the IMC.

It's probably because the Core uarch hid memory latencies so well that there wasn't much to gain by moving to the IMC. Indeed the memory latency tests showed just that - noticeable, but not big gains.

That was the disappointment. A lot of us took the figure from Athlon 64 and immediately thought we'd get 20-30% gains in ST. No, we got a fraction of that. We COULD get that, but only with disclaimers and fine print.
In the meantime, cache sizes increased enough that memory bandwidth had ceased to be a major limiting factor of most client/gaming workloads.

In servers the working set is much larger, so it's harder to fool the caches and the workload actually hits the memory subsystem -- hence the IMC was fantastic there.
More cores and more threads definitely puts more strain on the memory subsystem.
 

bit_user

Polypheme
Ambassador
Not really, because smaller isn't necessarily better.
Except that it was designed to be better! Intel fully intended and expected it to have replaced 14 nm, as the dominant manufacturing process, years ago.

This caused them loads of problems, eventually having to go back and add 14 nm capacity, after dragging their feet. They missed customer commitments and starved out whole market segments, because of this.

Although there are benefits to going smaller, there are problems too.
TSMC and Samsung have proven that Intel didn't hit a fundamental wall - they just failed. And if Intel had hit a hard limit, then they wouldn't be bringing up 7 nm and 5 nm after that.

If you want to continue this exchange, you should really do some reading and find out what you've been missing about this saga. You're awfully thin on the facts.
 
  • Like
Reactions: TJ Hooker

InvalidError

Titan
Moderator
In the meantime, cache sizes increased enough that memory bandwidth had ceased to be a major limiting factor of most client/gaming workloads.
Until prefetch becomes practically perfect, we'll still see edge cases such as 0.1% lows where memory latency tweaking can yield 10-30% improvements. Larger caches may be nice, but at the end of the day they are still only a compromise due to the technical infeasibility of similarly fast RAM.
 
  • Like
Reactions: Rdslw and bit_user
A good chunk of this is only happening because Intel has had nothing but problems getting 10nm up to speed with it still not being up to par with where Intel wanted it to be three years ago.

Had everything gone according to Intel's plans when Ice Lake first hit roadmaps around the Broadwell era, Zen would have launched against Ice Lake instead of third-gen Skylake, and AMD would have been crushed, making almost no net headway against Intel.

Intel deserves some credit for screwing up 10nm so AMD could survive long enough to make itself relevant again :)
I don't think it's 10nm alone; the iGPU had the same issues.
You had, like, Gen9, Gen9, Gen9, Gen9, Gen9, Gen9, then Gen11!

So I think "10" is a cursed number (Microsoft has its own problem child number 10, too).
 

bit_user

Polypheme
Ambassador
edge cases such as 0.1% lows where memory latency tweaking can still yield 10-30% improvements. Larger caches may be nice but at the end of the day, they are still only a compromise due to the technical infeasibility of similarly fast RAM.
Yeah, though if the corner cases are rare enough, speeding them up doesn't have much impact on overall performance.

Plenty of benchmarks have been carried out to measure the impact of memory performance. Back in the quad-core days, only iGPUs could stress memory enough to show a meaningful benefit.

Now that core-counts are increasing, memory is again becoming relevant.
 
  • Like
Reactions: Rdslw

bit_user

Polypheme
Ambassador
I don't think it's 10nm alone; the iGPU had the same issues.
You had, like, Gen9, Gen9, Gen9, Gen9, Gen9, Gen9, then Gen11!

So I think "10" is a cursed number
It's a good point. It would be interesting to hear what happened to their Gen10 graphics. However, their Gen11 has been shipping for over a year now, I think? Only at 10 nm, though.

Maybe Gen10 was also slated to launch at 10 nm, but by the time they could finally ship any 10 nm silicon in volume, Gen11 was already set.
 
It's a good point. It would be interesting to hear what happened to their Gen10 graphics. However, their Gen11 has been shipping for over a year now, I think? Only at 10 nm, though.

Maybe Gen10 was also slated to launch at 10 nm, but by the time they could finally ship any 10 nm silicon in volume, Gen11 was already set.
Gen10 was Cannon Lake graphics, except the only Cannon Lake CPU to ship (i3-8121U) had to disable the graphics. Either it wasn't working properly, or yields were so bad that Intel gave up on trying to get working CPU+GPU for the Cannon Lake "launch."

I don't know how accurate they are, but there were reports/rumors/analysis that when Intel released Cannon Lake, yields were about 7-8%. Meaning, even with a small ~70 mm² die size -- which would mean Intel could get about 830 candidate chips per 300 mm wafer -- the initial 10nm process was so fubar that only 60-70 chips per wafer could be salvaged (after disabling the GPU).
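For anyone who wants to sanity-check those numbers, here's a rough sketch of the die-per-wafer and yield arithmetic (assuming a standard 300 mm wafer and the ~70 mm² die and 7-8% yield figures quoted above; the exact gross-die count depends on edge loss and scribe lines, so treat it as a ballpark):

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> float:
    """Common approximation: wafer area / die area, minus an edge-loss term."""
    radius = wafer_diameter_mm / 2
    return (math.pi * radius ** 2) / die_area_mm2 \
           - (math.pi * wafer_diameter_mm) / math.sqrt(2 * die_area_mm2)

die_area = 70.0  # ~70 mm^2 Cannon Lake die, as quoted above
print(f"gross dies (approx.): {gross_dies_per_wafer(300, die_area):.0f}")  # ~930; the ~830 figure implies extra losses

for y in (0.07, 0.08):  # the reported 7-8% yield range
    print(f"at {y:.0%} yield: ~{830 * y:.0f} good dies per wafer")  # ~58-66, in line with the 60-70 estimate
```

So the "60-70 chips salvaged" figure is simply ~830 candidate dice multiplied by a 7-8% yield.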

I'm not sure if the Cannon Lake die was physically 2-core, or if it was 4-core with two cores disabled. I'm assuming it was 2-core, but everything about it says Intel realized its initial 10nm was not going to work and so it had to come up with a "solution" so that it could "ship" 10nm in late 2017.
 
  • Like
Reactions: bit_user
Except that it was designed to be better! Intel fully intended and expected it to have replaced 14 nm, as the dominant manufacturing process, years ago.
Yes, before Zen released, Intel was totally prepared to shift production to 10nm. But then Zen released and could only compete by selling large numbers of cores at cut-throat prices, so Intel did the only logical thing and kept selling 14nm, just adding more cores, very slowly, because that's the only thing Intel had to do.
Even now, with Zen 2+ or whatever it is, per-core performance is still in Intel's favor.
This caused them loads of problems, eventually having to go back and add 14 nm capacity, after dragging their feet. They missed customer commitments and starved out whole market segments, because of this.
Yup, their 14nm was selling so well against Zen that they had to add even more 14nm fabs.
https://www.statista.com/statistics/263565/intels-net-income-since-2004/
They missed customer commitments and starved out whole market segments, because of this.
Which market segments did they starve out? Because from the link it looks like they flooded all the markets. They made 60% more money than in their previous best year (2011).
TSMC and Samsung have proven that Intel didn't hit a fundamental wall - they just failed. And if Intel had hit a hard limit, then they wouldn't be bringing up 7 nm and 5 nm after that.
Maybe they will bring up 7 nm and 5 nm the same way they brought up 10nm: whenever AMD gets its stuff together and finally makes a core decent enough for Intel to finally move off 14nm.
 

bit_user

Polypheme
Ambassador
Yes, before Zen released, Intel was totally prepared to shift production to 10nm. But then Zen released and could only compete by selling large numbers of cores at cut-throat prices, so Intel did the only logical thing and kept selling 14nm, just adding more cores, very slowly, because that's the only thing Intel had to do.
Do you have any sources to back this up, or is it just your impression of what happened?

Even now, with Zen 2+ or whatever it is, per-core performance is still in Intel's favor.
Not IPC and not per-Watt, but in absolute performance.

I've argued this before: AMD is not actually trying to win the single-thread performance crown. AMD's objective is power-efficiency, and merely to be competitive in single-thread performance. That's what matters most, for servers and laptops. Those are the real money-makers, even if gaming PCs get a lot of headlines.

Yup, their 14nm was selling so well against Zen that they had to add even more 14nm fabs.
https://www.statista.com/statistics/263565/intels-net-income-since-2004/
They had to add 14 nm capacity because their 10 nm was delayed and couldn't pick up the load. Also, they had to increase die sizes to compete with AMD, which further crunched their capacity. Finally, demand was strong for all CPU brands, due to global macroeconomic conditions. A rising tide lifts all boats, as they say.

If it merely sold well because Zen sold poorly, then AMD's revenues wouldn't have jumped. And let's not forget that 1st gen Zen was actually made on an inferior process node, even though it was nominally called 14 nm.

Which market segments did they starve out? Because from the link it looks like they flooded all the markets. They made 60% more money than in their previous best year (2011).
Primarily low-end, though even supply of server CPUs was tight for a while. Dude, you have a lot of opinions based on very little information. This is a news site, after all. If you're going to take such a counter-narrative position, you could at least do a better job of researching the subject, especially if you didn't actually follow the news in the first place.

I could go on...

In fact, at one point their shortage of Cascade Lake Xeons was so bad that they started steering server customers to Skylake Xeons and even EPYCs. That was taken from an official Intel memo to its sales channel partners. ...ah, found it:


Maybe they will bring up 7 nm and 5 nm the same way they brought up 10nm: whenever AMD gets its stuff together and finally makes a core decent enough for Intel to finally move off 14nm.
They'll bring them online as soon as they can, not least because AMD isn't their only competitor.

https://www.tomshardware.com/news/intel-10nm-7nm-process-leadership-xe-graphics-gpu-dg1
 

thisisaname

Distinguished
Feb 6, 2009
Why does this matter in a graphics benchmark? If someone is intensely gaming on it, it's going to be plugged in. If you're trying to bring power draw into these comparisons, it's already accounted for: the cooling system and configurable TDP are what limit the processors.
Well, it is supposed to be a mobile processor, so for me, yes, it does matter :)
 
Do you have any sources to back this up, or is it just your impression of what happened?
We have decades of previous behavior from Intel, and they have always done only the bare minimum; switching to 10nm would not have been the bare minimum.
Do you have any sources to back your side up, or is it just your impression of what happened?
Not IPC and not per-Watt, but in absolute performance.

I've argued this before: AMD is not actually trying to win the single-thread performance crown. AMD's objective is power-efficiency, and merely to be competitive in single-thread performance. That's what matters most, for servers and laptops. Those are the real money-makers, even if gaming PCs get a lot of headlines.
Not IPC and not per-Watt, but in absolute performance -- and that's all that matters for most desktop users.
AMD's objective is power-efficiency, and merely to be competitive in single-thread performance. That's what matters most, for servers and laptops.
I have no idea what servers need, but for laptops I'm pretty sure standby time and lightly-threaded performance are way more important than battery life while doing 3D rendering.
Other than that, laptop CPUs are locked to some TDP and can't use any more power, so the only difference would be in absolute performance (within that TDP).
https://www.computerbase.de/2020-05/intel-core-i9-10900k-i5-10600k-test/4/

They had to add 14 nm capacity because their 10 nm was delayed and couldn't pick up the load. Also, they had to increase die sizes to compete with AMD, which further crunched their capacity. Finally, demand was strong for all CPU brands, due to global macroeconomic conditions. A rising tide lifts all boats, as they say.

If it merely sold well because Zen sold poorly, then AMD's revenues wouldn't have jumped. And let's not forget that 1st gen Zen was actually made on an inferior process node, even though it was nominally called 14 nm.
If you have any 10nm numbers from Intel, please let us all know. They have a bunch of 10nm products out; I have no idea what percentage it is, so if you have numbers, tell us.
Intel made twice the money they did in 2016. I don't know how much of this came from 14nm or how much is 10nm sales, and I don't know if they could have made that much even if all the fabs they had before were only 14nm.
If you have any numbers on that, please tell us.
But what I know for sure is that Intel made $20 billion with Zen around, while they made only $10 billion without Zen around.

Primarily low-end, though even supply of server CPUs was tight for a while. Dude, you have a lot of opinions based on very little information. This is a news site, after all. If you're going to take such a counter-narrative position, you could at least do a better job of researching the subject, especially if you didn't actually follow the news in the first place.
That's really funny coming from somebody who only looked at the headlines and didn't read any of the articles.
From your first link:
"Those sources reportedly said that Intel's prioritization of processor shipments to companies like HP, Lenovo, Dell, and Apple have forced other laptop-makers to use AMD CPUs in their products. "
Intel is selling all of their product for top dollar to the top brands, while everybody else is forced to scramble for cheap alternatives to make ends meet ("have forced other laptop-makers to use AMD CPUs in their products").
That's not a shortage; that's the definition of a luxury good.

In fact, at one point their shortage of Cascade Lake Xeons was so bad that they started steering server customers to Skylake Xeons and even EPYCs. That was taken from an official Intel memo to its sales channel partners. ...ah, found it:

Yeah, and it's hurting Intel soooo much... double the net income.
This is not a shortage; this is a company producing twice as much product as before.
 

bit_user

Polypheme
Ambassador
We have decades of previous behavior from Intel, and they have always done only the bare minimum; switching to 10nm would not have been the bare minimum.
Okay, that's a "no". No sources. Unless we're counting your imagination as a source (which I don't).

Do you have any sources to back your side up, or is it just your impression of what happened?
This is a news site, and I actually read it! I included 12 links in the last post. I guess you didn't make it that far.

But what I know for sure is that Intel made $20 billion with Zen around, while they made only $10 billion without Zen around.
I have some excellent elephant repellent you might wish to buy. I know it works, because I haven't seen a single elephant while I've been using it.

...I also didn't see any elephants before, but that's beside the point.

That's really funny coming from somebody who only looked at the headlines and didn't read any of the articles.
From your first link:
I read all of those articles, when they were published. Did you even click more than one of them?

"Those sources reportedly said that Intel's prioritization of processor shipments to companies like HP, Lenovo, Dell, and Apple have forced other laptop-makers to use AMD CPUs in their products. "
Intel is selling all of their product for top dollar to the top brands, while everybody else is forced to scramble for cheap alternatives to make ends meet ("have forced other laptop-makers to use AMD CPUs in their products").
That's not a shortage; that's the definition of a luxury good.
Intel's business model is not to artificially restrict supply. If they could've satisfied demand from other customers, they'd have made even more money.

Yeah, and it's hurting Intel soooo much... double the net income.
This is not a shortage; this is a company producing twice as much product as before.
You misunderstand the point, which is that by failing to satisfy all of the market demand for their CPUs, Intel left an opening for AMD to get a toe-hold. And, more recently, by Intel's failure to get better-performing, higher-power 10 nm CPUs on the market, AMD is really starting to gain momentum!

I'm afraid there's not enough room inside your reality-distortion field for both of us.
 
Yeah, and it's hurting Intel soooo much... double the net income; this is not a shortage, this is a company producing twice as much product as before.
Income is not the same thing as being the better technology, or not having technical problems with a product. AMD was objectively the better CPU for several years back in 2002-2006, and made some gains, but Intel still made a ton of money. AMD is currently objectively a better CPU in many areas. Talking about finances is fine for investors, but it doesn't actually mean AMD is worse or Intel is better. You're just throwing up red herring facts.

Everything that has happened with Intel's 10nm over the past four years proves there have been numerous problems. Yes, things are better now, but 10nm certainly isn't where Intel wants it to be. If 10nm had worked out properly, Intel would absolutely not have built more 14nm fabs -- they would have been 10nm instead.

Imagine a car company makes a popular vehicle -- let's just use the Toyota Camry as an example. Every year, Toyota releases a new model Camry. Now imagine in 2020, Toyota says, "Our 2019 model Camry is SO POPULAR that we're not going to do a new 2020 model. In fact, we're going to build more factories to make the 2019 Camry -- that's how good it is!" No one would believe for a second that Toyota was recycling an old design just because it was popular. And then if someone says, "Look -- Toyota profits are up!" a year later, that doesn't mean there wasn't a problem with the 2020 Camry design. And if Toyota does Camry 2019+ and Camry 2019++, and continues making money, that still says nothing about whether or not there were problems with the 2020 model.

If your goal is to prove that Intel isn't going to go bankrupt, fine. I don't think anyone here expects Intel to disappear any time soon. Intel's CPUs also aren't bad -- and they're also not perfect. They're better at some tasks, worse at others, and for a huge chunk of the population you could give them an AMD or Intel based PC and they wouldn't know or care about the difference in CPU, just so long as it worked well and was fast enough.
 
Intel's business model is not to artificially restrict supply. If they could've satisfied demand from other customers, they'd have made even more money.
You previously mentioned how badly AMD handled the mining craze, so why do you think Intel should make the same mistakes?
CPU sales have doubled right now (or at least the money made from them has), and there is zero reason to think this will last forever. They are not going to dump huge amounts of money into producing more CPUs when they just don't know how long this can go on.

Yeah, the rest of your post has degraded into mud-slinging, so I won't respond to any of that.

Income is not the same thing as being the better technology, or not having technical problems with a product. AMD was objectively the better CPU for several years back in 2002-2006, and made some gains, but Intel still made a ton of money. AMD is currently objectively a better CPU in many areas. Talking about finances is fine for investors, but it doesn't actually mean AMD is worse or Intel is better. You're just throwing up red herring facts.
It doesn't matter which one is better or which has better technology.
A capitalist company is measured by its capital.
Imagine a car company makes a popular vehicle -- let's just use the Toyota Camry as an example. Every year, Toyota releases a new model Camry. Now imagine in 2020, Toyota says, "Our 2019 model Camry is SO POPULAR that we're not going to do a new 2020 model. In fact, we're going to build more factories to make the 2019 Camry -- that's how good it is!" No one would believe for a second that Toyota was recycling an old design just because it was popular. And then if someone says, "Look -- Toyota profits are up!" a year later, that doesn't mean there wasn't a problem with the 2020 Camry design. And if Toyota does Camry 2019+ and Camry 2019++, and continues making money, that still says nothing about whether or not there were problems with the 2020 model.
If they make it 1 m longer and add another bench of seats to the 2019 Camry, it's not the 2019 model anymore -- and if it sells just as well, or in fact twice as well, then obviously it was a very good decision to create and sell it.
If your goal is to prove that Intel isn't going to go bankrupt, fine. I don't think anyone here expects Intel to disappear any time soon. Intel's CPUs also aren't bad -- and they're also not perfect. They're better at some tasks, worse at others, and for a huge chunk of the population you could give them an AMD or Intel based PC and they wouldn't know or care about the difference in CPU, just so long as it worked well and was fast enough.
"If your goal is to prove that Intel isn't going to go bankrupt, fine."
Lol, there isn't even a sign of them slowing down. They made the same amount of money the last two years, which might very well be their cap. Zen has allowed Intel to make a ton of money.
"They're better at some tasks, worse at others, "
AMD is selling server CPUs to Fortnite gamers; of course they won't be making much money. They are completely mis-marketing their products.
 

bit_user

Polypheme
Ambassador
You previously mentioned how badly AMD handled the mining craze, so why do you think Intel should make the same mistakes?
CPU demand isn't as volatile as the effect mining had on GPUs, and Intel, doing production in-house, doesn't get stuck with as much inventory on its hands as when you're contracting out the manufacturing.

Clearly, Intel dragged its feet on building more 14 nm capacity, perhaps hoping the 10 nm problems would get sorted out, or maybe fearing the demand would dry up by the time they built it. So, there's some logic to what you're saying. However, that's different than your previous claim that they were simply restricting supply to drive up costs.

In other words, I can sort of agree with what you're saying now.

Yeah, the rest of your post has degraded into mud-slinging, so I won't respond to any of that.
I'm not saying it's intentional, but you have a way of arguing broad points without the facts to back them up, which really frustrates people. If you feel a lot of negativity coming your way, it might have something to do with that.

In general, when you go against the tide of opinion or reach to make very broad claims, it would help if you can check your facts and maybe cite some references.

It doesn't matter which one is better or which has better technology.
A capitalist company is measured by its capital.
Are you viewing them as an investor or a customer? What's good for one group is not necessarily good for the other. In fact, the interests of the two groups are quite at odds.

Investors want Intel to lock up the market by any means necessary, so they can cut R&D budgets and deliver more expensive chips year after year that are cheaper to make and offer minimal benefits to customers. That's how you maximize profits.

Customers want cheap products with real improvements and innovation. Innovation costs money, which investors don't like.

So, financial results can be misleading. They also don't answer the counter-factuals, such as what if Intel had succeeded in ramping up 10 nm, when they originally planned?

if it sells just as well, or in fact twice as well, then obviously it was a very good decision to create and sell it.
That depends on why it sells that well, and whether their original plans would've resulted in even greater sales.

There's a problem with Jarred's car analogy, which is that Intel had to make CPUs bigger, in order to compete with Ryzen and continue offering value over its existing 14 nm CPUs. This reduced the number of chips per wafer, which is exactly the opposite of what you want, in a supply crunch. So, sticking with 14 nm was a very costly decision, as opposed to the example of just continuing to build the same old car as last year.

Lol, there isn't even a sign of them slowing down.
At the beginning of the year, the VP heading their data center group said he wanted to keep their market share losses down to the low double-digit %. I forget if he explicitly referenced AMD, or if he was also worried about other factors.

Zen has allowed Intel to make a ton of money.
How do you figure? How can you know that they wouldn't have made even more money, without Zen in the picture?

AMD is selling server CPUs to Fortnite gamers; of course they won't be making much money. They are completely mis-marketing their products.
What should they be doing differently?
 

agello24

Distinguished
Feb 8, 2012
When was the last time they updated the benchmarks to run on AMD Ryzen CPUs? The Intel chips have been the same since the Core 2 Duo. Nothing has really changed but the pins.
 

InvalidError

Titan
Moderator
When was the last time they updated the benchmarks to run on AMD Ryzen CPUs? The Intel chips have been the same since the Core 2 Duo. Nothing has really changed but the pins.
The architecture changed quite a bit from Core 2 to current-day, some of the most notable ones being:
  • integrated PCIe for the GPU (and now NVMe too)
  • integrated memory controller
  • much deeper re-order buffers (96 entries for Core 2, 192 for Haswell, 224 for Skylake, and 352 for Ice Lake)
  • more accurate and deeper branch prediction (the main reason it needs such a deep re-order buffer)
  • the scheduler went up from five execution ports to 10
  • the i-series has SMT, Core 2 does not

Modern cores are monsters compared to Core 2.