News Intel Gen12 Xe Graphics Leads AMD's Vega in Integrated GPU Performance

mikewinddale

Distinguished
Dec 22, 2016
290
55
18,940
I think AMD still overall wins here.

The AMD is only barely slower in graphics but massively faster in CPU. So overall, the AMD is faster.

Furthermore, it's easier to upgrade a graphics card than a CPU. Mechanically, adding a graphics card is easier than removing and replacing a CPU and its heatsink. Also, in terms of future compatibility, pretty much any PCIe x16 card in the future will be a suitable upgrade, while CPUs are much more limited.

So if I were going for the overall fastest system with the most flexible upgrade path, with the fewest compromises, I'd still go with AMD.
 

cyrusfox

Distinguished
I think AMD still overall wins here.

The AMD is only barely slower in graphics but massively faster in CPU. So overall, the AMD is faster.

Furthermore, it's easier to upgrade a graphics card than a CPU. Mechanically, adding a graphics card is easier than removing and replacing a CPU and its heatsink. Also, in terms of future compatibility, pretty much any PCIe x16 card in the future will be a suitable upgrade, while CPUs are much more limited.

So if I were going for the overall fastest system with the most flexible upgrade path, with the fewest compromises, I'd still go with AMD.
Neither one is upgradeable; these are both SoC mobile chips...
You could potentially augment either with a discrete GPU, but you would still need heat pipes routed to it for cooling.

It is exciting to see 8 cores on a laptop chip. I'm not convinced it's overkill, but an 8-core CPU paired with only an integrated GPU is a bit of an unbalanced matchup. We will see how the market responds to AMD's Ryzen 4000 mobile offerings.
 
  • Like
Reactions: JarredWaltonGPU
Jun 9, 2020
1
0
10
A CPU is designed to handle a wide range of tasks quickly (as measured by clock speed), but it is limited in how many tasks it can run concurrently. A GPU is designed to render high-resolution images and video quickly and concurrently. We all need images/video (GPU), so INTC wins over AMD.
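In code terms, the distinction I mean looks roughly like this (a toy C++ sketch of my own, nothing Intel- or AMD-specific; the parallel version merely stands in for the massive per-pixel concurrency a GPU provides):

```cpp
// Toy illustration only: brightening one 1080p RGBA frame. A CPU-style loop
// touches one pixel at a time as fast as its clock allows; a GPU effectively
// applies the same tiny operation to huge numbers of pixels at once.
// std::execution::par_unseq is just a stand-in for that data parallelism
// (GCC needs -std=c++17 and -ltbb to build it).
#include <algorithm>
#include <execution>
#include <vector>
#include <cstdint>

int main() {
    std::vector<std::uint8_t> frame(1920 * 1080 * 4, 100);

    // CPU-style: sequential, one pixel after another.
    for (auto &px : frame)
        px = static_cast<std::uint8_t>(std::min(255, px + 20));

    // GPU-style: the same operation on every pixel, concurrently.
    std::for_each(std::execution::par_unseq, frame.begin(), frame.end(),
                  [](std::uint8_t &px) {
                      px = static_cast<std::uint8_t>(std::min(255, px + 20));
                  });
    return 0;
}
```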
 

JayNor

Reputable
May 31, 2019
429
86
4,760
Is there confirmation of an 8-core Tiger Lake-H? That seems like it would be a more appropriate chip for the comparison.

"Then there's the high-performance Tiger Lake-H lineup that would consist of up to 8 core and 16 thread chips based on the new Willow Cove architecture. The CPUs would carry up to 34 MB of cache that's 24 MB L3 (3 MB L3 per core) and 10 MB L2 (1.25 MB per core)."

 

ChaosFenix

Commendable
Sep 20, 2019
8
4
1,515
So I don't think AMD has much to fear here.

1. They are running an older, GCN-based architecture that will hopefully be replaced soon with RDNA or even RDNA 2, each of which promises roughly a 50% increase in perf/watt per generation. So you are comparing Intel's latest graphics with something that, while good, is a couple of generations behind at this point.

2. Intel is using 50% more shaders here to eke out a 5% win. From the sounds of it, AMD adding a single CU would give it a ~12% boost and take the crown back. If AMD matched Intel on shader count, it would be winning by about 40% (rough math at the end of this post).

3. We have no numbers on power consumption, which matters a lot in this segment. Intel could be running this thing at 12 W, which would make the result more impressive, or at 25 W, which would make it a non-issue. Until we can compare like for like, there just isn't much to go on.

I am not saying AMD can rest easy here, as that is what got Intel into trouble, but AMD already has things on its roadmap that will make this a non-issue by the time it actually launches. If they brought the count back up to 10-11 CUs, like the 3700U or 3400G, and moved to RDNA in the next generation, they would easily be beating it by ~70%. With RDNA 2 you could expect 2x+. I would actually love to see something like a Ryzen 9 5900U with 8C/16T and 11 RDNA 2 CUs at 1600 MHz+. You could do some decent AAA gaming with that.
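For what it's worth, here is the back-of-envelope math behind point 2 as a quick C++ snippet (my own rough sketch, assuming the usual 64 shaders per Vega CU and 8 ALUs per Gen12 Xe EU, and assuming performance scales linearly with shader count):

```cpp
// Rough math behind point 2, assuming 8 CUs x 64 shaders for Vega 8,
// 96 EUs x 8 ALUs for Gen12 Xe, and (unrealistically) linear scaling.
#include <cstdio>

int main() {
    const double amd_shaders   = 8 * 64;   // 512
    const double intel_shaders = 96 * 8;   // 768
    const double intel_lead    = 1.05;     // the ~5% Time Spy win in the leak

    std::printf("Intel shader advantage: %+.0f%%\n",
                (intel_shaders / amd_shaders - 1.0) * 100);              // ~ +50%
    std::printf("One extra CU for AMD:   %+.1f%%\n",
                (9.0 / 8.0 - 1.0) * 100);                                // ~ +12.5%
    std::printf("AMD at equal shaders:   %+.0f%% ahead\n",
                (intel_shaders / amd_shaders / intel_lead - 1.0) * 100); // ~ +43%
    return 0;
}
```

Treat that last figure as a ceiling, since real GPU scaling is sub-linear.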
 
  • Like
Reactions: bit_user

gruffi

Distinguished
Jun 26, 2009
38
25
18,535
Actually, both scores, CPU and GPU, look quite underwhelming. But this also depends on the actual clocks and power consumption. The Ryzen 3 3100 with 4C/8T and much lower boost clocks scores ~5000 on the CPU side. Sure, that is a 65 W desktop part, but it could easily fit into 15 W with Vega 8 if designed as a mobile processor. Let alone mobile Zen 3, which will probably launch in 1H next year.

And just ~4% more GPU performance than the old Vega? This is no real progress for Intel. Two problems. First, Renoir was never designed to break iGPU records; AMD chose Vega because it was cheap to integrate and enabled 8 powerful Zen 2 cores within 15 W. RDNA 2 based mobile processors will offer a completely different level of performance. And second, Intel has never looked that bad in synthetic benchmarks, but its real-world performance is often much worse. I guess that 4% advantage in Time Spy turns into a 10-20% disadvantage in real games.
 

deksman

Distinguished
Aug 29, 2011
233
19
18,685
I think AMD still overall wins here.

The AMD is only barely slower in graphics but massively faster in CPU. So overall, the AMD is faster.

Furthermore, it's easier to upgrade a graphics card than a CPU. Mechanically, adding a graphics card is easier than removing and replacing a CPU and its heatsink. Also, in terms of future compatibility, pretty much any PCIe x16 card in the future will be a suitable upgrade, while CPUs are much more limited.

So if I were going for the overall fastest system with the most flexible upgrade path, with the fewest compromises, I'd still go with AMD.

Historically, Intel may have posted certain benchmark scores, but those were not reflected in actual gaming (or graphics software in general), where AMD was consistently faster.

Even with a 4.4% lead on Intel's end, that's just above the margin of error, and I suspect AMD would STILL come out faster in actual games (but we'll see).

Although yes, I agree with the sentiment that AMD is faster (and a much better offering) overall.

Plus, the article's claim that the tides have somehow turned, with AMD now offering the more powerful CPU but the weaker GPU, doesn't exactly hold up, since Intel's iGPUs have been lagging behind AMD's by 30% or more... the difference here is 4.4% in Intel's favor (and we don't know if that will even be reflected in actual games).
 

Chung Leong

Reputable
Dec 6, 2019
493
193
4,860
It's hard to scale beyond 4x CPU without transactional memory. Lock contention kills you. I wonder if Intel finally managed to fix TSX with Tiger Lake.
 

watzupken

Reputable
Mar 16, 2020
1,019
515
6,070
While it's interesting to see leaks and such, I think there are too many leaks about how Tiger Lake-U performs and nothing shared about the underlying specs and testing conditions. On the CPU side, Tiger Lake will still be at a disadvantage: no matter how hard Intel tries to boost clock speed, they simply cannot make up the core deficit in general applications. Also, higher clock speed translates into higher power consumption and heat, which is capped in a mobile device/laptop.

GPU-wise, I am more interested to see if they can pull ahead of Vega 8 in actual games. Current Ice Lake GPUs show promising improvements in benchmarks, but less impressive gains in games.
 
  • Like
Reactions: bit_user

bit_user

Polypheme
Ambassador
Neither one is upgradeable; these are both SoC mobile chips...
The point is still valid, because people who really care about GPU performance are going to buy a laptop with a separate GPU, anyhow.

So, AMD only needs to be comparable to Intel on iGPU performance. I do think they're probably regretting trimming back the number of CUs, but if the choice was that or losing a couple of CPU cores, I think they made the right one.
 
  • Like
Reactions: RodroX

bit_user

Polypheme
Ambassador
A CPU is designed to handle a wide range of tasks quickly (as measured by clock speed), but it is limited in how many tasks it can run concurrently. A GPU is designed to render high-resolution images and video quickly and concurrently. We all need images/video (GPU), so INTC wins over AMD.
By that logic, AMD APUs have been winning over Intel for the past decade, which they clearly haven't.

No, who wins or loses must also take into account things like battery life, more benchmarks & testing, and pricing. Ultimately, the market will decide.
 

bit_user

Polypheme
Ambassador
Is there confirmation of an 8-core Tiger Lake-H? That seems like it would be a more appropriate chip for the comparison.
Depends mainly on other factors, like pricing and power consumption.

If AMD can dial down pricing and power usage to undercut Intel's 8-core models, then this might be what it actually competes with in the market.
 

bit_user

Polypheme
Ambassador
If they brought the count back up to 10-11 CUs, like the 3700U or 3400G, and moved to RDNA in the next generation, they would easily be beating it by ~70%. With RDNA 2 you could expect 2x+. I would actually love to see something like a Ryzen 9 5900U with 8C/16T and 11 RDNA 2 CUs at 1600 MHz+. You could do some decent AAA gaming with that.
APUs tend to be very constrained by memory bandwidth. My guess about their reason for offering only Vega 8 is that they found the additional CUs added very little real performance.

They can and should definitely bring RDNA/RDNA2 to their APUs (it's a shame they haven't already). But short of adding HBM2, like they did for Kaby Lake-G, I don't foresee any iGPU supporting AAA gaming. Of course, that wasn't even really an iGPU.
 

Olle P

Distinguished
Apr 7, 2010
720
61
19,090
2. Intel is using 50% more shaders here to eke out a 5% win. From the sounds of it, AMD adding a single CU would give it a ~12% boost and take the crown back. If AMD matched Intel on shader count, it would be winning by about 40%.
I agree that AMD would win with just one more CU, but the numbers are probably a bit off, since performance doesn't scale in a linear fashion.

And just ~4% more GPU performance than the old Vega? This is no real progress for Intel. ...
Compared to how Intel's current iGPUs perform, it's a huge leap!

By that logic, AMD APUs have been winning over Intel...
... who wins or loses must also take into account things like battery life, more benchmarks & testing, and pricing. Ultimately, the market will decide.
One important reason Intel is so big in laptops is their marketing grip on the manufacturers.
I suspect that's still a factor, given Asus' designs of the latest Ryzen-based gaming laptops. (Totally inferior cooling that reduces computing performance!) A manufacturer that makes Ryzen laptops without built-in flaws probably won't get any money from Intel.
 

bit_user

Polypheme
Ambassador
I suspect that's still a factor, given Asus' designs of the latest Ryzen-based gaming laptops. (Totally inferior cooling that reduces computing performance!) A manufacturer that makes Ryzen laptops without built-in flaws probably won't get any money from Intel.
It'd be nice if this were based on some actual reporting. I suspect what happens is they sell the Ryzen models down-market, and simply shave costs by under-cooling them.

What will be interesting to see is whether that still happens with AMD's top-end APUs. Now that they're delivering market-leading CPU performance, they might get positioned as a premium product rather than a value model.
 

bit_user

Polypheme
Ambassador
It's hard to scale beyond 4x CPU without transactional memory. Lock contention kills you.
Wow, so just toss those 64-core Threadrippers and EPYCs in the trash, then!

No, whether lock-contention is a problem totally depends on what you're doing!

However, if you think TSX is going to solve it, then I fear you don't understand the real problem with lock contention. The reason lock contention is so bad is that it forces costly context-switches, and TSX doesn't eliminate that.

TSX is only good for optimizing the case where the lock is available (i.e. not in contention). That's still useful in lock-intensive code, but it's not a magic bullet for solving lock contention. Interestingly enough, the starkest example I've seen of it helping is actually a PS3 emulator.
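For reference, the lock-elision pattern RTM/TSX enables looks roughly like this (a toy C++ sketch with made-up names, not code from that emulator or any real project):

```cpp
// Toy sketch of RTM-based lock elision.
// Build with: g++ -std=c++17 -mrtm -pthread elision.cpp
// Requires a CPU with working TSX/RTM, otherwise _xbegin() will fault.
#include <immintrin.h>
#include <atomic>
#include <thread>
#include <cstdio>

static std::atomic<int> lock_word{0};  // 0 = free, 1 = held (toy spinlock)
static long counter = 0;               // shared state guarded by the lock

static void locked_increment() {
    int expected = 0;
    while (!lock_word.compare_exchange_weak(expected, 1))  // acquire
        expected = 0;
    ++counter;
    lock_word.store(0, std::memory_order_release);         // release
}

static void elided_increment() {
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        // Reading the lock word puts it in the transaction's read-set, so if
        // another thread takes the real lock, this transaction aborts.
        if (lock_word.load(std::memory_order_relaxed) != 0)
            _xabort(0xff);
        ++counter;           // the "critical section": no lock ever acquired
        _xend();             // commit
    } else {
        locked_increment();  // aborted (contention, capacity...): take the lock
    }
}

int main() {
    std::thread a([] { for (int i = 0; i < 100000; ++i) elided_increment(); });
    std::thread b([] { for (int i = 0; i < 100000; ++i) elided_increment(); });
    a.join();
    b.join();
    std::printf("counter = %ld\n", counter);  // 200000 if everything is consistent
    return 0;
}
```

The key point for the contention argument: the elided path only wins when the lock would have been free anyway. The moment threads genuinely collide, they all abort onto the fallback lock, and you're back to blocking and context switches.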

I wonder if Intel finally managed to fix TSX with Tiger Lake.
Yeah, I remember Intel making noise about it in Haswell, and then they had to turn around and disable it. Then it was Skylake(?) that was supposed to have finally fixed it, but didn't they later have to disable that as well?
 

bit_user

Polypheme
Ambassador
We don't even know the TDP of the Intel part! When we are talking laptop APUs, TDP is everything. If it's a 45-watt part, then Intel is actually losing to AMD. Let's wait and see before we claim an Intel victory over some random leak.
If the rumors of 8-core parts are true, then it's definitely not a 45-watt part. Perhaps it's running at 25 W, though.