News Gen11 Iris Plus Tested: Could Tiger Lake and Xe Double Performance?

Sure, some of that is TDP, but Vega 11 (3400G) stretches the lead to 67%. Clearly it's not just a memory bandwidth bottleneck holding Iris Plus back, especially since the Razer laptop has more bandwidth than the Ryzen APUs.

Pretty much ALL is due to TDP. The 1065G7's Iris Plus graphics is just as fast as mobile Vega in Picasso.

The Willow Cove CPU architecture is different as well, but for mobile solutions, it looks like Tiger Lake could use about 70% more transistors and die area on the GPU than the CPU.

Incorrect. The Xe iGPU in Tiger Lake is barely larger than the one in Ice Lake. The CPU gets quite a bit larger, though, with the caches. All in all, in Tiger Lake the 4 CPU cores and their caches are roughly equal in size to the 96 EU Xe GPU.
 
Pretty much ALL is due to TDP. The 1065G7's Iris Plus graphics is just as fast as mobile Vega in Picasso.

I'm not really sure it's completely due to TDP. Iris Plus is nice for decent graphics at lower power usage, but I'm pretty sure it's efficiency-optimized, not built for higher TDPs. Even if you pushed the same power to the 1065G7's graphics as what is available to the Vega 11 graphics, it probably wouldn't scale that much higher. I'm sure Intel could have released a higher-TDP-optimized Ice Lake with this gen, but decided against it. I'm also not sure where you are getting that Iris Plus graphics is just as fast as mobile Vega when Tom's tests here clearly show the contrary.
 
I'm not really sure it's completely due to TDP. Iris Plus is nice for decent graphics at lower power usage, but I'm pretty sure it's efficiency-optimized, not built for higher TDPs. Even if you pushed the same power to the 1065G7's graphics as what is available to the Vega 11 graphics, it probably wouldn't scale that much higher. I'm sure Intel could have released a higher-TDP-optimized Ice Lake with this gen, but decided against it. I'm also not sure where you are getting that Iris Plus graphics is just as fast as mobile Vega when Tom's tests here clearly show the contrary.
I didn't test mobile Vega chips, though. A 35W Ryzen 7 3750H with Vega 10 Graphics isn't going to be as fast as a 65W Ryzen 5 3400G with Vega 11 Graphics. I don't think it would be that much slower, however -- probably 15-20%.

But you're definitely correct that Ice Lake and Gen11 wouldn't continue to increase in performance with higher TDPs. Going from 15W to 25W is a 67% increase in power -- and at the outlet I saw power use go from ~30W to ~45W -- and performance improved by 35% on average. If the TDP were raised to 65W, unless the GPU clocks could scale much higher than 1100 MHz, performance and power would max out at some point well below the 65W limit.
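A quick back-of-the-envelope check of those numbers (using the approximate outlet readings above; the perf-per-watt comparison is just my rough illustration, not a measured figure):

```python
# Rough scaling check using the figures from this post: 15W -> 25W TDP,
# roughly 30W -> 45W at the outlet, ~35% average performance uplift.
tdp_low, tdp_high = 15, 25
wall_low, wall_high = 30, 45   # approximate wall-power readings
perf_gain = 0.35               # average framerate improvement

tdp_increase = tdp_high / tdp_low - 1        # ~0.67 -> 67% more TDP
wall_increase = wall_high / wall_low - 1     # ~0.50 -> 50% more at the outlet
perf_per_watt = (1 + perf_gain) / (1 + wall_increase)

print(f"TDP increase:         {tdp_increase:.0%}")
print(f"Wall-power increase:  {wall_increase:.0%}")
print(f"Performance gain:     {perf_gain:.0%}")
print(f"Perf/W at 25W vs 15W: {perf_per_watt:.2f}x")  # below 1.0x = efficiency drops
```

In other words, the 25W configuration already delivers less performance per watt than the 15W one, which is why pushing toward 65W would hit diminishing returns well before the limit.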

As for the relative sizes of GPU and CPU ... I need to correct that. I've looked at some die shots now, and it does appear that Xe Graphics is far more compact than Gen11. Interesting. It's about the same size for the CPU cores and GPU cores in TGL.
 
I didn't test mobile Vega chips, though. A 35W Ryzen 7 3750H with Vega 10 Graphics isn't going to be as fast as a 65W Ryzen 5 3400G with Vega 11 Graphics. I don't think it would be that much slower, however -- probably 15-20%.
Oops. I don't know why I jumped to associating Vega 11 with the mobile Vega chips. I was just thinking that mobile Vega, especially now in the 4000 series mobile chips, is just as fast if not faster than in the 3000 series mobile chips.
 
I'm not really sure it's completely due to TDP. Iris Plus is nice for decent graphics at lower power usage, but I'm pretty sure it's efficiency-optimized, not built for higher TDPs.

I'm not talking about Iris scaling, but it'll likely perform better. Even the UHD 630 is faster than the mobile UHD 620.

I was talking about Vega.
https://www.techspot.com/review/2003-amd-ryzen-4000/

You can see that the 3750H is quite a bit faster than the 3700U. Once you reach playable frame rates, it's not only the GPU that matters -- the faster CPU starts to come into play as well. Also, the higher TDP headroom avoids scenarios where some games perform poorly because either the CPU or the GPU hogs too much of the power.
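To illustrate that power-sharing point, here's a toy sketch (hypothetical numbers, not AMD's actual boost algorithm): with a small package budget, whichever block gets priority starves the other, while a larger budget lets both run at the power they want.

```python
# Toy model of a shared package power budget (hypothetical numbers only,
# not how the real boost/power-management firmware behaves).
def split_budget(budget_w, cpu_demand_w, gpu_demand_w):
    """Give the CPU its requested power first, hand the GPU whatever is left."""
    cpu_w = min(cpu_demand_w, budget_w)
    gpu_w = min(gpu_demand_w, budget_w - cpu_w)
    return cpu_w, gpu_w

for budget in (15, 35):  # 15W U-series budget vs 35W H-series budget
    cpu_w, gpu_w = split_budget(budget, cpu_demand_w=12, gpu_demand_w=18)
    print(f"{budget}W package: CPU gets {cpu_w}W, GPU gets {gpu_w}W")
# 15W package: CPU gets 12W, GPU gets 3W  -> the GPU is starved and frame rates suffer
# 35W package: CPU gets 12W, GPU gets 18W -> both get the power they asked for
```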

Thanks @JarredWaltonGPU
 
It really puzzles me that Intel never made Iris Graphics standard in desktop CPUs ...
The Iris chips are pretty expensive, and anyone wanting good-performance graphics in a PC can easily do better by using a non-Iris CPU and popping in a $100 dGPU. Same or less $$$.

Where Iris makes a lot of sense is in thin & light laptops.

BTW, Intel did sell some NUCs with Iris graphics. I've seen a Broadwell i7 NUC with 48 EUs.
 
I've looked at some die shots now, and it does appear that Xe Graphics is far more compact than Gen11. Interesting.
It is interesting. I know Gen12 dropped register scoreboarding, but that shouldn't save much die area. I know they also cut back on fp64, but I thought Gen11 already had that.


It's about the same size for the CPU cores and GPU cores in TGL.
The CPU cores in TGL are a new uArch, and therefore probably bigger than ICL. I think they also have some new AVX-512 instructions, for deep learning.

Maybe it's more a case of TGL cores getting bigger - not Gen12 EUs getting smaller.
 
It is interesting. I know Gen12 dropped register scoreboarding, but that shouldn't save much die area. I know they also cut back on fp64, but I thought Gen11 already had that.

The CPU cores in TGL are a new uArch, and therefore probably bigger than ICL. I think they also have some new AVX-512 instructions, for deep learning.

Maybe it's more a case of TGL cores getting bigger - not Gen12 EUs getting smaller.
Yeah, I expected the Willow Cove cores to be larger than the Sunny Cove cores. But with 50% more EUs, I really expected Xe in TGL to be a large chunk of the die area. On ICL, the GPU is clearly larger than the CPU section. It certainly makes me wonder just how much wasted 'junk' was present in the Gen11 (and earlier) graphics that ended up getting cut / optimized for Xe.
 
The Iris chips are pretty expensive, and anyone wanting good-performance graphics in a PC can easily do better by using a non-Iris CPU and popping in a $100 dGPU. Same or less $$$.

Where Iris makes a lot of sense is in thin & light laptops.

BTW, Intel did sell some NUCs with Iris graphics. I've seen a Broadwell i7 NUC with 48 EUs.

First, Iris is not as expensive as you make it look; it is nowhere near the $100 dGPU you mentioned. It is roughly a $30 increase in the CPU price.

Actually, onboard ARM GPUs are becoming faster than Intel's non-Iris graphics by DOUBLE, and they are much cheaper than any Intel offering.
 
First, Iris is not as expensive as you make it look; it is nowhere near the $100 dGPU you mentioned. It is roughly a $30 increase in the CPU price.
I'm just going by the list price, which is $426 for this CPU:

https://ark.intel.com/content/www/u...1065g7-processor-8m-cache-up-to-3-90-ghz.html

For a quad-core CPU, that's really expensive. Especially considering the CPU part isn't even as fast as the quad-core Comet Lake models they released for laptops.


Actually, onboard ARM GPUs are becoming faster than Intel's non-Iris graphics by DOUBLE, and they are much cheaper than any Intel offering.
Source?