News We benchmarked Intel's Lunar Lake GPU with Core Ultra 9 — drivers still holding back Arc Graphics 140V performance


Pierce2623

Prominent
Dec 3, 2023
486
368
560
You were literally a minute too late on posting this. It has already been fixed. I put in the wrong shader count, because I got confused on the various Intel architectures. MTL is 128 shaders per Xe-core, while LNL is 64 shaders per Xe-core but each one is twice as potent (SIMD16 instead of SIMD8).
Ahh, fair enough. I kind of assumed it happened from computing it on your own rather than quoting the marketing materials. Since that's why it happened, it's absolutely a forgivable mistake.
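For anyone sanity-checking that, here's a quick sketch of the arithmetic, using only the figures quoted above rather than official spec sheets:

Python:
# Listed shader count scaled by SIMD width, relative to a SIMD8 baseline.
# Figures are as stated in the quoted reply, not pulled from Intel's spec sheets.
def effective_shaders(listed_shaders: int, simd_width: int, baseline_simd: int = 8) -> float:
    return listed_shaders * (simd_width / baseline_simd)

mtl = effective_shaders(128, simd_width=8)   # Meteor Lake: SIMD8 units
lnl = effective_shaders(64, simd_width=16)   # Lunar Lake: half the count, twice the width
print(mtl, lnl)  # 128.0 128.0 -> the same per-Xe-core FP32 throughput on paper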
 
  • Like
Reactions: JarredWaltonGPU

Pierce2623

Prominent
Dec 3, 2023
486
368
560
I notice the Lunar Lake GPU doesn't increase core count yet gets a 30% uplift, whereas AMD pushed for 50% more CUs while getting only a 20% uplift. This is similar to how Apple spends its transistor budget on the CPU pipeline, which gets wider each year, gaining performance without increasing the number of cores.

There is certainly a tension between desktop GPUs, which have all the memory and power they want, and iGPUs; more CUs just doesn't make sense here.
No. That's not what happened. The problem is they're both bandwidth constrained to insane levels. They're both just extracting as much gaming performance as they can from a CPU/GPU combination operating on a memory subsystem that doesn't provide enough bandwidth for the GPU by itself.
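To put rough numbers on the bandwidth point, here's a minimal sketch; the memory configs below are my assumptions for illustration (LPDDR5X-8533 on a 128-bit bus for Lunar Lake, GDDR6-18000 on a 128-bit bus for a small desktop card), so check the actual SKU before quoting them:

Python:
# Peak theoretical bandwidth in GB/s from transfer rate (MT/s) and bus width (bits).
def bandwidth_gb_s(mt_per_s: int, bus_bits: int) -> float:
    return mt_per_s * (bus_bits / 8) / 1000

igpu = bandwidth_gb_s(8533, 128)    # ~136.5 GB/s, shared between CPU and GPU
dgpu = bandwidth_gb_s(18000, 128)   # ~288 GB/s, all of it dedicated to the GPU
print(f"{igpu:.1f} GB/s shared vs {dgpu:.1f} GB/s dedicated")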
 

Eximo

Titan
Ambassador
I'm quite impressed by the performance. I was running an Nvidia 1070 8GB until January this year, and Cyberpunk was one of my favourite games of the last few years, so I have a lot of experience with how it runs after several playthroughs.
To see an iGPU matching the same FPS and settings (1080p medium, Quality upscaling, at around 50 fps) as my dedicated 1070 is surprising and impressive; maybe we will eventually reach a point in the next decade where low-end GPUs are overtaken and made redundant by integrated GPUs built into CPUs.

That kind of already happened. Low-end discrete GPUs haven't really made much of an appearance in the last few generations. Those that do exist pretty much end up in the mobile segment, Nvidia's MX lineup for example, which tends to be made on previous nodes and architectures. The RTX 3050 6GB is the 'fastest' low-end GPU, and it's based on a previous generation. AMD's 6500 XT is barely a GPU, but I suppose it counts. Still no 7000-series replacement, and no 30/40-class cards from Nvidia since the GT 1030.
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
502
2,060
That kind of already happened. Low-end discrete GPUs haven't really made much of an appearance in the last few generations. Those that do exist pretty much end up in the mobile segment, Nvidia's MX lineup for example, which tends to be made on previous nodes and architectures. The RTX 3050 6GB is the 'fastest' low-end GPU, and it's based on a previous generation. AMD's 6500 XT is barely a GPU, but I suppose it counts. Still no 7000-series replacement, and no 30/40-class cards from Nvidia since the GT 1030.
We are still way, way far from that. I have a 6700S in my laptop, which is basically a 6600 restricted to 80 watts, and it's still more than 3 times as fast as the integrated 680M maxed out.
 

Eximo

Titan
Ambassador
We are still way, way far from that. I have a 6700S in my laptop, which is basically a 6600 restricted to 80 watts, and it's still more than 3 times as fast as the integrated 680M maxed out.
Kind of the point I was making. The smallest discrete mobile GPUs are much larger than integrated.

You need only look at something like the Steamdeck to see where things are headed for the average gaming consumer.
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
502
2,060
Kind of the point I was making. The smallest discrete mobile GPUs are much larger than integrated.

You need only look at something like the Steamdeck to see where things are headed for the average gaming consumer.
I get the Steam Deck argument, but that won't work for a laptop, let alone a desktop. Much bigger screens need a much higher resolution, and that's where the limited bandwidth of iGPUs kind of gives up.
 

Eximo

Titan
Ambassador
I get the Steam Deck argument, but that won't work for a laptop, let alone a desktop. Much bigger screens need a much higher resolution, and that's where the limited bandwidth of iGPUs kind of gives up.

The Steam Deck is only Zen 2 / RDNA 2 at 1280x800 (8 CUs) to keep costs down.

The Z1 Extreme in the ROG Ally is Zen 4 / RDNA 3 and has a 1080p display (that only bumps it up to 12 CUs), which is the 780M.

The Z2 Extreme looks to be keeping the 12 CU count, though I don't know why it would end up with less than the 890M; might be a cost thing.

Still, 16 CUs of RDNA 3.5 is really quite a lot of mobile performance, half the size of an RX 7600. If they release a 9700G with that in it, it easily obsoletes the 6500 XT and anything like it.
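To put rough throughput numbers on those CU counts, here's a back-of-envelope sketch; the clock speeds are assumed round figures, and RDNA 3 dual-issue is ignored:

Python:
# Ballpark FP32 TFLOPS for an RDNA-style GPU: CUs x 64 lanes x 2 ops/clock x clock (GHz).
# Clocks below are illustrative assumptions, not measured values.
def rdna_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(rdna_tflops(8, 1.6))    # Steam Deck class,  8 CUs -> ~1.6 TFLOPS
print(rdna_tflops(12, 2.7))   # 780M class,       12 CUs -> ~4.1 TFLOPS
print(rdna_tflops(16, 2.9))   # 890M class,       16 CUs -> ~5.9 TFLOPS
print(rdna_tflops(32, 2.6))   # RX 7600 class,    32 CUs -> ~10.6 TFLOPS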
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
502
2,060
The Steam Deck is only Zen 2 / RDNA 2 at 1280x800 (8 CUs) to keep costs down.

The Z1 Extreme in the ROG Ally is Zen 4 / RDNA 3 and has a 1080p display (that only bumps it up to 12 CUs), which is the 780M.

The Z2 Extreme looks to be keeping the 12 CU count, though I don't know why it would end up with less than the 890M; might be a cost thing.

Still, 16 CUs of RDNA 3.5 is really quite a lot of mobile performance, half the size of an RX 7600. If they release a 9700G with that in it, it easily obsoletes the 6500 XT and anything like it.
I agree, but the problem here is you are comparing it to the 6500 XT. That's not your normal pre-Covid / pre-mining $200 card. The reason iGPUs can replace low-end GPUs isn't because iGPUs have become relatively faster; it's because low-end GPUs have become much, much slower.

If I remember correctly, a 1060 was half the performance of the Titan X Pascal. Nowadays a 4060 is, what, 1/3rd of the 4090?
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
502
2,060
Just tested SOTTR, 1920x1200, medium preset. The iGPU was maxed out at 35W, so I just tested the dGPU at different power levels. Even at just 55W the 6700S is exactly 3x faster, and it can scale up to 90-100W (120W on other laptops).

680M @ 35W SoC = 31 fps
6700S @ 25W (35W total) = 42 fps
6700S @ 35W (50W total) = 69 fps
6700S @ 55W (70W total) = 93 fps


And just for comparison with Lunar Lake and the HX 370: at 720p medium I get 57 fps, so the HX 370 is 22% faster than the 680M and Lunar Lake is 29% faster.
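As a quick cross-check of the scaling (fps values copied straight from the list above):

Python:
# Speedup of each 6700S power point over the 680M baseline from the post above.
baseline_fps = 31                        # 680M at 35W SoC
dgpu_points = {25: 42, 35: 69, 55: 93}   # 6700S: GPU watts -> fps
for watts, fps in dgpu_points.items():
    print(f"6700S @ {watts}W: {fps / baseline_fps:.2f}x the 680M")
# The 55W point works out to 3.0x, matching the 'exactly 3x faster' figure.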
 

Eximo

Titan
Ambassador
I agree, but the problem here is you are comparing it to the 6500 XT. That's not your normal pre-Covid / pre-mining $200 card. The reason iGPUs can replace low-end GPUs isn't because iGPUs have become relatively faster; it's because low-end GPUs have become much, much slower.

If I remember correctly, a 1060 was half the performance of the Titan X Pascal. Nowadays a 4060 is, what, 1/3rd of the 4090?

You could make that argument, I suppose. Contemporary reviews put the 1060 at roughly half the performance of a 1080 Ti / Titan in most titles, but that might be partially due to the exactly 1/2 bus width, and less so the CUDA cores. It seems that in some titles it was closer to that 1/3, which also could have been down to system bottlenecks of the time.

It comes down to cost and GPU size, though. GPUs got bigger, but they also got more expensive. The low-end GPUs are still hitting the same rough price target; the top end went wild.

GTX 1060 $300 (1280 cores)
GTX 1080 Ti $700 (3584 cores). About 3:1 cores vs about 2.3:1 cost.
Titan X Pascal at $1200 was silly even by 4090 pricing standards and wasn't worth it. Unless you meant the later-released Titan Xp, which did have the full core count at 3840; even then the gains just weren't justifiable.
RTX 4060 $300 (3072 cores)
RTX 4070 $600 (5888 cores). About 2:1 cores and 2:1 cost.
RTX 4080 $1200 (9728 cores). About 3:1 cores vs 4:1 cost (excepting the 4080 Super, since it was much later).
RTX 4090 $1600 (16384 cores). About 5:1 cores vs about 5:1 cost (annoyingly the most cost-effective, but they knew what they were doing with that).
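Running those same ratios programmatically, with the launch prices and core counts as listed above:

Python:
# Cores-vs-price ratios relative to the x60 card of each generation.
cards = {  # name: (CUDA cores, launch price in USD)
    "GTX 1060":    (1280, 300),
    "GTX 1080 Ti": (3584, 700),
    "RTX 4060":    (3072, 300),
    "RTX 4070":    (5888, 600),
    "RTX 4080":    (9728, 1200),
    "RTX 4090":    (16384, 1600),
}

def ratios(card: str, baseline: str) -> tuple[float, float]:
    cores, price = cards[card]
    base_cores, base_price = cards[baseline]
    return cores / base_cores, price / base_price

for card, base in [("GTX 1080 Ti", "GTX 1060"), ("RTX 4070", "RTX 4060"),
                   ("RTX 4080", "RTX 4060"), ("RTX 4090", "RTX 4060")]:
    c, p = ratios(card, base)
    print(f"{card} vs {base}: {c:.1f}x cores for {p:.1f}x cost")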

Then it comes down to performance scaling, which is tricky. You generally wouldn't run a 4090 at 1080p, and you can't really run a 4060 at 4K. And when it comes to DLSS and ray tracing, well, that just changes everything. I think you are right that in average rasterization the low end has dropped closer to 1/3 the FPS output at the same settings, but the cost has gone up a lot more in proportion.
 
