News We benchmarked Intel's Lunar Lake GPU with Core Ultra 9 — drivers still holding back Arc Graphics 140V performance

Admin

Administrator
Staff member

Gururu

Upstanding
Jan 4, 2024
This is a good examination given the interest in Battlemage performance. The fact that Alchemist beats it raises some skepticism about the value of a comparison right now. Time will tell (hopefully soon!), but I can't help but think that the performance observed at this time is intentional, for whatever reason.
 
Sep 24, 2024
Making this comparison based on claimed TDP instead of actual consumption is utterly ridiculous. The HX 370 in performance mode draws 54W plus RAM, versus Lunar Lake's 30W PL2 max, which includes RAM.

These are completely different chips; it doesn't make sense to compare one to the other.

PS: Intel drivers still need work; we can agree on that.
 
Making this comparison based on claimed TDP instead of actual consumption is utterly ridiculous. The HX 370 in performance mode draws 54W plus RAM, versus Lunar Lake's 30W PL2 max, which includes RAM.

These are completely different chips; it doesn't make sense to compare one to the other.

PS: Intel drivers still need work; we can agree on that.
It's why we clearly explain what we're testing and what it can mean. Yes, Strix Point has a different chassis in this case. But this is the only Strix Point laptop we currently have access to. It's still interesting that AMD doesn't wipe the competition in this case, indicating bottlenecks sometimes lie elsewhere (like with VRAM bandwidth).

We did not have the option to choose from all laptops available at retail and pick the ones we wanted to test. Instead, we have the AMD and Intel provided samples for Meteor Lake, Lunar Lake, and Strix Point. But the Intel units are both effectively the same chassis, with similar power limits, and so those numbers are quite telling.
 

DS426

Upstanding
May 15, 2024
It's good to grab these baselines on Lunar Lake to see what performance gains will come about in these games in 6 mo, 12 mo, etc. from now. Again though, these aren't purist gaming laptops, so gaming performance has lower relative importance on these models than battery life and productivity & AI apps. I'm already seeing some crazy battery life numbers out there on the internetz.

Interesting to see where the added memory bandwidth helped slingshot LNL forward. I'd love to see some good tests that illustrate how much AMD's Radeon 890M can be bottlenecked by DDR5.
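The bandwidth question above is easy to put rough numbers on: peak DRAM bandwidth is just transfer rate times bus width. A minimal sketch, where the specific memory speeds and the 128-bit bus width are illustrative assumptions rather than figures from the article:

```python
def bandwidth_gbps(mt_per_s, bus_width_bits):
    # Peak DRAM bandwidth: megatransfers/s * bytes moved per transfer.
    bytes_per_transfer = bus_width_bits / 8
    return mt_per_s * 1e6 * bytes_per_transfer / 1e9

# Assumed configs for illustration: Lunar Lake's on-package
# LPDDR5X-8533 on a 128-bit bus, versus dual-channel DDR5-5600
# in a typical laptop feeding an 890M.
lnl_bw = bandwidth_gbps(8533, 128)   # ~136.5 GB/s
ddr5_bw = bandwidth_gbps(5600, 128)  # ~89.6 GB/s
```

On those assumptions the LPDDR5X setup has roughly 50% more bandwidth to feed the iGPU, which is the kind of gap that could explain the slingshot effect in bandwidth-bound games.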

This wasn't a pointless comparison -- Jarred laid out the caveats. Many more tests across various OEMs, models, benchmarkers (tech journalists, etc.), and benchmarks will eventually reveal how different (or not) Lunar Lake really is.

Surprised it ran Black Myth: Wukong on day 1! Battlemage already looks to be way more stable than Arc was at launch. Sure, kind of to be expected but you also never know, lol.
 

Elusive Ruse

Respectable
Nov 17, 2022
Intel has had long enough to develop proper drivers. It is time we stop giving them the benefit of the doubt and treating them like industry newbies for whom consumers should act as beta testers or bankroll their venture into the GPU market.
Intel is launching yet another product with trash drivers; that's what this story should be about.
 
I'm not sure why you would test using FSR + Frame Generation when all this will do is expose the lack of compute power in more heavily threaded titles. Between that and the higher power draw, I'm not sure what useful conclusions can be made here. "Driver issues" certainly aren't exposed here, nor is any useful performance information. I get the whole "test with what you've got" approach, but you can't just ignore the significant differences, then test in a way that removes load from the GPU, and make GPU conclusions based on said testing.

Also, too many years in the graphics game, as almost every instance of 890M says 980M instead.
 

rluker5

Distinguished
Jun 23, 2014
I wonder if Lunar Lake has enough XMX hardware to do a decent job with XeSS?
XeSS gives more fps than FSR with both at Balanced in CP2077 on my A750, and it is well known that XeSS looks better than any FSR of the same vintage.
I've seen other LNL reviews using XeSS, so maybe?

Also, the Lossless Scaling program can interpolate up to the vsync refresh-rate limit nowadays, even 4x if needed, but the game feels best if your base framerate is close to 60. It really degrades into trash if you try to turn 15 fps into 60; 50-60ish into 120 is pretty decent, though, so long as you turn off motion blur.
If you had that, you would have no reason to use the grainy mess that is low-res FSR upscaling with AMD interpolation.
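The interpolation arithmetic above is simple enough to sketch; the function names here are made up for illustration, not from Lossless Scaling itself:

```python
def interpolated_fps(base_fps, factor, refresh_hz):
    # Frame interpolation multiplies the rendered framerate by `factor`,
    # but the output is capped at the display's vsync refresh rate.
    return min(base_fps * factor, refresh_hz)

def min_base_fps(refresh_hz, factor):
    # Rendered fps needed to exactly fill the refresh rate at a given factor.
    return refresh_hz / factor

# 55 rendered fps at 2x on a 120 Hz panel -> 110 fps output.
smooth = interpolated_fps(55, 2, 120)
# 15 rendered fps at 4x technically hits 60, but with quadruple the
# input latency per real frame -- the "trash" case described above.
rough = interpolated_fps(15, 4, 60)
```

This is why a ~60 fps base matters: the factor only fills in presentation smoothness, while responsiveness still tracks the rendered framerate.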

But as for a performance comparison, I agree with thestryker that adding interpolation on power-limited mobile parts only muddies the water. It is muddy enough with upscaling already. Imagine if some reviewer of Lunar Lake got just the Intel chip to triple its frames because they wanted to use Lossless Scaling with XeSS instead of the AMD-favoring FSR combo, because it looks so bad at low resolutions. All of those extra frames wouldn't mean that Intel was that much faster. Readers could be misled.
 
I've added this note to the article, FYI:

-----------
We are aware of concerns that using FSR3 upscaling and framegen could negatively impact Intel GPUs. We are in the process of conducting additional tests at 720p native and 1080p with FSR and/or XeSS — but without framegen — to provide a more complete picture of performance.
-----------

It's been a bit of a time crunch, and we'll hopefully be able to do some additional testing in the coming weeks in a variety of other games. I initially thought, "Yeah, let's enable FSR3 upscaling and framegen because it works on all GPUs and it's basically needed to get decent 1080p performance in some of these games." In retrospect, it's just one more variable that's best to eliminate — not that I think it's bad to test it, but it's also important to characterize performance without framegen/FSR as well. So that's what we're doing.
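The expanded test plan described above can be sketched as a small matrix; the field names here are hypothetical, purely to show how the framegen variable gets isolated:

```python
# Hypothetical sketch of the expanded test matrix: 720p native,
# 1080p upscaled without framegen, and the original FSR3+framegen runs.
test_matrix = [
    {"res": "720p",  "upscaler": None,   "framegen": False},
    {"res": "1080p", "upscaler": "XeSS", "framegen": False},
    {"res": "1080p", "upscaler": "FSR3", "framegen": False},
    {"res": "1080p", "upscaler": "FSR3", "framegen": True},  # original runs
]

# Comparing the two FSR3 rows isolates framegen as the only variable.
fsr3_runs = [t for t in test_matrix if t["upscaler"] == "FSR3"]
```

Holding the upscaler constant across a pair of runs is what makes the with/without-framegen delta attributable to framegen alone.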
 
I'm not sure why you would test using FSR + Frame Generation when all this will do is expose the lack of compute power in more heavily threaded titles. Between that and the higher power draw, I'm not sure what useful conclusions can be made here. "Driver issues" certainly aren't exposed here, nor is any useful performance information. I get the whole "test with what you've got" approach, but you can't just ignore the significant differences, then test in a way that removes load from the GPU, and make GPU conclusions based on said testing.

Also, too many years in the graphics game, as almost every instance of 890M says 980M instead.
Sigh... 890M, 980M... the latter seemed to make sense in my brain while writing! LOL. I've fixed all notes about the wrong non-existent Radeon 980M now. Thanks.

I also screwed up on the Intel TFLOPS initially, which were both half of what they should have been.
 

Pierce2623

Prominent
Dec 3, 2023
How are you claiming the Intel parts only have 2.1 TFLOPS and 2.3 TFLOPS of GPU compute? That’s literally half the numbers everyone else is giving. Then you go on to say they match a chip with 6 TFLOPS of GPU compute.
 
How are you claiming the Intel parts only have 2.1 TFLOPS and 2.3 TFLOPS of GPU compute? That’s literally half the numbers everyone else is giving. Then you go on to say they match a chip with 6 TFLOPS of GPU compute.
You were literally a minute too late on posting this. It has already been fixed. I put in the wrong shader count, because I got confused on the various Intel architectures. MTL is 128 shaders per Xe-core, while LNL is 64 shaders per Xe-core but each one is twice as potent (SIMD16 instead of SIMD8).
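The arithmetic behind that fix can be sketched as follows. The 8-Xe-core count for the Arc 140V and the ~2.05 GHz boost clock are assumptions for illustration; what matters is that both layouts net 128 FP32 lanes per Xe-core:

```python
def fp32_tflops(xe_cores, fp32_lanes_per_core, clock_ghz):
    # Peak FP32 throughput = lanes * 2 ops/clock (fused multiply-add) * clock.
    return xe_cores * fp32_lanes_per_core * 2 * clock_ghz / 1000

# Meteor Lake (Xe-LPG): 16 vector engines x SIMD8 = 128 lanes per Xe-core.
# Lunar Lake (Xe2):      8 vector engines x SIMD16 = 128 lanes per Xe-core.
correct = fp32_tflops(8, 128, 2.05)  # ~4.2 TFLOPS
# Counting Lunar Lake as 64 SIMD8-style shaders halves the result,
# reproducing the mistaken figure.
halved = fp32_tflops(8, 64, 2.05)    # ~2.1 TFLOPS
```

The SIMD8-to-SIMD16 change is exactly why a naive "shader count" comparison across Intel generations goes wrong by a factor of two.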
 

Mama Changa

Great
Sep 4, 2024
This is a good examination given the interest in Battlemage performance. The fact that Alchemist beats it raises some skepticism about the value of a comparison right now. Time will tell (hopefully soon!), but I can't help but think that the performance observed at this time is intentional, for whatever reason.
Hardware Canucks tested Lunar Lake against competitors and limited power to as close to 30W as possible. Xe2 performed much better in their tests than Xe in general and beat the 880M in many games.
 
For those following, we have updated all the gaming test results, supplementing with 720p native, 1080p upscaled (XeSS and FSR3 where applicable), and finally the original 1080p FSR3+framegen.

Turns out framegen basically didn't work on Lunar Lake. We didn't know, because we didn't test without framegen (initially). But Meteor Lake gets about 50% higher perf from framegen, Strix Point got about 60% higher, and Lunar Lake... was the same in CP77 and 17% faster in Black Myth. 🤷‍♂️

FSR3 framegen should just run as a GPU compute process. It runs on everything from GTX 10-series and Polaris/Vega GPUs up through modern stuff. Why it fails with LNL right now is a mystery, probably related in some way to drivers. I expect it will be fixed in future drivers, though, because there's no good reason I can think of for Intel to not want users to have the option of using AMD's tech.
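The framegen anomaly above is just a percent-uplift calculation; the fps pairs below are hypothetical values chosen only to reproduce the quoted uplifts, not measured results:

```python
def framegen_uplift_pct(base_fps, framegen_fps):
    # Percent gain from enabling frame generation over the base run.
    return (framegen_fps / base_fps - 1) * 100

# Hypothetical fps pairs matching the uplifts described above.
meteor_lake = framegen_uplift_pct(30, 45)  # ~50% gain
strix_point = framegen_uplift_pct(40, 64)  # ~60% gain
lunar_lake = framegen_uplift_pct(42, 42)   # ~0% gain, the CP77 anomaly
```

A healthy framegen implementation should land well above zero, which is why the flat Lunar Lake result points at a driver-side failure rather than a simple performance gap.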
 
For those following, we have updated all the gaming test results, supplementing with 720p native, 1080p upscaled (XeSS and FSR3 where applicable), and finally the original 1080p FSR3+framegen.
Appreciate the updated testing as the results now make a lot more sense. I don't think anyone really expects LNL to necessarily be at the top due to the design, but they should be competitive.
Turns out framegen basically didn't work on Lunar Lake. We didn't know, because we didn't test without framegen (initially). But Meteor Lake gets about 50% higher perf from framegen, Strix Point got about 60% higher, and Lunar Lake... was the same in CP77 and 17% faster in Black Myth. 🤷‍♂️

FSR3 framegen should just run as a GPU compute process. It runs on everything from GTX 10-series and Polaris/Vega GPUs up through modern stuff. Why it fails with LNL right now is a mystery, probably related in some way to drivers. I expect it will be fixed in future drivers, though, because there's no good reason I can think of for Intel to not want users to have the option of using AMD's tech.
That's pretty wild and also explains the prior results.
 

Eximo

Titan
Ambassador
Intel has had long enough to develop proper drivers. It is time we stop giving them the benefit of the doubt and treating them like industry newbies for whom consumers should act as beta testers or bankroll their venture into the GPU market.
Intel is launching yet another product with trash drivers; that's what this story should be about.

We want to give them the benefit of the doubt so they become a third player in the GPU market, which would hopefully drive cost competition.
 

watzupken

Reputable
Mar 16, 2020
In the desktop segment, the fastest integrated graphics remains the AM5 AMD 8700G with the 780M.
I'll just leave this here:
720p medium, native 100%: 74 vs 88 fps
1080p medium, XeSS Quality: 60 vs 59 fps
1080p medium, native: 51 vs 54 fps
It is not a like-for-like comparison though. You need to consider that the 8700G is a 65W TDP CPU, and may even draw more power to ensure both the CPU and iGPU run at optimal speed. For mobile devices like laptops or handheld PCs, you won't have this luxury. I think Intel's Battlemage looks like a reasonable upgrade over Alchemist, although I think they still have a fair bit of work to do optimizing their drivers. Good to see Intel rise to the challenge and come back with a good chip solution.
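One way to sanity-check comparisons across such different power budgets is efficiency rather than raw fps. A minimal sketch; the fps values are purely illustrative and which chip scored what is an assumption, since real package power varies by workload:

```python
def fps_per_watt(fps, watts):
    # Crude efficiency metric for comparing parts at different power limits.
    return fps / watts

# Purely hypothetical pairing: a 65W-class desktop APU at 74 fps
# versus a ~30W mobile chip at 88 fps in the same 720p medium scene.
desktop_eff = fps_per_watt(74, 65)  # ~1.14 fps/W
mobile_eff = fps_per_watt(88, 30)   # ~2.93 fps/W
```

Even when absolute fps is close, the efficiency gap at a third of the power is the more meaningful story for mobile parts.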
 
Sep 25, 2024
It is not a like for like comparison though. You need to consider that the 8700G is a 65W TDP CPU, and may even draw more power to ensure both the CPU and iGPU run at optimal speed. For mobile devices like laptops or handheld PC, you won't have this luxury. I think Intel's Battlemage looks like a reasonable upgrade over Alchemist, although I think they still have a fair bit of work to optimized their drivers. Good to see Intel rise up to the challenge and back with a good chip solution.

I agree. I just wanted to remind you that even the same integrated graphics card can perform differently depending on the socket. And yes, it's not even 65 watts, but 85-87 watts. I simply don't like Intel's claims that their new integrated graphics card is faster than all the others. Perhaps they "forgot" to mention that it's only true for laptops with a soldered processor.
 

watzupken

Reputable
Mar 16, 2020
I agree. I just wanted to remind you that even the same integrated graphics card can perform differently depending on the socket. And yes, it's not even 65 watts, but 85-87 watts. I simply don't like Intel's claims that their new integrated graphics card is faster than all the others. Perhaps they "forgot" to mention that it's only true for laptops with a soldered processor.
It's just marketing, so don't take what they claim too seriously. It is no doubt fast for a low-power SoC, but I do also think the fast LPDDR5 may be one of the contributors to the performance bump. It is good to have another alternative to low-power ARM-based laptops if you are looking for reasonably long battery life. I don't believe it will outdo ARM-based laptops when it comes to battery life (except in edge cases like a 100% video playback/streaming use case), but at least it's a step in the right direction.
 

Bikki

Reputable
Jun 23, 2020
I notice the Lunar Lake GPU doesn't increase core count yet delivers roughly a 30% uplift, versus AMD, which pushed for 50% more CUs while gaining only about 20%. This is similar to how Apple spends its transistor budget on the CPU pipeline, which becomes wider each year, gaining performance without increasing the number of cores.

There is certainly friction between desktop GPUs, which have all the memory and power they want, and iGPUs; more CUs just doesn't make sense here.
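The "more CUs doesn't help" intuition can be sketched with a roofline-style cap: achieved throughput is the lesser of raw compute and what memory can feed. Both the bandwidth figure and the arithmetic intensity below are made-up illustrative assumptions:

```python
def effective_tflops(compute_tflops, mem_bw_gbps, flops_per_byte=16):
    # Roofline-style cap: achieved throughput is the lesser of raw compute
    # and what the memory system can supply at a given arithmetic intensity.
    mem_limited = mem_bw_gbps * flops_per_byte / 1000  # GB/s -> TFLOPS
    return min(compute_tflops, mem_limited)

# Assuming ~120 GB/s of LPDDR5X and 16 FLOPs per byte of traffic,
# the memory ceiling sits at 1.92 TFLOPS -- so adding 50% more
# compute units changes nothing once a workload is bandwidth-bound.
baseline = effective_tflops(6, 120)
more_cus = effective_tflops(9, 120)  # same effective result
```

Under those assumptions, spending transistors on wider, better-fed cores beats adding CUs that the memory system can't keep busy.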
 
Aug 5, 2024
I'm quite impressed by the performance. I was running an Nvidia GTX 1070 8GB until January this year, and Cyberpunk was one of my favourite games of the last few years, so I have a lot of experience with how it runs after several playthroughs.
To see an iGPU matching the same FPS and settings (1080p medium, Quality upscaling, at around 50 fps) as my dedicated 1070 is surprising and impressive; maybe we will eventually reach a point in the next decade where low-end GPUs are overtaken and made redundant by integrated GPUs built into CPUs.