Intel takes down AMD in our integrated graphics battle royale — still nowhere near dedicated GPU levels, but uses much less power

Admin

We've tested 24 games on Intel's latest Lunar Lake laptop to show how it stacks up against the previous-generation Meteor Lake as well as AMD's Ryzen AI 'Strix Point' processors when it comes to graphics performance. We also included results from an RTX 3050 Ti gaming laptop as a point of reference. Integrated graphics solutions continue to improve, and for the first time we can recall, Intel comes out with the win — barely.

Intel takes down AMD in our integrated graphics battle royale — still nowhere near dedicated GPU levels, but uses much less power : Read more
 
Thanks for the review, Jarred.

Is the driver support still a meme? I know you mentioned in passing that you've told Intel about the glitchy games, but what about "normal day-to-day usage" stuff? Like changing resolutions, or connecting and disconnecting displays?

I ask, because work laptops that depend on these GPUs are trash when it comes to connecting and disconnecting devices on the fly, so I'm curious if there's any of that with these ones.

What about browser hardware acceleration and such? Is the iGPU (on both) good for encoding?

Regards.
 

Notton

The dream come true would be an APU with >256-bit memory bus, but using memory-on-package like LNL.

I assume it'll be too expensive.
Budget gaming laptops equipped with an i5 + 3050 Ti are a tough nut to crack on value. You can buy new ones for as low as US$500 when they go on sale.
 
Your title is misleading. Reading it, you would think Intel is way more efficient than AMD, when that is not the case. It is a draw at best, on a better node.

What you are actually saying is that iGPUs draw far less power than a dGPU, not that Intel is more efficient than AMD. Did you even review your article before posting it?

"Intel takes down AMD in our integrated graphics battle royale — still nowhere near dedicated GPU levels, but uses much less power"

 

Notton

That's called Strix Halo. I think AMD will announce it at CES, the first week of January.
If you are talking about the supposedly leaked pictures of Strix Halo, the smaller squares are chiplets, not DRAM. If AMD had a working memory-on-package technology, they'd be advertising the crap out of it.
 

tek-check

Thanks for the benchmarks, @JarredWaltonGPU .

What was the DDR speed of the memory in the AMD laptop? I recently read they enabled up to LPDDR5X-8000 on it, which might help close the gap vs. Lunar Lake.

BTW, I noticed the last image in the Cyberpunk 2077 carousel is the same as the 4th one from the Control carousel.
Current laptops are at 7500; future machines could ship with 8000.
 

bit_user

If you are talking about the supposedly leaked pictures of Strix Halo, the smaller squares are chiplets, not DRAM. If AMD had a working memory-on-package technology, they'd be advertising the crap out of it.
Well, HBM is on-package and they're using it in the MI300A, which combines both CPU and GPU chiplets. So, technically, they do have CPUs with on-package memory.

To be honest, I seem to recall that Strix Halo was using on-package LPDDR5X, but I'm not sure about that part. It definitely has a big iGPU and 256-bit memory interface, however.

According to that article, it'll run at the exact same LPDDR5X-8533 speed as Lunar Lake is using. The main benefit of on-package is just power and area savings. In some cases, I could imagine it lets you clock a little higher, too.
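For a rough sense of why the 256-bit interface matters more than the on-package part, here's a quick back-of-the-envelope peak-bandwidth calculation. This is my own sketch, using the speeds and bus widths mentioned in this thread, and the Strix Halo line is based on the rumor above:

```python
# Peak memory bandwidth estimate: transfer rate (MT/s) x bus width (bytes).
# Speeds and widths are the ones discussed above; Strix Halo figures are rumored.
configs = {
    "Lunar Lake (128-bit LPDDR5X-8533)":  (8533, 128),
    "Strix Point (128-bit LPDDR5X-7500)": (7500, 128),
    "Strix Halo (256-bit LPDDR5X-8533)":  (8533, 256),
}

for name, (mt_per_s, bus_bits) in configs.items():
    gb_per_s = mt_per_s * 1e6 * (bus_bits / 8) / 1e9  # transfers/s * bytes -> GB/s
    print(f"{name}: ~{gb_per_s:.0f} GB/s")
```

Doubling the bus width roughly doubles the bandwidth available to the iGPU, which is the whole point of the Halo design regardless of where the DRAM physically sits.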
 
Thanks for the review, Jarred.

Is the driver support still a meme? I know you mentioned in passing that you've told Intel about the glitchy games, but what about "normal day-to-day usage" stuff? Like changing resolutions, or connecting and disconnecting displays?

I ask, because work laptops that depend on these GPUs are trash when it comes to connecting and disconnecting devices on the fly, so I'm curious if there's any of that with these ones.

What about browser hardware acceleration and such? Is the iGPU (on both) good for encoding?

Regards.
I haven't noticed any serious issues with the drivers in regular use, other than compatibility/performance with certain games. I am sure there are far more games that still have issues running on Intel than on AMD, but determining what works and what doesn't is beyond the scope of my testing. One out of 24 games failed, a second had poor FPS consistency... and maybe three more ran too slow to be playable, but that seemed to mostly be the case on AMD as well as Intel for those games (MW5C, FFXVI, GOWR, and APTR — the last ran okay on AMD). Intel has had good video decode for a long time, but I'm not sure what specific things you're referring to. iGPUs are still slower than dGPUs, so anything that really needs graphics oomph will run better with a dGPU. Using external displays has worked fine for me as well. But if there's something specific you'd like me to check, LMK.

Thanks for the benchmarks, @JarredWaltonGPU .

What was the DDR speed of the memory in the AMD laptop? I recently read they enabled up to LPDDR5X-8000 on it, which might help close the gap vs. Lunar Lake.

BTW, I noticed the last image in the Cyberpunk 2077 carousel is the same as the 4th one from the Control carousel.
LPDDR5X-7500 on both Strix Point and Meteor Lake, LPDDR5X-8533 on LNL. As noted in the text, that's not something we can really control and it is a difference, but that's also part of the benefit with LNL: it has on-package DRAM selected by Intel to reach a certain level of bandwidth and performance.

Your title is misleading. Reading it, you would think Intel is way more efficient than AMD, when that is not the case. It is a draw at best, on a better node.

What you are actually saying is that iGPUs draw far less power than a dGPU, not that Intel is more efficient than AMD. Did you even review your article before posting it?

"Intel takes down AMD in our integrated graphics battle royale — still nowhere near dedicated GPU levels, but uses much less power"
Of course I did, and everything after the dash is about how the iGPU compares to a dGPU, not to AMD. It's "nowhere near dGPU levels" but "uses much less power [than a dGPU]," if you really need me to spell it out for you. The whole discussion of power was specifically about how all three iGPUs ran at the same power level.

Have they fixed the driver mess? I have tried to play Dragon Age: Origins on my Meteor Lake laptop, but it never works right.
Origins, or The Veilguard? LOL. I haven't played DAO in ages, but drivers have been much improved over the past few years. Given you have an MTL laptop, though, I wouldn't expect LNL to be any better. So if the latest 6297 drivers still have the problem you're referring to, I'm sure it applies to LNL as well. I thought about testing Veilguard, but I already had spent waaay too long on this. (I retested LNL and MTL at least twice each, with different drivers.)
 

Albert.Thomas

Origins, or The Veilguard? LOL. I haven't played DAO in ages, but drivers have been much improved over the past few years. Given you have an MTL laptop, though, I wouldn't expect LNL to be any better.
Dragon Age: Origins. In theory, this game should be super easy for Intel Graphics to run. For years I complained about the issues, and Intel fixed them with Rocket Lake/Tiger Lake. Even on only XE32, it ran very well: triple-digit framerates!

However, since then the problem has returned. When you enter character creation, it doesn't render *anything*; it literally just shows the loading screen. You can tell the creation screen has loaded by the sounds made when you click things, but you can't see what you're clicking, which effectively makes it impossible to get past the character creation screen.

The cutscene that loads before the character creation screen doesn't always render either; sometimes it will play and you'll just see the loading screen but hear the cutscene audio. Sometimes alt-tabbing/alt-entering will fix this, but ultimately the game is broken on Intel's integrated graphics.
 

phxrider

Now test them without disabling the "proprietary" features and report on what works best in the real world.

People buy for real world performance, not theoretical compute power.

FSR is not proprietary; it's open and uses no special hardware unique to one platform or manufacturer.
 
Appreciate the testing, and the results are about what I expected. It'll be interesting to see how much performance is left on the table by the low GPU clocks, should discrete Battlemage materialize.

For any future testing, I think it would be interesting to roughly emulate the handheld experience with 15W power limits (though going that low would probably require further adjustments for AMD). Another thing I haven't seen anyone address yet is the difference between XeSS and FSR when actually using XMX mode. If XMX mode looks enough better, being able to drop to a more aggressive upscaling setting might lead to better performance without an image quality loss.
 
Now test them without disabling the "proprietary" features and report on what works best in the real world.

People buy for real world performance, not theoretical compute power.

FSR is not proprietary; it's open and uses no special hardware unique to one platform or manufacturer.
Real performance is native performance. Everything else is software to obscure the actual GPU performance. Just because FSR was made open source (eventually) doesn't mean it's not proprietary. It is literally made by AMD, and to think that AMD would create code that runs equally on Nvidia and Intel hardware is naive in the extreme.

I've seen FSR3 fail to work in games in the past, on certain non-AMD GPUs (Black Myth Wukong immediately comes to mind). I've also seen it provide a far greater performance boost on AMD GPUs than on Intel or Nvidia GPUs. There are games where it seems to work about equally well on hardware from all three manufacturers, but that should not be the default assumption.

But the real issue isn't FSR itself. It's AFMF2, Radeon Boost, and RSR. Using any of those and pretending that performance comparisons would be "fair" would be like having Cinebench render one quarter of the pixels on a CPU and then denoise for a final result and claiming that's the same as rendering every pixel.

If I used DLSS in games where only it was supported, or forced AMD to use XeSS instead of FSR (because you can use XeSS in DP4a mode on non-XMX hardware), I'd expect equally skewed results. That's also why I tested at 720p as the baseline: it's the render resolution used for 1080p upscaling in most cases.

Anyway, I do plan on looking at perf with HYPR-RX enabled, along with all the extra stuff... but the last time I tried that was not at all a rousing success. (That was maybe 3-4 months ago.) You also can't use FrameView, OCAT, or PresentMon to properly capture performance data with AFMF2 enabled, which is another problem.
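For anyone curious about the 720p baseline, here's a quick sketch of the render resolutions the common upscaler quality modes produce for a 1080p output. The scale factors below are the commonly published FSR 2 / DLSS presets; individual games and upscalers can deviate slightly:

```python
# Common upscaler quality modes and their approximate per-axis render-scale factors.
# These match the widely published FSR 2 / DLSS presets; some games use custom ratios.
modes = {
    "Quality":           1 / 1.5,  # ~0.667x per axis
    "Balanced":          1 / 1.7,  # ~0.588x per axis
    "Performance":       1 / 2.0,  # 0.5x per axis
    "Ultra Performance": 1 / 3.0,  # ~0.333x per axis
}

out_w, out_h = 1920, 1080
for mode, scale in modes.items():
    w, h = round(out_w * scale), round(out_h * scale)
    print(f"{mode}: renders {w}x{h} for a {out_w}x{out_h} output")
```

Quality mode at 1080p lands right at 1280x720, which is why native 720p numbers are a reasonable proxy for upscaled 1080p performance.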
 
I haven't noticed any serious issues with the drivers in regular use, other than compatibility/performance with certain games. I am sure there are far more games that still have issues running on Intel than on AMD, but determining what works and what doesn't is beyond the scope of my testing. One out of 24 games failed, a second had poor FPS consistency... and maybe three more ran too slow to be playable, but that seemed to mostly be the case on AMD as well as Intel for those games (MW5C, FFXVI, GOWR, and APTR — the last ran okay on AMD). Intel has had good video decode for a long time, but I'm not sure what specific things you're referring to. iGPUs are still slower than dGPUs, so anything that really needs graphics oomph will run better with a dGPU. Using external displays has worked fine for me as well. But if there's something specific you'd like me to check, LMK.
Thanks for the reply, Jarred.

So, my use case is just OBS and running a game to record a session. I've tried using the iGPU + dGPU in my laptop and it works surprisingly well, so if you have either of these chips paired with a discrete GPU in a laptop, I'd love to see how each handles iGPU encoding with dGPU rendering while playing a game or two. Or how the performance changes when using just the iGPU for both; I've tried that with my 5900HX and its Vega iGPU can't keep up :D

Regards.
 

btmedic04

Impressive to see a solid iGPU from Intel. I just worry, with all the turmoil Intel is facing, that they will axe Arc for financial reasons. We consumers need a third GPU vendor now more than ever.
 

Albert.Thomas

It is literally made by AMD, and to think that AMD would create code that runs equally on Nvidia and Intel hardware is naive in the extreme.
Uh, about that. I can't speak to FSR 3, but weren't FSR 1 and 2 and Nvidia's NIS slightly modified versions of the same base code? If that's true, I highly doubt that there's gonna be much of a bias, if any, towards AMD hardware.
 

baboma

>if you're only intending on modest gaming on a laptop, and you have good internet (with no data cap), you'd be better off with a thin and light laptop and an Nvidia GeForce Now subscription.

It's an interesting option, but I don't think the premise (modest gamer) matches the conclusion (a monthly sub for higher gfx fidelity). People who want higher gfx badly enough are no longer "modest" gamers. That, and I dare say GeForce Now's various constraints (game availability, etc.) make it unpalatable for most modest/casual gamers.

I think a good definition of a "modest" gamer is someone for whom gfx fidelity isn't the be-all and end-all. I get that that's all the rage now for consumer DIY PCs, as gaming is the only activity requiring ever-faster HW, and HW sites like THW and others cater to that want. But not all PC enthusiasts are gamers, and not all gamers are "high gfx" gamers.

>Is there a market for such a monster iGPU [Strix Halo], running Windows? We'll find out, if or when AMD begins shipping Strix Point Halo.

Rumors have leaked that Halo is destined for (slim) mobile workstations w/ added AI functionality, which makes sense from both a power-use and price standpoint. Strix Point is already positioned as flagship, and Halo will push the upper end out of reach for normal consumer products, for both power and price. Consumers already have relatively cheap dGPUs for gaming laptops, and Halo would have a very narrow niche in that ecosystem. But as always, new tech trickles downhill, and we may see Halo in '26 or beyond.

>As for Intel, Lunar Lake takes the podium for 'pure' Windows integrated graphics right now.

Sites like THW are enamored with "best," and most every content piece revolves around that notion. But more important for most users are availability and price point. Unfortunately, Xe2 is only available in LNL this gen, which will cover '24 and most of '25, and LNL, like Strix Point, is positioned as a flagship and priced accordingly. I think more relevant to mainstream users would be the Xe+ in upcoming mobile ARL parts. Xe2 will migrate downward when Xe3 comes online next year, but that's next year.

>there are rumors and leaks that suggest Battlemage desktop cards may only offer up to 24 Xe-cores. That means, with a 2.5 GHz clock as an estimate, such a GPU would offer 15.4 teraflops of compute. If correct, that would be roughly at the level of an RTX 4060 Ti

My hope is that the rumors are false, and that Intel has enough sense to shelve its ill-advised desktop dGPU venture, given the need to focus on its core competency. There's a conflict between my desire as a PC enthusiast for more competition in the dGPU space and my concern for Intel's well-being as an investor. But I think most sensible projections wouldn't give Intel much of a chance in dGPUs anyway, so the attempt is a fool's errand regardless. There are better fish (e.g. DCAI) to fry.
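On the quoted teraflops figure, the arithmetic appears to be the usual peak-FP32 estimate. A quick sketch of how that number falls out, assuming 128 FP32 lanes per Xe core and counting an FMA as two operations (both assumptions on my part, based on current Xe-core designs):

```python
# Peak FP32 estimate for the rumored 24 Xe-core Battlemage part.
xe_cores = 24
fp32_lanes_per_core = 128     # assumption, based on current Xe-core designs
flops_per_lane_per_clock = 2  # fused multiply-add counts as two operations
clock_ghz = 2.5               # the clock estimate quoted above

tflops = xe_cores * fp32_lanes_per_core * flops_per_lane_per_clock * clock_ghz / 1000
print(f"~{tflops:.1f} TFLOPS")  # ~15.4 TFLOPS, matching the quoted figure
```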
 

Mama Changa

Impressive to see a solid iGPU from Intel. I just worry, with all the turmoil Intel is facing, that they will axe Arc for financial reasons. We consumers need a third GPU vendor now more than ever.
Arc in APUs is as safe as banks. Panther Lake is getting Xe3, based on Celestial, and Nova Lake is targeting Druid, or Xe4. Discrete Arc needs to sell well to be safe, though. Intel fluffing about with discrete Battlemage cards is not helping, and targets seem to have been greatly downgraded over the last year. I don't expect top-tier Battlemage to beat the 4070, so it needs to be cheap, but Intel can't afford to subsidise GPUs with their horrendous financials.

Still not sure why we are comparing these at Lunar Lake's power level when Strix is a 12-core APU intended to run at higher power. Strix has no 4+4 option to make it fairer, so the reduced TDP makes it worse for AMD. The Strix HX 370 is not a U-class part. Lunar Lake proves itself very capable in the U class, though. But I'm not buying an HX 370 laptop gimped to 28W; I'm buying Lunar Lake at that power. I want to unleash the full capabilities of the HX 370, so I want a TDP in the 35-45W range.
 

watzupken

I think I have to give it to Intel for their GPU efforts. In two generations, they seem to have closed the gap with AMD's iGPUs by quite a fair bit. Essentially, AMD is just slacking off because there is no serious competition in the iGPU space. However, I feel Intel is not out of the woods yet, because:
1. Their GPU performance is still very inconsistent, i.e. doing very well in some games and terribly in others.
2. In the YouTube comparison by Hubwood, I observed that the frame time graph swings a lot more than on AMD's iGPU, and the video appears more juddery as a result, despite the higher average and low FPS (see the toy example below for what I mean).
3. Lastly, with Nvidia joining the ARM siege on x86 CPUs, both Intel and AMD are going to face a formidable and very wealthy foe.

So it will be interesting to see what happens next year.
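To illustrate the frame-time point with a toy example (made-up numbers, not measured data): a trace can win on both average and 1% low FPS while still swinging enough frame to frame to feel juddery.

```python
# Two synthetic frame-time traces in milliseconds. Purely illustrative numbers.
steady = [20.0] * 100        # flat 50 FPS, every frame identical
swingy = [12.0, 19.0] * 50   # higher avg and 1% low FPS, but constant swings

def avg_fps(t):
    return 1000 * len(t) / sum(t)

def one_percent_low_fps(t):
    worst = sorted(t, reverse=True)   # slowest frames first
    n = max(1, len(t) // 100)         # worst 1% of frames
    return 1000 / (sum(worst[:n]) / n)

def max_swing_ms(t):
    return max(abs(a - b) for a, b in zip(t, t[1:]))  # biggest frame-to-frame jump

for name, trace in [("steady", steady), ("swingy", swingy)]:
    print(f"{name}: avg {avg_fps(trace):.1f} FPS, "
          f"1% low {one_percent_low_fps(trace):.1f} FPS, "
          f"max swing {max_swing_ms(trace):.1f} ms")
```

The swingy trace beats the steady one on both traditional FPS metrics yet oscillates 7 ms every frame, which is exactly the kind of thing a frame-time graph exposes and the averages hide.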
 

watzupken

Still not sure why we are comparing these at Lunar Lake's power level when Strix is a 12-core APU intended to run at higher power. Strix has no 4+4 option to make it fairer, so the reduced TDP makes it worse for AMD. The Strix HX 370 is not a U-class part. Lunar Lake proves itself very capable in the U class, though. But I'm not buying an HX 370 laptop gimped to 28W; I'm buying Lunar Lake at that power. I want to unleash the full capabilities of the HX 370, so I want a TDP in the 35-45W range.
This is true, and each has its trade-offs. To me, AMD's APUs are probably the best of both worlds, i.e. a reasonable balance between power consumption and performance. It is clear that Intel went all out to win the efficiency race this time around, forgoing the performance crown they usually chase. So while Lunar Lake is a good CPU, it may not have the processing power when you need it, due to the lack of threads. It will be fine for most users, but not for everyone, so it boils down to individual use case.
 

Bikki

Thank you Jarred Walton, fantastic piece as always.

It's clear that the LNL iGPU is better than Strix, in both performance and efficiency. Handheld gaming PCs have gained a lot of traction lately among people who play PC games. This market hasn't seen much of Intel, mainly because AMD has simply been the better choice. But now the tables have turned; let us witness the influx of Intel gaming handhelds. The competition is heating up!
 
I'm sorry, but why on earth did you test using an AMD laptop with only 8GB of RAM (in a single-channel configuration, to boot)? That completely nullifies any and all benchmarks using that laptop. Windows isn't even very happy on its own with only 8GB... let alone running on only 8GB on an iGPU system that is going to need that RAM as well! Absolutely insane choice.