News Intel is looking into CPU overhead associated with Arc GPUs on older chips

There are two things Intel needs to do for its GPUs:
1. Optimize for older CPUs, and
2. Improve performance consistency - I think it is evident that the average and 1% low FPS can fluctuate a lot more on Intel GPUs. So while the GPU appears to be doing well when looking at average FPS, the user experience can be more stuttery than I'd like.

It is still a great-value dGPU, but it needs further refinement. I do hope they mitigate these two issues, mostly through future driver updates.
 
There are two things Intel needs to do for its GPUs:
1. Optimize for older CPUs, and
Isn't the number-one complaint many people have about x86 that its legacy support is holding it back and making it inefficient?!
Why would we want that for GPUs as well?!
Older CPUs (the ones that can actually handle this tier of GPU) will be phased out fast enough that it won't affect them much.
 
It'd be nice if Intel launched the B380 and B310. There's zero stock for those locally, and the B580/570 carry an unreasonable price premium.
I just need something with 4 display outputs and AV1 decode/encode, which my R7 5800X3D does not offer.
 
TechYesCity tested this with a 10400T against comparable Nvidia and AMD GPUs. Nvidia did better, but the RX 7600 was within 1% of the B580's performance differential from the 9800X3D. I don't think the difference in either case was more than 10%, so this is apparently an AMD CPU issue more than a B580 issue, since if it were an Arc issue it would also be a Radeon issue.

Nvidia did a good thing with their driver-based CPU load distribution back in the Kepler days.
 
TechYesCity tested this with a 10400T against comparable Nvidia and AMD GPUs. Nvidia did better, but the RX 7600 was within 1% of the B580's performance differential from the 9800X3D. I don't think the difference in either case was more than 10%, so this is apparently an AMD CPU issue more than a B580 issue, since if it were an Arc issue it would also be a Radeon issue.
It's not an AMD CPU issue at all; it's a problem with the drivers, the design, or a combination of both on Intel Arc cards. It doesn't appear in every game, and only appears when CPU limited. When HUB originally contacted Intel about it, they confirmed the findings.
 
There are two things Intel needs to do for its GPUs:
1. Optimize for older CPUs, and
2. Improve performance consistency - I think it is evident that the average and 1% low FPS can fluctuate a lot more on Intel GPUs. So while the GPU appears to be doing well when looking at average FPS, the user experience can be more stuttery than I'd like.

It is still a great-value dGPU, but it needs further refinement. I do hope they mitigate these two issues, mostly through future driver updates.
There's no such thing as optimizing for older CPUs here. The problem this article refers to is Arc cards creating a CPU bottleneck where other GPUs do not. I would not be surprised if the performance consistency issue is related to whatever is causing this.

edit: I'm also not sure it's a problem that can be entirely fixed, as they likely would have already done so. This may be similar to the design issues on Alchemist which held back performance. If it can't be fixed in drivers, the fix may have to wait for Celestial (or maybe Druid, though it has been a known issue since Alchemist).
 
It's not an AMD CPU issue at all; it's a problem with the drivers, the design, or a combination of both on Intel Arc cards. It doesn't appear in every game, and only appears when CPU limited. When HUB originally contacted Intel about it, they confirmed the findings.
HUB is trying too hard to create a narrative. Gamers Nexus did not find a significant difference between the 9800X3D, 12400 and 5600X when Battlemage cards were GPU limited (as they will be almost all of the time), except for the occasional shortcomings of the 5600X:
https://youtu.be/m9uK4D35FlM?t=1034
Tech Yes City found what I mentioned in my previous post:
https://youtu.be/mVC2eP9xmXQ?t=316
If you take in all of the testing to see the whole picture, the worst performance drops come from older Ryzen CPUs compared to the fastest CPUs on Arc GPUs, and, when tested, AMD GPUs with budget Intel CPUs show an equivalent drop-off to Arc.

The presented evidence supports this if you include all reputable sources that have done first-hand testing.
The lopsided evidence that exists is largely due to HUB cherry-picking the worst-case Battlemage performance drops with weaker CPUs, and those worst cases happened to be with Ryzen chips only, and only with Arc compared to Nvidia, not Radeon.

HUB has all sorts of evidence of just how poorly older Ryzens perform in gaming when they are not helped by Nvidia's driver-based enhanced multithreading, but no evidence that the more common older budget Intel CPUs suffer from the same problem with Arc any more than they would when paired with a Radeon. It would be a more compelling argument that it is the fault of Arc and not Ryzen if they could show that Arc is the culprit and not Ryzen, don't you think? But they can only show the bad performance when the two are paired in scenarios specifically taxing on the Ryzen.

Do I think Intel GPUs fare worse with cheap CPUs than Radeon? Yes, but the difference is likely nowhere near what HUB or others mining AMClicks insinuate, and it affects users less than implied because budget GPUs are generally run in heavily GPU-limited scenarios. I also think it is fitting that HUB ends up making the case that old Ryzens game poorly in their attempt to portray Arc in a bad light. But they have gotten many more clicks than they would have by showing the more boring big picture.
 
HUB is trying too hard to create a narrative. Gamers Nexus did not find a significant difference between the 9800X3D, 12400 and 5600X when Battlemage cards were GPU limited (as they will be almost all of the time), except for the occasional shortcomings of the 5600X:
https://youtu.be/m9uK4D35FlM?t=1034
Tech Yes City found what I mentioned in my previous post:
https://youtu.be/mVC2eP9xmXQ?t=316
If you take in all of the testing to see the whole picture, the worst performance drops come from older Ryzen CPUs compared to the fastest CPUs on Arc GPUs, and, when tested, AMD GPUs with budget Intel CPUs show an equivalent drop-off to Arc.

The presented evidence supports this if you include all reputable sources that have done first-hand testing.
The lopsided evidence that exists is largely due to HUB cherry-picking the worst-case Battlemage performance drops with weaker CPUs, and those worst cases happened to be with Ryzen chips only, and only with Arc compared to Nvidia, not Radeon.

HUB has all sorts of evidence of just how poorly older Ryzens perform in gaming when they are not helped by Nvidia's driver-based enhanced multithreading, but no evidence that the more common older budget Intel CPUs suffer from the same problem with Arc any more than they would when paired with a Radeon. It would be a more compelling argument that it is the fault of Arc and not Ryzen if they could show that Arc is the culprit and not Ryzen, don't you think? But they can only show the bad performance when the two are paired in scenarios specifically taxing on the Ryzen.

Do I think Intel GPUs fare worse with cheap CPUs than Radeon? Yes, but the difference is likely nowhere near what HUB or others mining AMClicks insinuate, and it affects users less than implied because budget GPUs are generally run in heavily GPU-limited scenarios. I also think it is fitting that HUB ends up making the case that old Ryzens game poorly in their attempt to portray Arc in a bad light. But they have gotten many more clicks than they would have by showing the more boring big picture.
GPU-limited testing will obviously never show a CPU limit. Testing solely this way is disingenuous at best, because games change based on where you are in them.

Starfield is a great example: most places are never going to be CPU limited, but the ones that are can absolutely tank performance without enough CPU. If you're using an Arc GPU, this sort of situation will be significantly worse. If you look at the Hardware Canucks video, they showed 9th-gen Intel experiencing the exact same problems.

The truth is very simple: there's a CPU overhead problem with Arc GPUs. It won't always be a problem, but when it is, it can absolutely tank performance. Trying to make it out to be overblown or an AMD problem is a really bad look; it dismisses valid criticism of the GPUs.
 
Testing solely this way is disingenuous

If you look at the Hardware Canucks video, they showed 9th-gen Intel experiencing the exact same problems.
They have the same bad testing as HUB: they compare an X3D CPU to a non-X3D CPU, and it doesn't matter whether the non-X3D one is AMD or Intel.
We all know that Intel GPUs need ReBAR to work well, and ReBAR is a way to move data between the CPU and GPU faster, something you would think a large cache would help immensely.
They don't actually show CPU overhead, because they would have to test this on the exact same CPU by disabling cores or clocking them down; that would show whether it's actually CPU overhead. Removing the cache only shows that cache helps ReBAR.
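For anyone who wants to sanity-check the ReBAR side of this on their own box, here's a rough Linux-only sketch of mine (not from any of the videos discussed here): it reads the GPU's BAR sizes from sysfs, and with Resizable BAR active one prefetchable BAR should be roughly the size of the VRAM instead of the classic 256 MiB window. The PCI address is just an example you'd replace with your own card's.

```python
# Rough sketch: print a GPU's PCI BAR sizes on Linux to see whether
# Resizable BAR is in effect (one BAR ~= VRAM size vs. the old 256 MiB).
# The address below is an example; find yours with `lspci | grep -i vga`.

PCI_ADDR = "0000:03:00.0"  # example PCI address, substitute your own GPU

def bar_sizes(pci_addr: str):
    """Yield (bar_index, size_in_bytes) for each populated BAR (0-5)."""
    path = f"/sys/bus/pci/devices/{pci_addr}/resource"
    with open(path) as f:
        for i, line in enumerate(f):
            if i > 5:  # only the first six entries are BAR0-BAR5
                break
            start, end, _flags = (int(x, 16) for x in line.split())
            if end > start:  # unpopulated BARs read as all zeros
                yield i, end - start + 1

if __name__ == "__main__":
    for index, size in bar_sizes(PCI_ADDR):
        print(f"BAR {index}: {size / 1024**2:.0f} MiB")
```

On a B580 with ReBAR working you'd expect one BAR somewhere around the full VRAM size; seeing only a 256 MiB window is the usual sign it's off.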
 
They have the same bad testing as HUB: they compare an X3D CPU to a non-X3D CPU, and it doesn't matter whether the non-X3D one is AMD or Intel.
We all know that Intel GPUs need ReBAR to work well, and ReBAR is a way to move data between the CPU and GPU faster, something you would think a large cache would help immensely.
They don't actually show CPU overhead, because they would have to test this on the exact same CPU by disabling cores or clocking them down; that would show whether it's actually CPU overhead. Removing the cache only shows that cache helps ReBAR.
Go look at the Tom's Hardware review; Jarred tested with both the 13900K and the 9800X3D and there's functionally no difference. If the cache somehow made the B580 faster, it would show there too.

No matter what, though, the bottom line is that Intel confirmed to HUB that they saw the same results. Intel has also now put out this post clearly indicating there are issues. This is not some imaginary problem; it's very real. It's not a huge problem and would never stop me from buying an Arc card for a lower-end system (so long as it had ReBAR support), but it's something very valuable to be aware of.
 
No matter what, though, the bottom line is that Intel confirmed to HUB that they saw the same results. Intel has also now put out this post clearly indicating there are issues.
Do you mean this post where they just acknowledge their awareness of some people reporting this?! This just means that they are looking into it.

“Thank you for your patience. We are aware of reports of performance sensitivity in some games when paired with older generation processors. We have increased our platform coverage to include more configurations in our validation process, and we are continuing to investigate optimizations.”
 
Do you mean this post where they just acknowledge their awareness of some people reporting this?! This just means that they are looking into it.
As you conveniently ignore me pointing out that HUB made them aware of it over three months ago and got an acknowledgement from Intel.
Intel is aware of these findings and is actively investigating the issue.
https://www.techspot.com/review/2940-intel-arc-b580-rereview/

They also tested it with 6 CPUs:
But yeah it's totally not an issue and definitely something Intel is just now looking into.
 
As you conveniently ignore me pointing out that HUB made them aware of it over three months ago and got an acknowledgement from Intel.

https://www.techspot.com/review/2940-intel-arc-b580-rereview/
They have no direct quote though; this is the only thing they say, and it could just be them misinterpreting the actual quote Tom's has from Intel.
"Aware of these findings" = "We are aware of reports of performance sensitivity in some games when paired with older generation processors."

"Intel is aware of these findings and is actively investigating the issue."
They also tested it with 6 CPUs:

But yeah it's totally not an issue and definitely something Intel is just now looking into.
It could absolutely be an issue with Intel, but if you introduce unknowns into the equation then you can't be sure the outcome is actually what you think it is.
Yeah, they have 6 different CPUs; that was my point from the beginning: 6 different CPUs that all differ in multiple ways (mainly cache).

To prove that it is down to CPU overhead, they have to do the test with one single CPU and disable cores and/or clock them down.
That's the only way to prove that nothing factors into this issue except the CPU.
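That single-CPU approach is easy to script. As a hedged illustration of the idea (my own sketch, not anything the reviewers published), here's how you could launch the same benchmark pinned to different numbers of logical cores on Linux, so core count varies while cache, memory and clocks stay identical; the benchmark command is a placeholder.

```python
# Minimal sketch: run one benchmark repeatedly on the SAME CPU while only
# the number of usable logical cores changes, so cache/memory/clocks are
# held constant. "./my_benchmark" is a hypothetical placeholder command.
import os
import subprocess

BENCH_CMD = ["./my_benchmark", "--preset", "1080p"]  # placeholder command

def run_with_cores(cmd, core_count):
    """Launch cmd restricted to the first `core_count` logical CPUs (Linux)."""
    def limit_affinity():
        # runs in the child process just before exec
        os.sched_setaffinity(0, set(range(core_count)))
    return subprocess.run(cmd, preexec_fn=limit_affinity, check=True)

if __name__ == "__main__":
    for cores in (4, 6, 8):
        print(f"--- run limited to {cores} logical cores ---")
        run_with_cores(BENCH_CMD, cores)
```

Clocking the chip down would be the other half of the test, but even just varying the usable core count on one chip separates driver CPU overhead from cache-size differences between models.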
 
I've had a B580 for about 3 months now, but I only play WoW, Elden Ring and Assassin's Creed Shadows. I've had zero issues with drivers and great performance at 1440p. I know it is easy to claim there are problems if you don't own one, but for me the driver issues on the newest hardware are non-existent.
 
GPU-limited testing will obviously never show a CPU limit. Testing solely this way is disingenuous at best, because games change based on where you are in them.

Starfield is a great example: most places are never going to be CPU limited, but the ones that are can absolutely tank performance without enough CPU. If you're using an Arc GPU, this sort of situation will be significantly worse. If you look at the Hardware Canucks video, they showed 9th-gen Intel experiencing the exact same problems.

The truth is very simple: there's a CPU overhead problem with Arc GPUs. It won't always be a problem, but when it is, it can absolutely tank performance. Trying to make it out to be overblown or an AMD problem is a really bad look; it dismisses valid criticism of the GPUs.
That Hardware Canucks video showed the average difference between a 6c/6t 9600K with 8 lanes of PCIe Gen 3 and a 9800X3D with PCIe Gen 4 was 20% with a B580, 11% with a 4060, 8% with a 7600 and 12% with a 6700 XT, at 1080p, if you take the sum of the 9600K's frames over the sum of the 9800X3D's frames across the 14 games they looked at.
HWU showed that if you used a Ryzen 2600 instead, the B580 lost 39% of its frames on average and the 4060 lost 29% vs a 9800X3D, per the table included in this article.
Proving once again that using an old Ryzen has a more negative effect on your entry-level GPU performance than anything else.
I'm being a bit sarcastic with that last one, because HWU is just using the 5 bad example games that exist; Hardware Canucks also used them, but also showed that basically all other games don't have the B580's relative performance loss with weaker CPUs.
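To make the "sum of frames" comparison above concrete, here's a toy version of the arithmetic with made-up FPS numbers; they are not the Hardware Canucks or HUB results, just an illustration of how those percentage drops are computed.

```python
# Toy illustration of the summed-frames comparison described above.
# These FPS numbers are invented for the example; they are NOT real data.
avg_fps_fast_cpu = [142, 98, 117, 165, 88]   # e.g. 9800X3D, one entry per game
avg_fps_slow_cpu = [118, 71, 103, 131, 62]   # e.g. an older six-core, per game

def relative_drop(slow, fast):
    """Fraction of performance lost, using summed average FPS across games."""
    return 1 - sum(slow) / sum(fast)

drop = relative_drop(avg_fps_slow_cpu, avg_fps_fast_cpu)
print(f"performance drop vs. the fast CPU: {drop:.1%}")  # ~20% with these numbers
```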

HWU is cherry-picking to the point of being completely unrealistic. While the points about those 5 games are valid, without them they have nothing, and taking 5 games' performance and pretending it represents all games is just being dishonest for clicks. That, and HWU is putting on a show of just how bad old Ryzens are.
 
This is great news; hopefully it works on first-generation Arc as well. More competition in the lower-end space is good.

I have issues with the Intel Arc B580 + Ryzen 5 2600 (FPS) test: a B450 board is going to cut the bandwidth down to PCIe Gen 3.0 x8. I assume it's a B450, as the 2600 isn't supported on B550.

Even Intel says a B550 board is needed; I assume that's about the cut bandwidth more than ReBAR.

https://www.intel.com/content/www/u...hics/intel-arc-dedicated-graphics-family.html
 
HWU is cherry-picking to the point of being completely unrealistic. While the points about those 5 games are valid, without them they have nothing, and taking 5 games' performance and pretending it represents all games is just being dishonest for clicks. That, and HWU is putting on a show of just how bad old Ryzens are.
Perhaps you should read the article I linked above, if you're not going to actually watch their videos, before making false proclamations like that. The video you're talking about was their first one on the topic, and at the end they mentioned needing to do more research. They did at least 2-3 videos on the topic, and the only thing I disagree with them on is their conclusion regarding how much of an advantage Intel needs to be a viable choice. The points they bring up about the performance are accurate.
 
This is the inevitable result of constant model releases muddying the waters instead of stable generational feature-set jumps. The older driver stacks have already dealt with every little niggle as it popped up and completely forgotten about it by now, and Intel is blundering into a mess it helped create.

I hope they manage to sort this out, but they might at some point be forced to just say "it won't work right on anything older than X". It looks, if anything, like a marathon, not a sprint... and that's OK if they can sustain it.
 
Perhaps you should read the article I linked above, if you're not going to actually watch their videos, before making false proclamations like that. The video you're talking about was their first one on the topic, and at the end they mentioned needing to do more research. They did at least 2-3 videos on the topic, and the only thing I disagree with them on is their conclusion regarding how much of an advantage Intel needs to be a viable choice. The points they bring up about the performance are accurate.
The linked HWU video in this article features which 5 games? Also, per the link provided by beyondlogic, the 9600K and Ryzen 2600 are both below Intel's minimum recommended requirements for the B580.

You won't find them in the prebuilt PCs at Amazon, Best Buy and Walmart, which seem to be where the B580s are mostly going.
 
The linked HWU video in this article features which 5 games? Also, per the link provided by beyondlogic, the 9600K and Ryzen 2600 are both below Intel's minimum recommended requirements for the B580.

You won't find them in the prebuilt PCs at Amazon, Best Buy and Walmart, which seem to be where the B580s are mostly going.

It should be noted that ReBAR on the 2600 wasn't officially supported by AMD; they focused on the 3000 and 5000 series. I can confirm that ReBAR on a 2600 is rough, as it's mainly supported through the board manufacturer.
 
It should be noted that ReBAR on the 2600 wasn't officially supported by AMD; they focused on the 3000 and 5000 series. I can confirm that ReBAR on a 2600 is rough, as it's mainly supported through the board manufacturer.
I'm not trying to rip on old AMD CPUs so much as trying to show that HWU's arguments against Arc are so cherry-picked as to be nonsense that can be used to prove other points. Sure, the 2600 did worse in games than the i3-10100 (quad-core Skylake arch), but I'm sure it averages far better in games with a B580 than in the worst 5 games in the worst scenarios HWU can reasonably pick.

At the other end of the spectrum, on the GPU-limited side, GN found little effective difference when switching CPUs with GPUs on the level of the B580.

I think most people are more likely to run into GPU limitations when using an entry-level GPU, and if they have a CPU that is barely compatible with W11 they are probably aware that it will hold back their system relative to the fastest CPUs out there in some situations. HWU is trying to pretend that these users will see HWU's results averaged across all the games they play, the way they play them, and that is a lie.

The elephant in the room IMO is that DLSS, XeSS and FSR4 are so much better than FSR 1/2/3 in image quality that it is like comparing FXAA to SSAA, while DLSS, XeSS and FSR4 can be compared to native, and the industry standard in tech reporting is to just brush this disparity under the rug. Nvidia supports hundreds of games: https://www.nvidia.com/en-us/geforce/news/nvidia-rtx-games-engines-apps/ , XeSS 171: https://game.intel.com/us/xess-enabled-games/ and FSR4 33:
https://www.pcguide.com/news/fsr-4-games-list-a-complete-list-of-all-the-latest-supported-titles/ - modern, popular games where every dGPU that can't use these has an effective ~25% performance disadvantage. In these games a B580 performs like a 7700 XT or 6800, for example.
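For what it's worth, that ~25% figure is just back-of-the-envelope arithmetic: assuming an upscaler's quality mode gives roughly a third more FPS at comparable image quality (an assumed uplift, not a measurement from any particular game), a card that can't use it comes out about a quarter slower in effective terms.

```python
# Back-of-the-envelope for the "effective ~25% disadvantage" claim above.
# The 1.33x uplift is an assumed, ballpark quality-mode gain, not a number
# measured from any particular game or review.
assumed_upscaler_uplift = 1.33   # card A running DLSS/XeSS/FSR4 quality mode
native_fps_card_a = 60
native_fps_card_b = 60           # same native speed, but no modern upscaler

effective_fps_card_a = native_fps_card_a * assumed_upscaler_uplift
deficit = 1 - native_fps_card_b / effective_fps_card_a
print(f"card B's effective deficit: {deficit:.0%}")  # ~25% with this assumption
```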

Almost anybody with an Nvidia or Intel GPU who is at the point of choosing graphics settings for the best image/fps is going to use these, and the whole tech reporting industry has been ignoring this in benchmark rankings, pretending these settings don't exist. Smaller outfits that are basically one guy are not going to tip over the apple cart, and a lot of the big guys want no part in controversy. But some big outfits, like HUB and GN, do sell controversy, and the fact that they have been effectively lying to us by omission for years is starting to look suspect. I'm curious how they will handle this now that AMD is starting to get a fraction of the benefits that Nvidia and Intel already have here. Will they drop the RX 6000 and 7000 series cards to their appropriate performance levels? I doubt it; they will probably just bring it up as a selling point for why the RX 9000 series is better than everything else.
 
The problem with this reasoning is that I cannot remember any previous time where a GPU had hard CPU-specific minimum requirements for anything.

This kind of thing is relatively new for the end user so it tends to cast a large shadow every time it pops up.

On the other hand, going by any statistic, most people only use features like ReBAR or upscaling if they are automatically enabled, and most people don't need them because they run 1080p. GN and HWU have fallen into a bubble of high-end hardware, and their struggle tends to be understanding the common user: because they themselves are so used to "the best", they get frustrated that it's not "the average".

However... what they do kind of understand here is the problems that low-end users have started to face. The low-end user is much more likely to run into problems these days, because they are more likely to have older CPUs coupled with what we are being gaslit into believing are low-end GPUs... rather than hobbled mid-range GPUs, because the high end is rebranded luxury-enthusiast and mostly actually experimental.

What this means is that on a lot of PCs Intel GPUs will have lower performance and the end user won't understand why unless it's specifically pointed out to them. Most people don't upgrade their CPUs unless they for some reason need to upgrade their motherboards... I myself have never done a CPU or even a RAM upgrade; it's always been a complete platform swap.

And herein lies the catch people have been falling into... it's not just PCIe speed that effectively matters for GPUs anymore; it's gotten much more complicated.
 
What this means is that on a lot of PCs Intel GPUs will have lower performance and the end user won't ~~understand why~~ notice unless it's specifically pointed out to them.
There, fixed it for you.
Unless somebody is watching HUB or GN or whatever, they won't even notice; they will still have a good time with the card.
Someone with a low-end card/CPU isn't really in a position to play every AAA game right away, or ever, anyway; they have cheap equipment for a reason.
 
The problem with this reasoning is that I cannot remember any previous time where a GPU had hard CPU-specific minimum requirements for anything.

This kind of thing is relatively new for the end user so it tends to cast a large shadow every time it pops up.

On the other hand, going by any statistic, most people only use features like ReBAR or upscaling if they are automatically enabled, and most people don't need them because they run 1080p. GN and HWU have fallen into a bubble of high-end hardware, and their struggle tends to be understanding the common user: because they themselves are so used to "the best", they get frustrated that it's not "the average".

However... what they do kind of understand here is the problems that low-end users have started to face. The low-end user is much more likely to run into problems these days, because they are more likely to have older CPUs coupled with what we are being gaslit into believing are low-end GPUs... rather than hobbled mid-range GPUs, because the high end is rebranded luxury-enthusiast and mostly actually experimental.

What this means is that on a lot of PCs Intel GPUs will have lower performance and the end user won't understand why unless it's specifically pointed out to them. Most people don't upgrade their CPUs unless they for some reason need to upgrade their motherboards... I myself have never done a CPU or even a RAM upgrade; it's always been a complete platform swap.

And herein lies the catch people have been falling into... it's not just PCIe speed that effectively matters for GPUs anymore; it's gotten much more complicated.

Intel GPUs aren't really like the standard GPUs we're used to from AMD and Nvidia; they're a bit weird in that regard. They need SAM/ReBAR to work properly.