News Intel Confirms Poor Arc GPU DX11 Performance Is a Work in Progress

rluker5

Distinguished
Jun 23, 2014
It is good for that information to come out. More specific information would be better, but we will get that with reviews.
AMD and Nvidia already show performance differences across APIs, with AMD still lagging in the older ones on its RDNA2 architecture, though it has been improving recently.
Some people prefer AMD GPUs because they are a better value in the games and settings they play, while others, whose preferred games and settings get better performance value from Nvidia, can't understand why anybody would prefer AMD.
It looks like Intel will be similar to AMD in this respect, only with the driver deficiencies, and the improvements, being relatively larger.
Reviews will give more information and buyers can then better choose based on the performance value in the games they play and expect to play.

Edit: I also think that Intel initially releasing cheaper, smaller GPUs works out well given these deficiencies. It gives people an opportunity to take a smaller chance (in terms of money) to check them out, and it grows the userbase, which helps surface the problems that need to be solved. I hope they release them soon.
 

dehjomz

Distinguished
Dec 31, 2007
Makes you wonder if iGPU performance has been abysmal not because of iGPU hardware, but because of iGPU drivers... If/when Intel optimizes its DX11/DX9 drivers, might UHD graphics benefit as well, and thus age like fine wine?
 

rluker5

Distinguished
Jun 23, 2014
Makes you wonder if iGPU performance has been abysmal not because of iGPU hardware, but because of iGPU drivers... If/when Intel optimizes its DX11/DX9 drivers, might UHD graphics benefit as well, and thus age like fine wine?
I believe the iGPU performance has been bad for two different reasons, one for desktop and one for mobile.
For desktop, the iGPU performs on par with AMD per teraflop; AMD's iGPUs are just much larger. You wouldn't expect a 192-shader dGPU to be as fast as a 512-shader dGPU, but for some reason people expect that of an iGPU.
For mobile, Intel has a Windows power problem where the iGPU is completely ignored and the CPU gets all the power it can use to run the highest clocks possible. If power priority were given to the iGPU instead (since any game running on the iGPU is completely iGPU-limited) and the CPU were throttled to lower clocks, overall performance would be much better.
 

EirikrHinnRauthi

Reputable
Jul 8, 2019
One would wonder if Intel could use a "wrapper" similar to Proton/Wine/DXVK for Windows games on Linux --- but in this case instead of running DX9/10/11 on Vulkan on Linux -- run it on DX12 or on Vulkan on Windows!

Boom.
 

rluker5

Distinguished
Jun 23, 2014
One would wonder if Intel could use a "wrapper" similar to Proton/Wine/DXVK for Windows games on Linux --- but in this case instead of running DX9/10/11 on Vulkan on Linux -- run it on DX12 or on Vulkan on Windows!

Boom.

Like DXVK with AMD cards?
Could you imagine a toggle switch in their driver GUI that automates this?

piff.
 
One would wonder if Intel could use a "wrapper" similar to Proton/Wine/DXVK for Windows games on Linux --- but in this case instead of running DX9/10/11 on Vulkan on Linux -- run it on DX12 or on Vulkan on Windows!

Boom.
That would NOT fix any performance issues the hardware has; it would run just as badly, or even worse, since there would now be another software layer.
It's a matter of certain instructions not performing well, and it wouldn't make a difference whether you run those instructions natively or emulated; the same instructions would still have to run.
 
Basically, Intel's lack of experience in the discrete GPU driver space will prevent their GPUs from being competitive with older APIs for quite some time.
They could easily still be "competitive" in games utilizing those older APIs, even if performance won't be where it could be. According to Intel, they plan to price the cards based on how they perform in DX9/11 titles, so one could arguably look at their much better DX12/Vulkan performance as a bonus relative to the competition. It's possible that they could even outperform the competition at similar price points in most older titles, making up for the unoptimized drivers by providing more hardware with lower or nonexistent profit margins to help them make a good first impression.

It's actually kind of similar to what we saw with AMD's Polaris cards, or with the early generations of Ryzen CPUs. AMD's offerings, while decent, weren't exactly leading in terms of high-end performance at the time of their launch, but they priced the hardware accordingly and gave more hardware for the money to make up for it, allowing their products to be very competitive despite their limitations.

I imagine there will probably be certain titles where Intel's cards perform worse than the similarly-priced competition, but as long as performance is still reasonable in those titles, while being better in most others, that shouldn't hold them back too much. So performance of the cards may not be much of a concern. My main concerns would be over how well the various side-features and control panel settings work, and whether there are any compatibility issues with anything.
 

rluker5

Distinguished
Jun 23, 2014
That would NOT fix any performance issues the hardware has; it would run just as badly, or even worse, since there would now be another software layer.
It's a matter of certain instructions not performing well, and it wouldn't make a difference whether you run those instructions natively or emulated; the same instructions would still have to run.
It works for AMD. That's probably where he got the DXVK wrapper idea from. It improves performance in APIs where driver support is lacking. It sounds like a bit of a hassle, like ReShade or TexMod, but more worth it IMO. That's why I thought it would be nice if Intel used some of their driver staff hours to set up an automated button to do it for us lazies.
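
For illustration, a minimal sketch of what could sit behind such a button, assuming the usual per-game DXVK setup of copying its translation DLLs next to the game's executable. The folder paths, the example game, and the helper name here are hypothetical, and the exact DLL set should be checked against the README of the DXVK release you download.

Python:
# Rough sketch of the "automated button" idea: drop DXVK's translation DLLs
# next to a game's executable so its DX9/10/11 calls are translated to Vulkan
# instead of going through the native D3D driver.
import shutil
from pathlib import Path

DXVK_RELEASE = Path(r"C:\tools\dxvk")  # extracted DXVK release (hypothetical path)
DXVK_DLLS = ["d3d9.dll", "d3d10core.dll", "d3d11.dll", "dxgi.dll"]

def install_dxvk(game_exe: str, win64: bool = True) -> None:
    """Copy the DXVK DLLs into the folder holding the game executable,
    keeping a backup of any DLLs already present so the change can be undone."""
    game_dir = Path(game_exe).resolve().parent
    src_dir = DXVK_RELEASE / ("x64" if win64 else "x32")
    for dll in DXVK_DLLS:
        target = game_dir / dll
        if target.exists():
            shutil.copy2(target, game_dir / (dll + ".bak"))  # back up the original
        shutil.copy2(src_dir / dll, target)
    print(f"DXVK DLLs installed for {game_dir.name}")

# Example call with a made-up game path:
# install_dxvk(r"C:\Games\SomeDX11Game\game.exe")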
 

JayNor

Honorable
May 31, 2019
Sounds like Intel is headed in the other direction; reducing the frequency of updates for the older GPUs would be consistent with reducing the effort put into DX11 and earlier APIs.

The more interesting stuff coming is in the hardware architecture, with the tGPU on Meteor Lake. There will be a Hot Chips presentation on it in a few weeks. I'm interested to see how wide they went on the CPU-to-GPU connection and, assuming it uses their UCIe design, how much that reduces the differences between the discrete and tile GPUs.
 

Deleted member 14196

Guest
Makes you wonder if iGPU performance has been abysmal not because of iGPU hardware, but because of iGPU drivers... If/when Intel optimizes its DX11/DX9 drivers, might UHD graphics benefit as well, and thus age like fine wine?
Absolutely not, their iGPU stinks royally. Crap hardware. No software is going to save it.