Intel's Iris Xe DG1 GPU Is Seemingly Slower Than The Radeon RX 550

WTF is Intel doing, releasing a new GPU/video card that loses to a 4-year-old card on 14 nm while the DG1 is on the more advanced 10 nm process?

Makes you wonder whether poaching Raja Koduri was really worth it.
 
Apart from their integrated Xe graphics solution, the dedicated graphics side of things is not looking good so far. With Raja beating around the bush, showing the chip but not its performance, it reminds me of the same pattern from when AMD revealed Vega.

I suspect Raja and/or Intel may have underestimated the performance of the current generation of AMD and Nvidia cards. So I don't think even the top-end Xe card for the consumer market will be competitive on performance. Just my guess.
 
WTF is Intel doing, releasing a new GPU/video card that loses to a 4-year-old card on 14 nm while the DG1 is on the more advanced 10 nm process?
The DG1 was primarily released as a development tool and OEM special. The board does not even have its own BIOS, so you need a motherboard with a BIOS build that has baked-in DG1 support if you want to use it at boot.

DG1 is not intended for sale to consumers; it is basically a high-functioning prototype.
 
This level of performance shouldn't be at all surprising. Intel's new UHD 750 graphics found in their Rocket Lake CPUs might be an improvement over UHD 630, but they still apparently only offer less than half the performance of the Vega 11 integrated graphics found in AMD's quad-core Ryzen APUs from the last few years. UHD 750 utilizes 32 EUs, while this dedicated card only increases that to 80, without increasing clock rates much and only using LPDDR4 for VRAM. So performance shouldn't be much better than Vega 11, or roughly in the ballpark of an entry-level RX 550 or GT 1030 from several years back. It sounds like the consumer-facing cards might go up to 512 EUs though, and will likely utilize GDDR6, which should get them at least into the mid-range, if not higher.
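As a rough sanity check on that EU math, here's a naive linear-scaling estimate in Python. It assumes performance scales proportionally with EU count at similar clocks, which real workloads rarely achieve, so treat it as an upper bound rather than a prediction:

```python
# Naive EU-count scaling estimate (illustrative only; real performance
# rarely scales linearly with EU count, and memory bandwidth matters too).
uhd750_eus = 32  # Rocket Lake integrated graphics
dg1_eus = 80     # Iris Xe DG1 dedicated card

print(f"Naive upper bound: DG1 ~= {dg1_eus / uhd750_eus:.1f}x UHD 750")  # ~2.5x
```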
 
If the cards have the ability to share the workload with the IGP on the processor, that should give them a boost too.

Very few high-end GPUs are sold, so Intel may be targeting only the fattest part of the market, and it would probably be wise to do so.

WTF is Intel doing, releasing a new GPU/video card that loses to a 4-year-old card on 14 nm while the DG1 is on the more advanced 10 nm process?

In the current market, Nvidia is re-releasing two-year-old cards, and they will probably sell well.
 
The DG1 was primarily released as a development tool and OEM special. The board does not even have its own BIOS, so you need a motherboard with a BIOS build that has baked-in DG1 support if you want to use it at boot.

DG1 is not intended for sale to consumers; it is basically a high-functioning prototype.

This ^^^^^^ To be honest, I'm very surprised to see DG1 in a consumer build at all. DG2 is where the magic is supposedly going to be for Intel.
 
So many words saying so little... The basic point: a 50 W old-gen card vs. the new 30 W Intel GPU.
The Intel DG1 puts up a Vulkan score of 17,289 points; the Radeon RX 550 scored 17,619 points.
The Radeon card is slightly faster at 67% more power consumption...
This is Intel's showing with first-year drivers. It's all about efficiency here; if that's good, it should scale well.
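A quick back-of-the-envelope check of those numbers (Vulkan scores as quoted above; treating the rated board power as actual draw, which is a simplification):

```python
# Performance-per-watt from the Vulkan scores quoted above.
# Rated board power stands in for real power draw (a simplification).
dg1_score, dg1_watts = 17_289, 30
rx550_score, rx550_watts = 17_619, 50

print(f"DG1:    {dg1_score / dg1_watts:.0f} points/W")      # ~576
print(f"RX 550: {rx550_score / rx550_watts:.0f} points/W")  # ~352
print(f"RX 550 draws {rx550_watts / dg1_watts - 1:.0%} more power")  # 67%
```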

I would buy one if they didn't lock the BIOS down. Make it work on my Z490 board and I would happily see whether it could replace my GT 1030.
 
If the cards have the ability to share workload with the IGP on the processor that should give them a boost too.
For an extreme low-end card like this 80 EU DG1, that could potentially help performance a bit. However, it's unlikely to provide tangible benefits to a mid-range gaming card, or even fairly low-end gaming cards. Again, the performance provided by UHD 750 is less than half that of AMD's few-year-old integrated graphics found in their APUs, or a GT 1030 or RX 550. And even that graphics hardware was super-low-end when it launched years ago, only offering around a quarter of the performance of something like a GTX 1060 6GB or RX 580 (or the newer and slightly faster 1650 SUPER and 5500 XT for that matter).

So even late-2019 lower-end gaming cards positioned at a $160-$170 MSRP offer performance somewhere close to 10 times that of Rocket Lake's integrated graphics. Meaning, best-case scenario, the integrated graphics might theoretically boost the performance of a card at that level by up to 10% or so, and even less for mid-range to high-end cards. However, even that's not likely to happen, since the integrated graphics won't have direct access to the data and framebuffer in the video card's VRAM. So using them together would most likely not help performance to any perceptible degree in games, and is probably more likely to just make performance less stable, if anything.
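To put a number on that ceiling, here's a toy calculation; it assumes the workload could be split perfectly in proportion to raw throughput, which no real multi-GPU scheme achieves:

```python
# Toy upper bound for pairing an iGPU with a dedicated card, assuming a
# perfect proportional workload split (sync and copy overhead ignored).
dgpu_perf = 10.0  # dedicated card, normalized
igpu_perf = 1.0   # iGPU at roughly 1/10 of the card, per the estimate above

print(f"Best-case uplift: {(dgpu_perf + igpu_perf) / dgpu_perf - 1:.0%}")  # 10%
```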

AMD actually tried something like that a number of years back in the pre-Ryzen days, but it only benefited performance at all when paired with a limited number of very low-end graphics cards, and often made performance worse.
 
I would buy one if they didn't lock the BIOS down.
They didn't lock the BIOS down; DG1 cards have no BIOS whatsoever. That's why they require motherboards with baked-in DG1 support.

Again, the performance provided by UHD 750 is less than half that of AMD's few-year-old integrated graphics found in their APUs, or a GT 1030 or RX 550.
To be fair, the vast majority of PCs and laptops are used for office and other relatively trivial work that only requires some form of video output powerful enough to display the Windows desktop or equivalent. It makes sense for Intel to include only the lowest common denominator of Xe in most of its SKUs. I bet many AMD lovers wish the IOD had an IGP even half as powerful as Rocket Lake's, just to be able to do something useful with the motherboard's display outputs.
 
To be fair, the vast majority of PCs and laptops are used for office and other relatively trivial work that only requires some form of video output powerful enough to display the Windows desktop or equivalent.
I was responding specifically to the suggestion that the integrated graphics could potentially be combined with a dedicated card to boost performance. Rocket Lake's level of graphics performance might be fine for typical desktop use, and even usable for running some older and lighter games at reduced settings, but compared to any current-generation gaming card, the capabilities are so limited that combining them is not likely to improve the experience.

I agree that it would be nice if AMD provided the option for integrated graphics on more of their processors, though leaks suggest that Zen 3 desktop APUs featuring up to 8 cores should be launching soon. Last year's Zen 2-based 4000-series APUs were OEM-only, but at the time of their launch, an AMD representative suggested that their successors would be coming to retail. According to AMD's director of technical marketing...

...while I cannot go into the details of our roadmap, there is a next-gen APU coming for DIY customers, and it will fit into 400- and 500-series boards. So if those enthusiasts are reading the news tomorrow and thinking ‘where's my upgrade?!’ I promise it’s coming.

Whether those will be available in sufficient quantities or at a competitive price is anyone's guess though.
 
I was responding specifically to the suggestion that the integrated graphics could potentially be combined with a dedicated card to boost performance.
Combining a really weak IGP with a weak GPU has never worked before and isn't going to start working now. Every time it has been possible in the past, you were just better off using the GPU alone.

As I wrote earlier though, the bulk of Intel's desktop market is office PCs, where it doesn't make sense to waste silicon on more than the bare minimum IGP necessary to draw windows reasonably fast, and you can just stick a GPU in when that isn't enough. Mobile CPUs get IGPs 2-3x as fast as desktop ones, since add-on graphics usually aren't an option.

Regarding the Ryzen 5000 APUs, I'm expecting them to be ludicrously priced, assuming they become available at retail, since a 5600G will be a much larger slab of 7 nm silicon than a 5600X. It will be an SKU that only makes some sort of sense thanks to the currently grossly inflated GPU prices.
 
I was responding specifically to the suggestion that the integrated graphics could potentially be combined with a dedicated card to boost performance. Rocket Lake's level of graphics performance might be fine for typical desktop use, and even usable for running some older and lighter games at reduced settings, but compared to any current-generation gaming card, the capabilities are so limited that combining them is not likely to improve the experience.
If the Xe still has all of its ASICs, then it has a pretty strong AI "core" that devs could use for whatever purpose; it could be used to get AI upscaling without losing FPS, for example.
I'm not saying this will happen, because of course it won't; I'm just saying there is more to it than the traditional texture filling we all know about.
 
If the Xe still has all of its ASICs, then it has a pretty strong AI "core" that devs could use for whatever purpose; it could be used to get AI upscaling without losing FPS, for example.
I'm not saying this will happen, because of course it won't; I'm just saying there is more to it than the traditional texture filling we all know about.
Sure, maybe you could offload something like physics or 3D sound processing to the integrated graphics while the main card does the rendering. But again, with its total performance amounting to maybe 5% of a dedicated upper-mid-range card's, it's probably not going to make all that noticeable of a difference.

I don't think it would work particularly well for something like real-time upscaling along the lines of DLSS though. It would likely have to wait until after the dedicated card finishes rendering a frame to begin the upscaling process, so it would add at least a frame of latency, whereas the dedicated card could get the processing done within the same frame with a minimal performance hit, since it would be operating at many times the speed. Also, you might end up transferring a lot of data back and forth between the card and CPU, and the additional system RAM usage could potentially affect performance. So it would probably be best left to something not directly tied to the rendering process. And even then, it might require developer support, and different tiers of integrated graphics (like UHD 730 vs. 750) would perform differently.
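For a rough sense of what that data shuffling costs, here's a sketch with assumed numbers (a 1080p RGBA8 frame and ~16 GB/s of effective PCIe 3.0 x16 bandwidth; real overhead from synchronization and driver work would come on top of the raw transfer time):

```python
# Rough cost of shipping one frame to the iGPU and back over PCIe.
# Assumed numbers: 1080p RGBA8 frame, ~16 GB/s effective PCIe 3.0 x16.
frame_bytes = 1920 * 1080 * 4   # ~8.3 MB per frame
pcie_bandwidth = 16e9           # bytes per second, effective
roundtrip_ms = 2 * frame_bytes / pcie_bandwidth * 1e3

frame_budget_ms = 1000 / 60     # time budget per frame at 60 fps
print(f"Transfer alone: ~{roundtrip_ms:.1f} ms round trip per frame")            # ~1.0 ms
print(f"That's ~{roundtrip_ms / frame_budget_ms:.0%} of a 60 fps frame budget")  # ~6%
```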
 
I don't think it would work particularly well for something like real-time upscaling along the lines of DLSS though.
That is probably the worst possible use, since DLSS probably taps the rendering pipeline for extra info and would need to stream that along with the raw image. There's not much point in delegating the post-processing to a secondary GPU when the main GPU does DLSS on tensor cores that aren't really used for anything else anyway.

Using the IGP for GPGPU makes the most sense when the IGP is already powerful enough to deliver results faster than the data is coming in, in which case sending the data over to the dedicated GPU would only increase latency by an extra round trip over PCIe, waste power, and eat a chunk of GPU compute and VRAM you may want to save for something else.

Audio DSP would be one of the most logical things for game developers to offload to the IGP where it's present, enabled, and not otherwise in use for 3D.