News: Intel Unveils Xe DG1 Mobile Graphics in Discrete Graphics Card for Developers

Warframe is a 7-year-old game. If Xe DG1 struggles with Warframe, then I don't want to see how it handles something modern like Monster Hunter World or Forza Horizon 4. I'd like to see more competition in the GPU space, but I'm not seeing any reason to take Intel seriously yet. Too much marketing; not enough performance.

Get it together, Intel!
 
"An executive told us to put integrated graphics into a dedicated GPU, and nobody knows why"
-A lead engineer at Intel, probably

Really, though, what do you think the chances are that the DG1 die is just a faulty CPU?
 
Warframe is a 7-year-old game. If Xe DG1 struggles with Warframe, then I don't want to see how it handles something modern like Monster Hunter World or Forza Horizon 4. I'd like to see more competition in the GPU space, but I'm not seeing any reason to take Intel seriously yet. Too much marketing; not enough performance.

Get it together, Intel!

Considering it's a mobile GPU being shown off, and given the information TH put in here, it's too early to judge it for the desktop market. But if it's as powerful as they claim it to be, we could see a decent iGPU from Intel to compete with AMD.

"An executive told us to put integrated graphics into a dedicated GPU, and nobody knows why"
-A lead engineer at Intel, probably

Really, though, what do you think the chances are that the DG1 die is just a faulty CPU?

I doubt it.

Considering that Intel is planning a discrete GPU for the desktop market and an HPC variant, they are probably already developing standalone GPU dies for testing.
 
We've already seen dual-GPU systems die out over time, so transparently implementing a multi-chip architecture is going to be key.

I disagree with this comparison. I think there are some pretty stark differences between what Intel is doing today and what nV and ATI were doing back in the day. From the ground up, Intel is designing this thing to be a multi-chiplet device. I half expect the same driver that runs a meager 24-EU device to work with a 96-EU or even a 384-EU device. The fact that the first place they are deploying this design is a supercomputer lends further evidence that the quantity of chiplets has little effect on the firmware. In a supercomputer, they could have thousands of these devices networked together, operating as a single massive processing unit.

There are certainly going to be limitations, but I doubt they will be as painful as they were when strapping two full chips together.
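For what it's worth, that's basically how existing compute APIs already behave: the runtime reports however many compute units the driver exposes, and sane host code sizes its dispatch from that query rather than hard-coding a SKU. Here's a minimal sketch with OpenCL via pyopencl (assuming a runtime is installed; the per-CU heuristic is purely illustrative, not anything Intel has published):

```python
# Minimal sketch, assuming pyopencl and some OpenCL runtime are installed.
# The point: host code queries how many compute units the driver exposes and
# sizes its work from that, so the same code path covers a 24-EU part or a
# 384-EU part without a rebuild. The 64-items-per-CU heuristic is made up.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        cus = device.max_compute_units      # EU/CU count reported by the driver
        print(f"{device.name}: {cus} compute units")
        global_size = cus * 64              # size work from the query, not the SKU
        print(f"  would dispatch ~{global_size} work-items")
```

Obviously Intel's actual driver stack isn't public, so treat this as an analogy for how EU count can be abstracted away, not a description of how Xe schedules work.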
 
Warframe is a 7-year-old game. If Xe DG1 struggles with Warframe, then I don't want to see how it handles something modern like Monster Hunter World or Forza Horizon 4. I'd like to see more competition in the GPU space, but I'm not seeing any reason to take Intel seriously yet. Too much marketing; not enough performance.

Get it together, Intel!

As was stated in the article, this is early silicon. This means the drivers have an enormous amount of optimizations and bug fixes to be done, and the silicon likely needs to be tweaked once it's ready for the market. The developers of the games/game engines also need to optimize for this hardware, so again, more software work to be done. Also, being that this is a sub 75W card, don't expect screaming performance.
 
I disagree with this comparison. I think there are some pretty stark differences between what Intel is doing today and what nV and ATI were doing back in the day. From the ground up, Intel is designing this thing to be a multi-chiplet device. I half expect the same driver that runs a meager 24-EU device to work with a 96-EU or even a 384-EU device. The fact that the first place they are deploying this design is a supercomputer lends further evidence that the quantity of chiplets has little effect on the firmware. In a supercomputer, they could have thousands of these devices networked together, operating as a single massive processing unit.

There are certainly going to be limitations, but I doubt they will be as painful as they were when strapping two full chips together.

Honest truth is we should be screaming for Intel to have a competitive GPU. We have had a duopoly for so long that AMD and Nvidia have been guilty of working together to price-fix.

Having a third option to put pressure on the other two is a great thing. AMD alone has been unable to really get prices down. Maybe Intel will be able to.
 
"An executive told us to put integrated graphics into a dedicated GPU, and nobody knows why"
-A lead engineer at Intel, probably

Really, though, what do you think the chances are that the DG1 die is just a faulty CPU?
That's actually not a bad idea. They could be making graphics cards for dirt cheap for builds that need an iGPU but don't need anything for gaming (cough cough, Ryzen). Dead CPUs would be scrap anyway, so no cost there; strap on a $10 heat sink and a $3 PCB and boop, you've got a GPU capable of DX12 and 3/4 monitors for the price of a few coffees.
 
"An executive told us to put integrated graphics into a dedicated GPU, and nobody knows why"
-A lead engineer at Intel, probably
AMD's RX 550 has only 8 CUs, whereas their APUs have up to 11. So, it's not unprecedented to have a dGPU that's the same size or smaller than an iGPU.

Really, though, what do you think the chances are that the DG1 die is just a faulty CPU?
No, that would be silly. Not least because the dGPU will want GDDR5 or better, which the CPU doesn't support. The RX 550 I mentioned supports GDDR5, giving it 112 GB/sec of memory bandwidth, which is about 3x as fast as the DDR4 that 1st-gen Ryzen APUs of the day could support. On bandwidth-bound workloads, that would give it an edge over a comparable iGPU.
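For anyone who wants to check the ~3x figure, here's the back-of-the-envelope math (the 7 Gbps/128-bit GDDR5 and dual-channel DDR4-2400 specs are my assumptions, not from the article):

```python
# Back-of-the-envelope check of the ~3x claim. Assumed specs (not from the
# article): 7 Gbps GDDR5 on a 128-bit bus for the RX 550, dual-channel
# DDR4-2400 for a 1st-gen Ryzen APU.
gddr5_bw = 7.0 * 128 / 8        # Gbps per pin * bus width / 8 bits = 112 GB/s
ddr4_bw = 2400 * 8 * 2 / 1000   # MT/s * 8 bytes per 64-bit channel * 2 channels = 38.4 GB/s
print(f"GDDR5: {gddr5_bw:.0f} GB/s, DDR4: {ddr4_bw:.1f} GB/s, ratio ~{gddr5_bw / ddr4_bw:.1f}x")
```

That works out to roughly 2.9x, which lines up with the "about 3x" above.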

Also, for security reasons, some of the iGPU's registers & memory might not be accessible from an external PCIe entity.
 
Dead CPUs would be scrap anyway, so no cost there;
Defects that would kill the entire CPU would probably be in places like the ring bus, memory controller, PCIe interface, or other common areas that would also compromise the iGPU's functionality.

It would be very unlikely to have defects in so many cores, especially if the iGPU is then somehow unaffected (or minimally so).
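To put a rough number on "very unlikely": a toy Poisson yield model with completely made-up defect density and block areas (assumptions, not Intel data) already puts the all-cores-dead-but-iGPU-fine bin somewhere in the one-in-a-billion range:

```python
# Toy Poisson yield model with made-up numbers (defect density and block areas
# are assumptions, not Intel figures): P(block is clean) = exp(-D * A).
from math import exp

D = 0.1            # defects per cm^2 (assumed)
core_area = 0.05   # cm^2 per CPU core (assumed)
igpu_area = 0.4    # cm^2 for the iGPU (assumed)
n_cores = 4

p_core_bad = 1 - exp(-D * core_area)    # a given core catches a defect
p_all_bad = p_core_bad ** n_cores       # every core catches one, independently
p_igpu_ok = exp(-D * igpu_area)         # the much larger iGPU stays clean

print(f"P(one core defective)           = {p_core_bad:.4f}")
print(f"P(all {n_cores} cores defective)        = {p_all_bad:.2e}")
print(f"P(all cores dead, iGPU working) = {p_all_bad * p_igpu_ok:.2e}")
```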
 
As was stated in the article, this is early silicon. This means the drivers have an enormous amount of optimizations and bug fixes to be done, and the silicon likely needs to be tweaked once it's ready for the market. The developers of the games/game engines also need to optimize for this hardware, so again, more software work to be done. Also, being that this is a sub 75W card, don't expect screaming performance.
Warframe is one of the least demanding and most optimized games Intel could have chosen to show off their new graphics solution. I see the notes about early hardware and software as excuses coming from a huge company with tons of resources and experience. I'm unwilling to give Intel the positive marketing hype they're seeking here because I don't think they've delivered anything significant, yet. We should praise Intel and get excited only when their Xe products show a step up from the Iris and HD Graphics solutions commonplace on Intel CPUs today.
 
Warframe is one of the least demanding and most optimized games Intel could have chosen to show off their new graphics solution. I see the notes about early hardware and software as excuses coming from a huge company with tons of resources and experience. I'm unwilling to give Intel the positive marketing hype they're seeking here because I don't think they've delivered anything significant, yet. We should praise Intel and get excited only when their Xe products show a step up from the Iris and HD Graphics solutions commonplace on Intel CPUs today.

Optimized or not, the game is not optimized for this hardware, and early drivers always tend to have worse performance than the final release and beyond. They also showed off Destiny 2, which is a newer game that's much more demanding, running on Xe hardware.

If you actually look at their Gen 10 graphics, they have already provided a boost. This logically should as well, since it's increasing the EU count.
 
Really, though, what do you think the chances are that the DG1 die is just a faulty CPU?

I don't see why the CPU in such a scenario would need to be faulty. These are development cards. It'd be no great loss to let a working CPU sit idle. The CPU could actually be used to coordinate multi-GPU operation. Maybe that's the lesson Intel learned from Larrabee/Phi: the flexibility of x86 cores serves a useful purpose; one just doesn't need that many.
 
Warframe is one of the least demanding and most optimized games Intel could have chosen to show off their new graphics solution. I see the notes about early hardware and software as excuses coming from a huge company with tons of resources and experience. I'm unwilling to give Intel the positive marketing hype they're seeking here because I don't think they've delivered anything significant, yet.
I think the point of the demo is to show that they have working silicon. However, it's almost certainly at the level of engineering samples, and we have no idea what sorts of bugs and workarounds might be present.

So, the message isn't "look how fast it is", but rather "look, we're making progress and have functioning silicon".

We should praise Intel and get excited only when their Xe products show a step up from the Iris and HD Graphics solutions commonplace on Intel CPUs today.
Yes. I would say that actual praise should be withheld until we have actual & independent benchmarks showing that it's competitive within the price & power bracket where they're positioning it.
 
Optimized or not, the game is not optimized for this hardware, and early drivers always tend to have worse performance than the final release and beyond.
It's more than that. This is pre-release silicon. They typically do several respins of a chip before launch, out of necessity. Otherwise, they'd just save the time and money.

So, any missing driver optimizations could pale in comparison with whatever issues the silicon is having.
 
I don't see why the CPU in such a scenario would need to be faulty. These are development cards. It'd be no great loss to let a working CPU sit idle.
True, it's plausible. But the leak about them releasing a real dGPU product with the same number of EUs suggests that's probably what this is.

The CPU could actually be used to coordinate multi-GPU operation. Maybe that's the lesson Intel learned from Larrabee/Phi: the flexibility of x86 cores serves a useful purpose; one just doesn't need that many.
It's not impossible, but putting another host processor in between your host processor and your GPU(s) is not going to reduce latency. And cutting latency is a big deal in real-time graphics, especially for eSports enthusiasts running at crazy high FPS.

I think a bigger lesson they probably took from the Xeon Phi fiasco is that x86 doesn't belong inside GPUs.

Even though I happen to disagree, I do respect your insights. Welcome to the forums.
 
I don't see why the CPU in such a scenario would need to be faulty. These are development cards. It'd be no great loss to let a working CPU sit idle.

Intel has been facing supply issues, so they have been looking for ways to use faulty chips that otherwise would have to be thrown away.
I think some executive may have seen the KF series processors with faulty graphics selling relatively well and directed engineering to find a way to sell faulty CPUs where the graphics are still functional (whether or not that was actually feasible on a technical level).

The end result may have ended up on dev boards because only a limited quantity of chips failed in the exact way where that was possible. Or maybe the cost/performance was so poor that Intel realized the best way to cut their losses on that idea was to give away the boards that they've produced so far instead of paying to market a non-competitive product.

If it's not that, wouldn't it have been a lot cheaper and easier for Intel to just give full working CPUs with integrated graphics out for developers to use while waiting for the real GPUs to reach the market?
Did Intel put its previous generations of graphics onto dedicated boards like this, or is this the first time?
I just don't know what need Intel is trying to fill to make it worth the effort, which makes me think that either it was easy or it was a mistake. Maybe it exists simply because Ice Lake can't socket into a workstation?
 
Intel has been facing supply issues, so they have been looking for ways to use faulty chips that otherwise would have to be thrown away.
I think some executive may have seen the KF series processors with faulty graphics selling relatively well and directed engineering to find a way to sell faulty CPUs where the graphics are still functional (whether or not that was actually feasible on a technical level).

The end result may have ended up on dev boards because only a limited quantity of chips failed in the exact way where that was possible. Or maybe the cost/performance was so poor that Intel realized the best way to cut their losses on that idea was to give away the boards that they've produced so far instead of paying to market a non-competitive product.

If it's not that, wouldn't it have been a lot cheaper and easier for Intel to just give full working CPUs with integrated graphics out for developers to use while waiting for the real GPUs to reach the market?
Did Intel put its previous generations of graphics onto dedicated boards like this, or is this the first time?
I just don't know what need Intel is trying to fill to make it worth the effort, which makes me think that either it was easy or it was a mistake. Maybe it exists simply because Ice Lake can't socket into a workstation?

I doubt it. And not always. Considering that Intel's plan is to potentially sell the DG1 as a discrete card that can work in conjunction with the iGPU, à la SLI/CFX, they would want one that uses the PCIe bridge out in developers' hands to work with and optimize for.
 
At least they have something to show for it even though it's weaker compared to the rest of the current flagships offered by both NVIDIA and AMD. Remember, greatness takes time 😉.
 
If it's not that, wouldn't it have been a lot cheaper and easier for Intel to just give full working CPUs with integrated graphics out for developers to use while waiting for the real GPUs to reach the market?
Did Intel put its previous generations of graphics onto dedicated boards like this, or is this the first time?
I just don't know what need Intel is trying to fill to make it worth the effort, which makes me think that either it was easy or it was a mistake. Maybe it exists simply because Ice Lake can't socket into a workstation?
What do you consider to be a "real" GPU? What makes you think Intel has a "full working CPU with [gen 12] integrated graphics"?

With regard to what they've done in the past, they've never produced a discrete GPU before so I don't think what they've done in the past is necessarily relevant.
 
What do you consider to be a "real" GPU? What makes you think Intel has a "full working CPU with [gen 12] integrated graphics"?

With regard to what they've done in the past, they've never produced a discrete GPU before so I don't think what they've done in the past is necessarily relevant.

Considering they were showing off Tiger Lake laptops with Xe iGPUs playing games, and those laptops will be releasing this year, I would say Intel has working CPUs with Xe iGPUs.

Intel actually has produced discrete GPUs in the past. The i740 was, technically, the first AGP-based GPU, and it was released to market:

https://www.anandtech.com/show/202

It was reviewed but it sucked.

And while Larrabee didn't amount to anything for consumers, it did become a GPGPU in the HPC space, so Intel has experience in both markets it intends to release in.