News: Intel's Upcoming DG2 Rumored to Compete With RTX 3070

"Intel doesn't need a DLSS alternative..."
Intel has been open-sourcing rendering and ray tracing software and converting it to work with oneAPI. They may want to write generic DPC++ code for a DLSS-style feature, but take better advantage of device-specific features in their GPU Level Zero code.
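For what it's worth, a minimal sketch of what "generic DPC++ code" means in practice could look like the following: a plain SYCL 2020 kernel that any oneAPI device can run. The nearest-neighbor upscale, buffer names and resolutions are my own illustrative assumptions, not anything Intel has shipped.
```cpp
// Minimal SYCL 2020 sketch: a naive nearest-neighbor upscale kernel written as
// "generic DPC++" so it runs on any oneAPI device (CPU, Intel GPU, etc.).
// All names and dimensions are illustrative assumptions, not Intel code.
#include <sycl/sycl.hpp>

int main() {
    const int srcW = 960,  srcH = 540;    // assumed low-res input
    const int dstW = 1920, dstH = 1080;   // assumed high-res output

    sycl::queue q;                        // picks a default oneAPI device
    float* src = sycl::malloc_shared<float>(srcW * srcH, q);
    float* dst = sycl::malloc_shared<float>(dstW * dstH, q);
    q.fill(src, 0.5f, srcW * srcH).wait();  // dummy input data

    q.parallel_for(sycl::range<2>(dstH, dstW), [=](sycl::id<2> idx) {
        int y = idx[0], x = idx[1];
        int sx = x * srcW / dstW;         // nearest source sample
        int sy = y * srcH / dstH;
        dst[y * dstW + x] = src[sy * srcW + sx];
    }).wait();

    sycl::free(src, q);
    sycl::free(dst, q);
}
```
The same source could then, in principle, be specialized further for Intel hardware through Level Zero, which is how I read the post above.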
 
"Intel doesn't need a DLSS alternative..."
Intel has been open-sourcing rendering and ray tracing software and converting it to work with oneAPI. They may want to write generic DPC++ code for a DLSS-style feature, but take better advantage of device-specific features in their GPU Level Zero code.
DLSS is still a proprietary Nvidia solution that uses Nvidia's tensor cores, so Intel would need to build an alternative from the ground up, then get developers to use it. That's why it would be easier to support FSR, since that will have AMD behind it and is presumably well under way in development. Of course, we still need to see how FSR looks and performs -- on AMD and Nvidia GPUs. But despite having the "most GPUs" in the wild, Intel gets very little GPU support from game devs, since all of its current GPUs are slow integrated solutions.
 
DLSS is still a proprietary Nvidia solution that uses Nvidia's tensor cores, so Intel would need to build an alternative from the ground up, then get developers to use it.
But Intel's new GPUs do have some kind of AI "core" or accelerator block, so the hardware is AI-capable. The issue would be doing all the training needed to make the results look good, assuming the AI is good/fast enough in the first place.
Does this even need any input from developers?!
This is just taking whatever resolution and upscaling it, right?!
 
DLSS is still a proprietary Nvidia solution that uses Nvidia's tensor cores, so Intel would need to build an alternative from the ground up, then get developers to use it. That's why it would be easier to support FSR, since that will have AMD behind it and is presumably well under way in development. Of course, we still need to see how FSR looks and performs -- on AMD and Nvidia GPUs. But despite having the "most GPUs" in the wild, Intel gets very little GPU support from game devs, since all of its current GPUs are slow integrated solutions.
Only nVidia's implementation is proprietary, not the underlying concept. And of course it has to be that way, because this is exactly what AMD (maybe together with Microsoft; see DirectML) is trying to achieve with its Super Resolution alternative.
Simply looking at a single, isolated frame won't work; the AI-based algorithm has to take multiple frames and motion vectors into account to be competitive with DLSS 2.x.
Additionally, the use of Tensor Cores is only optional. nVidia restricts DLSS to them for marketing reasons and because they speed up the necessary AI calculations, but early beta implementations of DLSS 2, for example, ran solely on the ALUs/CUDA cores.
AMD will have to process these calculations on its ALUs as well, because RDNA2 has no special hardware or ISA extensions for this.
Intel's Xe has DP4a for speeding up inferencing workloads by a large amount. It is not dedicated hardware, but this ISA extension still provides a significant performance gain for those types of workloads.
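To make that concrete, DP4a is a packed four-way int8 dot product with accumulate. A scalar reference of the operation (my own model for illustration, not Intel's intrinsic or the actual ISA definition):
```cpp
#include <cstdint>
#include <cstdio>

// Scalar reference for a signed DP4a-style operation: treat two 32-bit words
// as four packed int8 lanes, multiply lane-wise, sum, and add to an accumulator.
int32_t dp4a_ref(uint32_t a, uint32_t b, int32_t acc) {
    for (int lane = 0; lane < 4; ++lane) {
        int8_t av = static_cast<int8_t>((a >> (8 * lane)) & 0xFF);
        int8_t bv = static_cast<int8_t>((b >> (8 * lane)) & 0xFF);
        acc += static_cast<int32_t>(av) * static_cast<int32_t>(bv);
    }
    return acc;
}

int main() {
    // Packed lanes {1,2,3,4} and {10,20,30,40}: 1*10 + 2*20 + 3*30 + 4*40 = 300.
    uint32_t a = 0x04030201, b = 0x281E140A;
    printf("%d\n", static_cast<int>(dp4a_ref(a, b, 0)));  // prints 300
}
```
Doing that whole inner loop in a single instruction is where the int8 inference speedup over plain FP32 ALU code comes from.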

Notes on some common misconceptions:
a) DLSS 2.x needs no special per-game treatment. The technology has to be implemented in the engine, and it then runs with the unified NN out of the box.
Note that there is already a universal DLSS 2 implementation for Unreal Engine 4, so its usage should continue to increase now that it is available to every UE developer.
b) DLSS and competing technologies should have similar requirements with regard to game engines, so it should be quite easy for a developer who has already implemented one technology to add another (see the interface sketch after these notes).
c) If Intel uses a completely different name like "XeSS", then this is most likely their own technology and not an adoption of AMD's Super Resolution. That is also plausible because Intel has more manpower and more AI expertise than AMD, so there may be no need for them to wait for AMD to deliver something production-ready ... eventually.
d) Special hardware like Tensor Cores, or ISA extensions like DP4a, is strictly optional for AI processing. The regular ALUs are sufficient, but a more specialized design will be faster here, so having this functionality is an advantage.
For example, Microsoft's DirectML also tries to utilize Tensor Cores (and similar hardware) if the GPU provides them.
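To illustrate b), here is a hypothetical sketch of the kind of engine-side abstraction that makes a second or third upscaler cheap once the first one is wired up. Every type, class and field name below is invented for illustration and is not part of any real SDK.
```cpp
#include <memory>

// Hypothetical inputs a temporal upscaler needs from the engine: low-res color,
// depth, per-pixel motion vectors, and the camera jitter.
struct UpscaleInputs {
    const float* color = nullptr;
    const float* depth = nullptr;
    const float* motionVectors = nullptr;
    float jitterX = 0.0f, jitterY = 0.0f;
    int srcWidth = 0, srcHeight = 0;
    int dstWidth = 0, dstHeight = 0;
};

// The engine codes against one abstract interface ...
class ITemporalUpscaler {
public:
    virtual ~ITemporalUpscaler() = default;
    virtual void evaluate(const UpscaleInputs& in, float* outColor) = 0;
};

// ... and each vendor technology becomes a thin adapter that would translate
// UpscaleInputs into the vendor SDK's own calls (bodies omitted).
class DlssAdapter : public ITemporalUpscaler {
public:
    void evaluate(const UpscaleInputs& in, float* outColor) override { /* Nvidia SDK call */ }
};
class FsrAdapter : public ITemporalUpscaler {
public:
    void evaluate(const UpscaleInputs& in, float* outColor) override { /* AMD SDK call */ }
};
class XessAdapter : public ITemporalUpscaler {
public:
    void evaluate(const UpscaleInputs& in, float* outColor) override { /* Intel SDK call */ }
};

// Once the inputs are exported, switching technology is a factory choice,
// not a new engine integration.
enum class UpscalerKind { Dlss, Fsr, Xess };

std::unique_ptr<ITemporalUpscaler> makeUpscaler(UpscalerKind kind) {
    switch (kind) {
        case UpscalerKind::Dlss: return std::make_unique<DlssAdapter>();
        case UpscalerKind::Fsr:  return std::make_unique<FsrAdapter>();
        default:                 return std::make_unique<XessAdapter>();
    }
}
```
The expensive part is producing the inputs (motion vectors, depth, jitter); once an engine exports those for one technology, the rest is adapter code.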
 
But Intel's new GPUs do have some kind of AI "core" or accelerator block, so the hardware is AI-capable. The issue would be doing all the training needed to make the results look good, assuming the AI is good/fast enough in the first place.
Does this even need any input from developers?!
This is just taking whatever resolution and upscaling it, right?!
Unless Intel (and/or AMD) builds a DLSS alternative that doesn't require direct game support -- meaning, it's built into the drivers -- it would need dev support. It's 'intelligent' upscaling that's supposed to anti-alias as well as enhance details. What's really happening in the code? That's a bit harder to determine. Nvidia knows that it's running on the Tensor cores, but it's a weighted network that was trained by feeding it a bunch of data. Could that network run on AMD and Intel hardware? Theoretically, but right now it's part of Nvidia's drivers and Nvidia wouldn't want to provide any help to its competitors.
 
Having more video cards on the market ought to be helpful to the current situation with supply issues.
Sure ... except they're being manufactured at TSMC, which is already maxed out on capacity. So for every DG2 wafer made, some other wafer can't be made. If DG2 really is N6, it's perhaps less of a problem, but TSMC is still tapped out.
 
Notes on some common misconceptions:
a) DLSS 2.x needs no special per-game treatment. The technology has to be implemented in the engine, and it then runs with the unified NN out of the box.
Note that there is already a universal DLSS 2 implementation for Unreal Engine 4, so its usage should continue to increase now that it is available to every UE developer.
If I recall correctly, the only thing that DLSS 2.0 requires now is that the game engine supports some form of temporal anti-aliasing since it uses previous frames to reconstruct details in current frames.
 
I'd like to see the silly rumors stop--didn't people get tired of the last round of idiotic rumors about discrete Intel GPUs, which ended up competitive with nothing more than the very bottom of the value market? Intel is quite a way behind AMD in CPUs atm, and the distance Intel is behind AMD and nVidia in the GPU market is practically incalculable...😉
 
DLSS is still a proprietary Nvidia solution that uses Nvidia's tensor cores, so Intel would need to build an alternative from the ground up, then get developers to use it. That's why it would be easier to support FSR, since that will have AMD behind it and is presumably well under way in development. Of course, we still need to see how FSR looks and performs -- on AMD and Nvidia GPUs. But despite having the "most GPUs" in the wild, Intel gets very little GPU support from game devs, since all of its current GPUs are slow integrated solutions.
If Intel gets a DLSS equivalent out that allows their Xe-level iGPUs to run modern games at 1080p 60 fps with good quality settings, they will get developer support. If basically every laptop Intel sold were suddenly a realistically usable gaming machine, that would be a game changer (no pun intended). Same thing for desktop CPUs. Intel's desktop iGPUs are weaker than the mobile versions, but they will continue to get faster over time. If every Dell desktop sold could be a decent low-end gaming machine, developers will support it if Intel gives them the necessary support. The target for these technologies should not be extreme fringe gamers trying to game at 8K. It should be aimed at bringing up the masses at the low end and raising their floor.
 
If the rumors of 46 TFLOPS of FP64 are true, Intel may have leapfrogged both NVDA and AMD.
Yeah, because we know how accurate on-paper specs are when it comes to how many TFLOPS a video card does. Hasn't AMD (pre-RDNA/RDNA2) always been given flak for this? TFLOPS on par with or better than Nvidia's, but when the card is released, it doesn't compete.
Until Intel's cards are out and have been tested/reviewed by Toms/AT/GN/etc., it's just speculation and rumour.
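For context on why the paper figure means little: peak TFLOPS is just lane count × 2 FLOPs per FMA × clock. A quick sketch with made-up numbers (nothing below is a real spec for DG2, Xe-HPC, or anything else):
```cpp
#include <cstdio>

int main() {
    // Hypothetical example values only; not real specs for any Intel part.
    const double fp_lanes      = 4096;  // FP units issuing one FMA per clock
    const double clock_ghz     = 1.8;   // boost clock in GHz
    const double flops_per_fma = 2;     // a fused multiply-add counts as 2 FLOPs

    const double tflops = fp_lanes * flops_per_fma * clock_ghz / 1000.0;
    printf("peak: %.1f TFLOPS\n", tflops);  // 4096 * 2 * 1.8 / 1000 = ~14.7

    // This says nothing about game performance: utilization, drivers, memory
    // bandwidth and architecture decide how much of the peak is ever reached.
}
```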
 
If I recall correctly, the only thing that DLSS 2.0 requires now is that the game engine supports some form of temporal anti-aliasing since it uses previous frames to reconstruct details in current frames.
I don't think that DLSS relies on any other AA method. ;-)
The only requirement, and that's the reason why it needs to be explicitly implemented, is that the engine has to export certain data, e.g. motion vectors, for the algorithm to work.
The neural network is trained and improved by nVidia (with generic data), and an implementation in a game engine can use the generalized NN out of the box; no special training is needed for a specific title or even individual levels (as it was in v1). Of course, a developer can fine-tune/train a special version of the NN for their title, but in most cases I would expect developers to skip such additional effort.
Additionally, as long as the API hasn't changed, i.e. it's still DLSS 2.x with the same input data requirements, nVidia can ship an updated NN with a newer driver and it will improve rendering for all v2.x games, provided the developer allows an update (meaning they have not implemented something specialized in their DLSS integration for a particular game).
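As a rough picture of why the engine must export motion vectors at all, here is a toy temporal-reprojection step in plain C++. It is a hand-rolled simplification under my own assumptions; real DLSS replaces the fixed blend with a learned network and works on full color data.
```cpp
#include <algorithm>
#include <vector>

// Toy temporal accumulation. Motion vectors tell us where each pixel was in the
// previous frame, so history can be reused instead of relying only on the
// current low-res sample. Illustration only, not DLSS's actual algorithm.
struct Frame {
    int width = 0, height = 0;
    std::vector<float> color;    // current frame samples (single channel)
    std::vector<float> motionX;  // per-pixel motion in pixels, to previous frame
    std::vector<float> motionY;
};

void temporalAccumulate(const Frame& cur, const std::vector<float>& history,
                        std::vector<float>& out, float blend = 0.9f) {
    out.resize(cur.color.size());
    for (int y = 0; y < cur.height; ++y) {
        for (int x = 0; x < cur.width; ++x) {
            const int i = y * cur.width + x;
            // Reproject: where was this pixel last frame?
            int px = std::clamp(int(x + cur.motionX[i] + 0.5f), 0, cur.width - 1);
            int py = std::clamp(int(y + cur.motionY[i] + 0.5f), 0, cur.height - 1);
            const float hist = history[py * cur.width + px];
            // Blend old and new; a learned model would decide this per pixel.
            out[i] = blend * hist + (1.0f - blend) * cur.color[i];
        }
    }
}
```
Without the engine exporting those motion vectors, the history lookup has nowhere to aim, which is exactly why DLSS 2.x can't be a purely driver-side feature.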

Yeah, because we know how accurate on-paper specs are when it comes to how many TFLOPS a video card does. Hasn't AMD (pre-RDNA/RDNA2) always been given flak for this? TFLOPS on par with or better than Nvidia's, but when the card is released, it doesn't compete.
Until Intel's cards are out and have been tested/reviewed by Toms/AT/GN/etc., it's just speculation and rumour.
THW will never test these "cards", because JayNor was referring to Xe-HPC. That is a high-end HPC/datacenter design, not a gaming GPU.
Additionally, Ponte Vecchio is a huge design and it is quite obvious that it will outperform a "simple" monolithic design like the (G)A100, which is why nVidia is working on Hopper as a massive MCM design for 2022.
 