DLSS is still a proprietary Nvidia solution that uses Nvidia's tensor cores, so Intel would need to build an alternative from the ground up, then get developers to use it. Which is why it would be easier to support FSR, since that will have AMD behind it and is presumably well under way in development. Of course, we still need to see how FSR looks and performs -- on AMD and Nvidia GPUs. But despite having the "most GPUs" in the wild, Intel gets very little GPU support from game devs, since all of its current GPUs are slow integrated solutions.
Only nVidia's implementation is proprietary, not the underlying concept itself. And of course it cannot be otherwise, because this is exactly what AMD (perhaps together with Microsoft; see DirectML) is trying to achieve with its Super Resolution alternative.
Simply looking at a single, isolated frame won't work; the AI-based algorithm has to take multiple frames and motion vectors into account to be competitive with DLSS 2.x.
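To make that concrete, here is a minimal C++ sketch of the per-frame inputs such a temporal upscaler consumes; all type and function names are hypothetical and not any vendor's actual API. The point is that it is fed the current low-resolution frame plus motion vectors, depth and the accumulated history, not a single isolated image.

```cpp
// Hypothetical sketch -- every name here is made up for illustration.
#include <cstdint>
#include <vector>

struct Texture2D {                    // stand-in for a GPU texture handle
    uint32_t width = 0, height = 0;
    std::vector<float> data;          // placeholder storage
};

struct UpscaleInputs {
    Texture2D lowResColor;            // current frame, rendered at reduced resolution
    Texture2D motionVectors;          // per-pixel screen-space motion from the engine
    Texture2D depth;                  // depth buffer, e.g. for disocclusion handling
    Texture2D history;                // previously upscaled frame, fed back each frame
    float     jitterX = 0.0f;         // sub-pixel camera jitter used this frame
    float     jitterY = 0.0f;
};

// A single-image upscaler would only see 'lowResColor'; the temporal,
// NN-based approach reprojects 'history' via 'motionVectors' and decides
// per pixel how much of the old data can be reused.
Texture2D temporalUpscale(const UpscaleInputs& in, uint32_t outW, uint32_t outH) {
    (void)in;                         // inputs would feed reprojection + inference here
    Texture2D out;
    out.width  = outW;
    out.height = outH;
    out.data.resize(static_cast<size_t>(outW) * outH * 4); // RGBA placeholder
    return out;
}
```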
Additionally, the use of Tensor Cores is only optional. nVidia uses them (and restricts DLSS to them) for marketing reasons and because they speed up the necessary AI calculations, but early beta implementations of DLSS 2, for example, ran solely on the regular ALUs/CUDA cores.
AMD will have to run these calculations on its ALUs as well, because RDNA2 has neither dedicated hardware nor ISA extensions for this.
Intel's Xe has DP4a, which speeds up inference workloads considerably. It is not dedicated hardware, but this ISA extension still provides a significant performance gain for those types of workloads.
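For illustration, here is a plain C++ reference model of what a single DP4a operation computes: a dot product of four packed 8-bit values, accumulated into a 32-bit integer. The function name and packing are mine, not Intel's actual implementation; the speed-up for int8-quantized inference comes from replacing four multiplies and four adds with one instruction.

```cpp
// Scalar reference model of a DP4a-style operation (not real hardware code).
#include <cstdint>
#include <cstdio>

int32_t dp4a(uint32_t packedA, uint32_t packedB, int32_t acc) {
    // Unpack four signed 8-bit lanes from each 32-bit word and accumulate
    // their products -- the work one DP4a instruction does at once.
    for (int i = 0; i < 4; ++i) {
        int8_t a = static_cast<int8_t>((packedA >> (8 * i)) & 0xFF);
        int8_t b = static_cast<int8_t>((packedB >> (8 * i)) & 0xFF);
        acc += static_cast<int32_t>(a) * static_cast<int32_t>(b);
    }
    return acc;
}

int main() {
    uint32_t a = 0x01020304;  // bytes (low to high): 4, 3, 2, 1
    uint32_t b = 0x05060708;  // bytes (low to high): 8, 7, 6, 5
    // 4*8 + 3*7 + 2*6 + 1*5 = 70
    std::printf("dp4a result: %d\n", dp4a(a, b, 0));
    return 0;
}
```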
Notes on some common misconceptions:
a) DLSS 2.x needs no special per-game treatment. The technology has to be implemented into the engine, and it then runs with the unified NN out of the box.
Note that a universal DLSS 2 implementation already exists for Unreal Engine 4, so its adoption should continue to increase, since it is now available to every UE developer.
b) DLSS and competing technologies should have similar requirements with regard to game engines, so a developer who has already implemented one technology should be able to add another with little effort (see the interface sketch at the end of these notes).
c) If Intel uses a completely different name like "XeSS", then it is most likely their own technology and not an adoption of AMD's Super Resolution. This is also plausible because Intel has more manpower and more AI know-how than AMD, so there might be no need for them to rely on AMD delivering something production-ready ... eventually.
d) Special hardware like Tensor Cores or ISA extensions like DP4a are always optional for AI processing. The regular ALUs are sufficient, but a more specialized chip design will be faster here, so having this functionality is an advantage.
Microsoft's DirectML, for example, also tries to utilize Tensor Cores (and comparable hardware) if the GPU provides them.
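As a conceptual sketch of that dispatch idea (hypothetical names, not DirectML's real interface): pick the fastest available execution path for the NN workload at runtime, but always keep the generic ALU/shader path as a fallback, since the dedicated hardware is optional rather than required.

```cpp
// Hypothetical capability-based dispatch -- names are made up for illustration.
#include <cstdio>

enum class InferencePath { TensorCores, Dp4a, GenericAlu };

struct GpuCaps {               // would be filled in by a real capability query
    bool hasTensorCores;
    bool hasDp4a;
};

InferencePath selectInferencePath(const GpuCaps& caps) {
    if (caps.hasTensorCores) return InferencePath::TensorCores; // fastest, if present
    if (caps.hasDp4a)        return InferencePath::Dp4a;        // ISA-assisted int8 path
    return InferencePath::GenericAlu;                           // always available
}

int main() {
    // Example: a GPU with DP4a but without dedicated matrix units.
    GpuCaps example{ /*hasTensorCores=*/false, /*hasDp4a=*/true };
    std::printf("selected path: %d\n",
                static_cast<int>(selectInferencePath(example)));
    return 0;
}
```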
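And regarding point b), here is a minimal sketch (made-up names, not any vendor's actual SDK) of how an engine can hide different upscalers behind one interface: the engine gathers the same inputs once -- low-resolution color, motion vectors, depth, jitter -- and only the backend differs per technology.

```cpp
// Hypothetical abstraction layer -- every identifier is invented for this sketch.
#include <memory>

struct FrameInputs {
    void* lowResColor;     // engine texture handles, kept abstract here
    void* motionVectors;
    void* depth;
    float jitterX, jitterY;
};

class IUpscaler {                       // common interface the engine codes against
public:
    virtual ~IUpscaler() = default;
    virtual void evaluate(const FrameInputs& in, void* output) = 0;
};

class DlssBackend : public IUpscaler {  // would wrap one vendor's SDK calls
public:
    void evaluate(const FrameInputs&, void*) override { /* ... */ }
};

class FsrBackend : public IUpscaler {   // same inputs, different backend
public:
    void evaluate(const FrameInputs&, void*) override { /* ... */ }
};

// Once the first integration exists, adding another technology is mostly
// a matter of providing one more backend behind the same interface.
std::unique_ptr<IUpscaler> createUpscaler(bool vendorSupportsDlss) {
    if (vendorSupportsDlss)
        return std::make_unique<DlssBackend>();
    return std::make_unique<FsrBackend>();
}
```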