You need specialized hardware to run neural networks effectively and efficiently. It's a hardware solution that can't easily be replicated in software.
I said you'd probably need the equivalent of tensor cores. Given those, yes you can abstract it to the level of "software" that runs on a GPU.
Take video encoding... Sure, you can do it in software on a CPU, but it's considerably slower than a hardware encoder like the one built into all modern GPUs.
It's not as if Nvidia GPUs have a 'DLSS' instruction. They simply have tensor cores, and DLSS makes use of them in a filter stage that runs at the end of the rendering pipeline. Don't mystify it.
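To make that concrete, here's a minimal sketch of what programming tensor cores actually looks like, using CUDA's public WMMA API. This is my own illustration, not anything from DLSS itself (Nvidia doesn't publish DLSS source); the point is just that the hardware exposes generic 16x16x16 matrix multiply-accumulate tiles, and everything "DLSS" is ordinary software built on operations like this:

```cuda
#include <mma.h>        // CUDA's warp matrix (WMMA) API for tensor cores
#include <cuda_fp16.h>  // half-precision float type

using namespace nvcuda;

// One warp computes one 16x16 output tile as D = A*B + C on tensor cores.
// There is no "DLSS instruction" here, just a generic matrix op that any
// neural-network layer (or anything else) can be built from.
// Requires a GPU with tensor cores, e.g. compiled with: nvcc -arch=sm_70
__global__ void tensor_core_tile(const half* a, const half* b, float* c) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

    wmma::fill_fragment(acc_frag, 0.0f);           // accumulator starts at zero
    wmma::load_matrix_sync(a_frag, a, 16);         // 16 = leading dimension
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);  // the tensor-core op
    wmma::store_matrix_sync(c, acc_frag, 16, wmma::mem_row_major);
}

// Launched with at least one full warp: tensor_core_tile<<<1, 32>>>(a, b, c);
```

A real upscaler chains thousands of tiles like this per frame, but that's the whole trick: fast fp16 matrix math, scheduled by ordinary software.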
If Nvidia owns the IP, the solution there is to license it from Nvidia at, say, $10 or $20 per GPU that uses it.
No, there's no mechanism forcing them to license it. That's entirely at their discretion.
Even a lot of the so-called open standards are licensed and paid for by the hardware vendors.
Yes, there's a lot of confusion about this point. "Open" doesn't necessarily mean "free". It just means that it's open for all to see.
Most people don't realize it, but every manufacturer that made Compact Cassette players or tapes had to pay a per-unit licensing fee.
My understanding is that HDMI has rather high royalties, while DisplayPort has low or no royalties. It's really up to each standards organization how it funds itself and pays the patent holders with IP in the pool. HDMI comes from the HDMI Forum, while DisplayPort comes from VESA. The point is that different organizations can have different licensing policies.
None of these is like DLSS, which is an Nvidia-proprietary technology, not an open standard!
The same thing is true for CD-ROM, DVD, and Blu-ray. There are literally dozens, if not hundreds, of other examples. (For instance, all the Dolby and dbx technologies are licensed.)
Just about every example you cited depends on an ecosystem of hardware and software in order to be successful. Nvidia dominates the GPU hardware market to such a degree that it doesn't need other vendors' GPUs to support DLSS in order to attract software developers to it.
What no one talks about is the tens of billions of dollars Nvidia has sunk into developing these technologies since 2011.
Eh, because there's not a lot to be said about it?
They took a huge risk with CUDA, and again with AI neural networks, that could very well have bankrupted them if those bets hadn't panned out as well as they have.
They made a gamble and executed it well. I believe Jensen when he says they had no sort of "5-year plan". They just picked a direction and went with it. It paid off. Don't make it into something it's not.
Risk deserves reward. Without the reward, there's no sense in taking a risk in the first place, and everything stagnates to the lowest common denominator.
I don't even know where this is coming from. Nobody said they aren't entitled to sell DLSS, or CUDA for that matter. Nobody has to use them, though, regardless of how much Nvidia sunk into them.
On the flip side, if I'm a game developer, I still want my software to run well on all GPUs, and I don't want to waste effort on one implementation for Nvidia, one for AMD, and one for Intel, to say nothing of Qualcomm. That's why we have things like Direct3D and Vulkan, which were created to abstract away differences between hardware, and why software vendors naturally want a hardware-independent solution to upsampling.
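As a rough illustration of what that fragmentation costs on the engine side, consider the sketch below. Every type and function name here is made up for the example (the real vendor SDKs all differ); the point is that the renderer wants to code against one abstraction, and each vendor-specific upscaler is another backend to integrate, test, and maintain:

```cpp
#include <memory>
#include <string>

// Hypothetical engine-side types; none of these names are real SDK calls.
struct Frame { /* color, depth, motion vectors, ... */ };

// The single interface the renderer would like to code against.
class Upscaler {
public:
    virtual ~Upscaler() = default;
    virtual Frame upscale(const Frame& low_res) = 0;
};

// Without a common standard, each vendor path is a separate integration.
class DlssUpscaler : public Upscaler {   // would wrap Nvidia's SDK
public:
    Frame upscale(const Frame& f) override { return f; }
};
class FsrUpscaler : public Upscaler {    // would wrap AMD's SDK
public:
    Frame upscale(const Frame& f) override { return f; }
};
class XessUpscaler : public Upscaler {   // would wrap Intel's SDK
public:
    Frame upscale(const Frame& f) override { return f; }
};

// Three code paths to ship, test, and keep updated: exactly the burden
// that a hardware-independent standard would remove.
std::unique_ptr<Upscaler> pick_upscaler(const std::string& vendor) {
    if (vendor == "NVIDIA") return std::make_unique<DlssUpscaler>();
    if (vendor == "AMD")    return std::make_unique<FsrUpscaler>();
    return std::make_unique<XessUpscaler>();
}
```

Multiply that by every engine and every title, and it's clear why developers push for one standard rather than three SDKs.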