News: Nvidia's DLSS tech now in over 760 games and apps — native and override DLSS 4 support has broad reach

As much as I dislike games leaning on AI features like DLSS, and people and publications claiming things like 4K120 at max details on a midrange card with DLSS, it also helps cards age more gracefully as it becomes more widespread. That matters in the age of near-$1,000 midrange cards, and it's a boon to consoles and handhelds with limited GPUs.

As for AMD, all they have to do is flood the market with cards costing half as much as Nvidia's and their market share will boom, as will FSR adoption, like they did with the HD 4000 and 5000 series cards. Undercutting Nvidia by $100 doesn't work in the age of AI, and not having DLSS support matters quite a bit more in the long run than not having PhysX.

Also, yes, DLSS is an algorithm, but in 2025 any algorithm can be called AI.
 
I read it another way.
"Nvidia's helped devs make over 700 poorly optimized games"

DLSS was made to help lower-powered machines run games... which is great.

Devs, however, didn't do that and instead used it to save money and brute-force games with higher requirements.
From the "MSAA is just cutting corners from proper SSAA!", "mip-mapping isn't rendering all your texture texels at full resolution, CHEATING!" and "VRR is jsut a crutch for not meeting [insert arbitrary performance threshold here]!" departments.

Games have performed well and performed poorly long before DLSS existed, and will continue to do so.
 
As for AMD, all they have to do is flood the market with cards costing half as much as Nvidia's and their market share will boom, as will FSR adoption, like they did with the HD 4000 and 5000 series cards. Undercutting Nvidia by $100 doesn't work in the age of AI, and not having DLSS support matters quite a bit more in the long run than not having PhysX.
LOL. And all Intel has to do is flood the market with Battlemage GPUs sold at a loss and it can also gain massive market share!

Realistically, the reason AMD's RDNA 4 graphics cards cost as much as they cost is because they need to generate a profit. It could reduce prices a bit, but "half as much" as Nvidia's cards would only be viable if we're talking about the RTX 5080 — a card that's faster than AMD's top solution by a decent amount. Best-case, I think AMD could probably sell 9070 XT for $499, and 9070 for maybe $399. But margins would be razor thin in that case and it wouldn't stop scalpers and AIBs from pushing the prices higher. Witness the current going rate of $800+ for RX 9070.
 
My guess is Intel Arc breaks even after all the R&D and the cost of purchasing relatively large dies from TSMC.
This is acceptable from the standpoint that Arc (Xe cores) is a key component of their mobile chips.
The Lunar Lake iGPU also manages to beat the Ryzen Z1E by a hair.

As for DLSS, when used properly it's not a crutch.
However, it's quite evident that quadruple-A gaming companies, like noobisoft, have decided it's a crutch for their AAAA gaming titles, and it's an eyesore.
We can thank games like... SW: Outlaws (noobisoft) for being an eyesore.

Whereas DLSS+FG+RG (depending on the game) look just fine in Space Marine 2, Wukong, MH:Wilds, Indiana Jones: GC, Alan Wake 2, etc.
Although some of those games still have LoD pop-in issues despite not running out of VRAM (looking at you, Indiana Jones and MH:Wilds).

And then there are titles that are a mixed bag, like... Cyberpunk 2077.

I can't be bothered to go through the entire list of games.
 
Games have performed well and performed poorly long before DLSS existed, and will continue to do so.
Yes, but it was never as harmful as it is now.
Look at games like Starfield, MH Wilds, etc.

They are crap quality for what they demand, because you can brute-force it via DLSS.

Yes, devs cheaped out on optimizing in the past, but it's never been as common as it is nowadays... because DLSS makes it easier to do.

Some devs do optimize games properly (Doom Eternal & BG3 being prime examples), but it's way more common for games to have high requirements that don't feel justified because the devs cheaped out on optimizing.
 
My guess is Intel Arc breaks even after all the R&D and the cost of purchasing relatively large dies from TSMC.
This is acceptable from the standpoint that Arc (Xe cores) is a key component of their mobile chips.
The Lunar Lake iGPU also manages to beat the Ryzen Z1E by a hair.
I suspect, if Intel is being honest, it has lost billions on its GPU efforts over the past five years. I don't know how many people worked on Alchemist and Battlemage, but it's got to be hundreds, each of them probably earning in the $200K or more range. R&D for a dedicated GPU is massive.

I think the hardware for a B580 is a little bit better than break-even at $250. But the problem is you have several years of R&D that need to be covered. There's no way Intel makes more than $50 off of a B580, and even that's being VERY generous. How many would Intel need to sell at $50 net profit each to cover even $1 billion? Well, that's easy math: 20 million.
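If anyone wants to sanity-check that back-of-the-envelope number, here's a minimal sketch in Python using the same assumptions as above (a hypothetical $1 billion in sunk R&D and a generous $50 net profit per card; neither is a real Intel figure):

```python
# Rough payback math using the assumed numbers from the post above
# (hypothetical figures, not actual Intel financials).
assumed_rnd_cost = 1_000_000_000   # assumed sunk R&D, in USD
profit_per_card = 50               # generous assumed net profit per B580, in USD

cards_needed = assumed_rnd_cost / profit_per_card
print(f"Cards needed to recover R&D: {cards_needed:,.0f}")  # -> 20,000,000
```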

Does anyone — anyone!? — actually think Intel has sold more than five million Arc GPUs? I certainly don't. The Steam Hardware Survey shows "Intel(R) Arc(TM) Graphics" at 0.22% of the total, and that's going to be Alchemist laptops most likely. Maybe if every one of the 9.7% of the "Other" category of GPUs is an Arc graphics card, it might be 20 million. But considering the "Other" category has been at around 8~10 percent for as long as I can remember, I'm pretty confident it's not millions of Arc GPUs hiding in there.
 
I suspect, if Intel is being honest, it has lost billions on its GPU efforts over the past five years. I don't know how many people worked on Alchemist and Battlemage, but it's got to be hundreds, each of them probably earning in the $200K or more range. R&D for a dedicated GPU is massive.

I think the hardware for a B580 is a little bit better than break-even at $250. But the problem is you have several years of R&D that need to be covered. There's no way Intel makes more than $50 off of a B580, and even that's being VERY generous. How many would Intel need to sell at $50 net profit each to cover even $1 billion? Well, that's easy math: 20 million.

Does anyone — anyone!? — actually think Intel has sold more than five million Arc GPUs? I certainly don't. The Steam Hardware Survey shows "Intel(R) Arc(TM) Graphics" at 0.22% of the total, and that's going to be Alchemist laptops most likely. Maybe if every one of the 9.7% of the "Other" category of GPUs is an Arc graphics card, it might be 20 million. But considering the "Other" category has been at around 8~10 percent for as long as I can remember, I'm pretty confident it's not millions of Arc GPUs hiding in there.
Much of that R&D will be recovered via integrated graphics. Net profit targets are typically around 20% company-wide (per cost centre, rather), so they could indeed be taking a net loss on Arc whilst making it up in other areas. I work for a very large international company, and we will absolutely take a hit to break into a market, as long as we can hit that magic 20%.
 
Much of that R&D will be recovered via integrated graphics. Net profit targets are typically around 20% company-wide (per cost centre, rather), so they could indeed be taking a net loss on Arc whilst making it up in other areas. I work for a very large international company, and we will absolutely take a hit to break into a market, as long as we can hit that magic 20%.
That's the big question, though: How much did Intel expand its graphics division to create Ponte Vecchio, Alchemist, and Battlemage? How much of the additional cost was purely for dGPU as opposed to the iGPU variants? And the Ponte Vecchio sequel got axed. That was a huge cost I'm sure. Intel is definitely trying to catch up, and maybe it still can, but I don't believe for an instant that the dedicated GPUs have been a success.

Even Battlemage B580, which is much better overall than Alchemist, is not doing that well. There aren't enough of them to go around, prices are higher than expected, and the result is that Intel isn't making as many as it needs to make. I'd really love to know how many BMG-G21 wafers Intel ordered from TSMC! I suspect even 10,000 is probably higher than the real number, but who knows?

(At 272 mm² per chip, that's a maximum of around 212 chips per wafer, which would mean up to 2.1 million BMG-G21 chips if Intel did 10K wafers... but I suspect the real number might be more like a couple thousand wafers. This is, however, just a seat-of-the-pants guess. I'm skeptical that even Nvidia has done 10K wafers for any of the Blackwell RTX chips! Long-term it will do that many, but short-term it's probably 5K or less for GB202/GB203, it seems.)
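For anyone curious where a number like ~212 candidates per wafer comes from, here's a rough sketch using a common dies-per-wafer approximation (300 mm wafer, 272 mm² die, ignoring defect yield and exact die dimensions; the formula is a generic estimate, not anything specific to TSMC or Intel):

```python
import math

# Generic dies-per-wafer approximation: wafer area divided by die area,
# minus an edge-loss term. Ignores defect yield and die aspect ratio.
wafer_diameter_mm = 300.0
die_area_mm2 = 272.0

gross_dies = math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
dies_per_wafer = gross_dies - edge_loss
print(f"Die candidates per wafer: {dies_per_wafer:.0f}")  # ~220, same ballpark as the ~212 above

wafers = 10_000
print(f"Chips from {wafers:,} wafers: {wafers * dies_per_wafer:,.0f}")  # roughly 2.2 million
```

Which lands in the same ballpark as the ~2.1 million figure above, before accounting for defective dies.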
 
LOL. And all Intel has to do is flood the market with Battlemage GPUs sold at a loss and it can also gain massive market share!

Realistically, the reason AMD's RDNA 4 graphics cards cost as much as they cost is because they need to generate a profit. It could reduce prices a bit, but "half as much" as Nvidia's cards would only be viable if we're talking about the RTX 5080 — a card that's faster than AMD's top solution by a decent amount. Best-case, I think AMD could probably sell 9070 XT for $499, and 9070 for maybe $399. But margins would be razor thin in that case and it wouldn't stop scalpers and AIBs from pushing the prices higher. Witness the current going rate of $800+ for RX 9070.

Yes, actually, they're going to have to sell them at a loss in order to gain market share and get their technologies supported more broadly. In any given month they have around a 15% share on the Steam hardware survey compared to Nvidia's 75% (right now it's 17% vs 75%), and with that much of a gap there's zero reason for any game studio to go with anything but Nvidia technologies.

Now is also logically the best time for them to do it: the enterprise market is making gobs of profit and can easily cover any loss on gaming cards, and the RTX 5000 series is fairly lackluster versus the 4000 series and seems to constantly receive negative press, so Nvidia is more vulnerable now than it has been since all those years ago when AMD pulled the HD 4870 out of nowhere. Once AMD has more parity in market share they can raise prices back up, assuming performance is also at parity. And once the enterprise market starts drying up, and it will because AI will move from GPUs to more efficient dedicated accelerators at some point, it will be harder to justify a hole in the balance sheet.

It's basically the old "loss leader" tactic, and AMD is in the strongest position it has been in for a long time to play it to its fullest. The biggest issue would be having to make retailers actually sell the cards at those prices rather than taking exorbitant markups for profit.
 
From the "MSAA is just cutting corners from proper SSAA!", "mip-mapping isn't rendering all your texture texels at full resolution, CHEATING!" and "VRR is jsut a crutch for not meeting [insert arbitrary performance threshold here]!" departments.
I know it's not the point of your post, but the first half is exactly what I think when I see people refer to DLSS and FG as "fake frames", yet never called anti-aliasing "fake straight lines" or anisotropic filtering "fake angled perspective", etc.

Having grown up in the '90s and watched graphics and their rendering change, it feels like most supplemental graphics options are workarounds to make something APPEAR as though it's perfectly rendered, when in reality several "fake" processes are using tricks to improve visual fidelity, and I think upscaling and frame generation tools aren't much different.
 
I know it's not the point of your post, but the first half is exactly what I think when I see people refer to DLSS and FG as "fake frames", yet never called anti-aliasing "fake straight lines" or anisotropic filtering "fake angled perspective", etc.

Having grown up in the '90s and watched graphics and their rendering change, it feels like most supplemental graphics options are workarounds to make something APPEAR as though it's perfectly rendered, when in reality several "fake" processes are using tricks to improve visual fidelity, and I think upscaling and frame generation tools aren't much different.
That was actually the point of my post: All the frames are fake; after all, the GPU just makes them up on the spot. All real-time rendering techniques are a teetering tower of hacks designed to avoid performing any rendering that is not absolutely necessary.
 