AMD Radeon VII to Support DLSS Equivalent

fball922

Distinguished
Feb 23, 2008
179
24
18,695
I don't normally comment on the grammar of articles here, but if this was reviewed by an editor... Yikes. This paragraph in particular is a mess.

However[,] one thing it does enable is a form of anti-aliasing that has an effect[-] and performance hit[-] similar to DLSS[,] [y]et one that’s compatible with AMD’s latest hardware[,] as opposed to just specifically Nvidia’s Tensor cores. That’s a big deal, and if Radeon VII supports this as standard on launch, it could make the card far more appealing than we first gave it credit for ["initially" is redundant]. After all, multi-source standards like this[, not needed] are typically more likely to be taken up by developers than their proprietary counterparts. Thanks to AMD’s console dominance (and the next gen consoles featuring the likes of its next-gen Navi GPU)[,] AAA titles [are more likely to turn] to DX12, and in turn DirectML[,] than DLSS and DXR, despite Nvidia’s colossal market share in the dedicated graphics card [market].
 

Dosflores

Reputable
Jul 8, 2014
147
0
4,710
The Radeon VII will support DirectML, which is not a DLSS equivalent. DirectML can be used to do something like DLSS, and lots of other things. It's up to game developers to use it for something like DLSS, and we can only speculate about what its performance would be like without tensor cores.
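For anyone wondering what "it's up to game developers" actually looks like, here's a very rough sketch of the setup side of a DirectML pipeline, just to show that the developer owns the whole thing. I'm using the built-in 2D upsample operator as a stand-in for a real trained upscaling network (a DLSS-style effect would compile convolution operators with trained weights instead), and the D3D12 setup, error handling, and names like d3d12Device are assumed or omitted:

```cpp
// Sketch only: the DirectML object flow a developer would own.
// Assumes an existing ID3D12Device* from the game's renderer; error
// handling and D3D12 boilerplate are omitted.
#include <d3d12.h>
#include <DirectML.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<IDMLCompiledOperator> BuildUpscaler(ID3D12Device* d3d12Device)
{
    ComPtr<IDMLDevice> dmlDevice;
    DMLCreateDevice(d3d12Device, DML_CREATE_DEVICE_FLAG_NONE,
                    IID_PPV_ARGS(&dmlDevice));

    // Describe a 1080p RGB input tensor and a 4K output tensor (NCHW, fp16).
    UINT inSizes[4]  = { 1, 3, 1080, 1920 };
    UINT outSizes[4] = { 1, 3, 2160, 3840 };

    DML_BUFFER_TENSOR_DESC inBuf = {};
    inBuf.DataType = DML_TENSOR_DATA_TYPE_FLOAT16;
    inBuf.DimensionCount = 4;
    inBuf.Sizes = inSizes;
    inBuf.TotalTensorSizeInBytes = 1ull * 3 * 1080 * 1920 * sizeof(uint16_t);

    DML_BUFFER_TENSOR_DESC outBuf = inBuf;
    outBuf.Sizes = outSizes;
    outBuf.TotalTensorSizeInBytes = 1ull * 3 * 2160 * 3840 * sizeof(uint16_t);

    DML_TENSOR_DESC inputTensor  = { DML_TENSOR_TYPE_BUFFER, &inBuf };
    DML_TENSOR_DESC outputTensor = { DML_TENSOR_TYPE_BUFFER, &outBuf };

    // Built-in 2x upsample stands in for a trained network; a DLSS-like
    // effect would chain DML_CONVOLUTION_OPERATOR_DESC nodes instead.
    DML_UPSAMPLE_2D_OPERATOR_DESC upsample = {};
    upsample.InputTensor = &inputTensor;
    upsample.OutputTensor = &outputTensor;
    upsample.ScaleSize = { 2, 2 };
    upsample.InterpolationMode = DML_INTERPOLATION_MODE_LINEAR;

    DML_OPERATOR_DESC opDesc = { DML_OPERATOR_UPSAMPLE_2D, &upsample };

    ComPtr<IDMLOperator> op;
    dmlDevice->CreateOperator(&opDesc, IID_PPV_ARGS(&op));

    // Compilation is where the runtime/driver decides how to run the math,
    // on plain shader cores or on whatever acceleration the vendor exposes.
    ComPtr<IDMLCompiledOperator> compiled;
    dmlDevice->CompileOperator(op.Get(),
                               DML_EXECUTION_FLAG_ALLOW_HALF_PRECISION_COMPUTATION,
                               IID_PPV_ARGS(&compiled));
    return compiled;
}
```

Nothing in there requires tensor cores, as far as I can tell; the CompileOperator step is where the work gets mapped onto whatever the GPU actually offers, which is exactly why the performance question is still open.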
 
There goes DLSS's life span... six months... congrats, Nvidia. There's no way Nvidia will open-source this proprietary tech, since it needs their hardware acceleration and so on, and they're too proud... meanwhile, developers will have software options available on consoles with AMD hardware...

Makes sense, since the RVII is an Instinct MI50. It was made for AI and compute loads.

Thank you, Nvidia, for another bunch of useless features, the price-gouging scam, and the brainwashing that makes people believe RTX is something... when it's nothing special.
 

AnimeMania

Distinguished
Dec 8, 2014
334
18
18,815
Tech like DirectML and DLSS can change the way PC gamers and reviewers evaluate graphics cards. Resolution and frames per second will no longer be the standard; perceived image quality will be. Maybe graphics cards will start asking what resolution and fps you want, then deliver that through scaling and artificial-intelligence interpolation. That will make apples-to-apples comparisons of video cards hard, since the visual parameters can change, especially over time as the cards get better at image recognition. This should give AMD a distinct advantage, since it works with many console game developers, and those developers have been using these kinds of tactics for years to get the highest framerates and best-quality graphics when the hardware is the limiting factor.
 

crystaldragon141

Reputable
Jan 5, 2015
13
0
4,510
Let me start by saying that I am an AMD fanboy, so I don't get raked over the coals for what I'm going to say next. The entire premise of this article is misleading. DirectML is not an analogue to DLSS. It is a machine learning API that could be leveraged to create a platform-agnostic equivalent to DLSS. Saying that DirectML is an alternative to DLSS is frankly wrong.
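To make "leveraged" concrete: with DLSS the driver does the inference behind a single call, while with DirectML the engine records the inference into its own D3D12 command list every frame. Here's a rough sketch of that per-frame side, assuming a compiled operator, descriptor heap, and buffers were created elsewhere (all the names below are placeholders, and one-time operator initialization, resource barriers, and error handling are left out):

```cpp
// Sketch only: recording one DirectML inference dispatch into a frame's
// D3D12 command list. All resources are assumed to exist already.
#include <d3d12.h>
#include <DirectML.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void RecordUpscale(IDMLDevice* dmlDevice,
                   IDMLCompiledOperator* upscaler,
                   ID3D12GraphicsCommandList* cmdList,
                   ID3D12DescriptorHeap* heap,       // shader-visible CBV/SRV/UAV heap
                   ID3D12Resource* lowResColor,      // rendered 1080p frame (buffer)
                   ID3D12Resource* upscaledColor)    // 4K output target (buffer)
{
    // The compiled operator reports how many descriptors it needs.
    DML_BINDING_PROPERTIES props = upscaler->GetBindingProperties();

    DML_BINDING_TABLE_DESC tableDesc = {};
    tableDesc.Dispatchable = upscaler;
    tableDesc.CPUDescriptorHandle = heap->GetCPUDescriptorHandleForHeapStart();
    tableDesc.GPUDescriptorHandle = heap->GetGPUDescriptorHandleForHeapStart();
    tableDesc.SizeInDescriptors = props.RequiredDescriptorCount;

    ComPtr<IDMLBindingTable> bindings;
    dmlDevice->CreateBindingTable(&tableDesc, IID_PPV_ARGS(&bindings));

    // Bind the game's own GPU buffers as the network's input and output.
    DML_BUFFER_BINDING inBuf  = { lowResColor, 0, lowResColor->GetDesc().Width };
    DML_BUFFER_BINDING outBuf = { upscaledColor, 0, upscaledColor->GetDesc().Width };
    DML_BINDING_DESC inDesc  = { DML_BINDING_TYPE_BUFFER, &inBuf };
    DML_BINDING_DESC outDesc = { DML_BINDING_TYPE_BUFFER, &outBuf };
    bindings->BindInputs(1, &inDesc);
    bindings->BindOutputs(1, &outDesc);

    // Record the inference alongside the rest of the frame's GPU work.
    ID3D12DescriptorHeap* heaps[] = { heap };
    cmdList->SetDescriptorHeaps(1, heaps);

    ComPtr<IDMLCommandRecorder> recorder;
    dmlDevice->CreateCommandRecorder(IID_PPV_ARGS(&recorder));
    recorder->RecordDispatch(cmdList, upscaler, bindings.Get());
}
```

So whether this ends up looking anything like DLSS, and how fast it runs without dedicated cores, depends entirely on what model the developer ships and how well the vendor's driver handles it.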
 

wwaaacs5

Commendable
Oct 19, 2018
39
0
1,540
It depends on its quality. In the game I've tried, DLSS currently causes fine details to be washed out, like freckles on a character's face; it all just becomes blurry up close. Worse still, it causes thin objects to blink and flash in and out of the image, which is ten times worse than some jaggies in games nowadays and not worth it, since we're moving past 1080p. If this tech had come out many years ago, when AA was still making huge changes because of the resolution of objects, then yes, it would have been great. But I feel this tech came out way too late to matter.
 

s1mon7

Reputable
Oct 3, 2018
96
4
4,635
The quality of this article is the problem. A lot of misguided opinions and unreasonable speculation.

Personally speaking, dedicated proprietary cores are always going to be very hard to push. The way I see it, Nvidia started an arms race with technologies that the industry will try to standardize. You might see Turing's Tensor cores and RT cores utilized by large AAA developers at great expense, but most developers are going to lean towards standards available to anyone. That is, console hardware and software will more likely dictate what ends up as the standard. While it's not impossible for Nvidia to win, they are fighting an uphill battle. As soon as similar results can be achieved with major open standards that AMD and Intel support, RTX and Nvidia's proprietary solutions are going to become the new HairWorks. Kudos for starting the real-time ray tracing race, though.

While dedicated ray-tracing cores are quite likely to become a standard, it's going to be a standard open to all. At the same time, I wouldn't disregard the importance of shader cores, so I couldn't disagree more with the conclusion that it's good the makers aren't pushing their GPUs in that respect anymore. Neglecting shader cores means neglecting traditional gaming performance and giving us no faster cards. Outside of insanely priced solutions, we don't even have enough shader-core performance to push 4K TVs and monitors to a stable 60 fps in today's titles, let alone what's coming.

Ray-tracing cores and other specialized cores are designed to work NEXT to those cores and serve different purposes. You still need baseline performance, and one of the best ways of achieving it is by improving shader cores and increasing their count.
Saying it's nice that we're not getting improvements there is like saying Intel did a great job by sticking to four cores max in the mainstream for almost a decade.
 

Incipient

Reputable
Mar 9, 2017
7
0
4,510
The wording of the article is a bit exaggerated or ambiguous, as it depends on your definition of "not cost effective". Some may argue that ray-traced games at 1080p60 are still "not cost effective" compared to 4K60 with any traditional lighting model. What I expect the author meant is that while a 2080 can hit 1080p60 in most RTX games, a 1080 Ti, for example, may only achieve 1080p30, or a similarly less-than-ideal state, hence being "not cost effective".