Testing Nvidia's Multi-Res Shading In Shadow Warrior 2

Status
Not open for further replies.
I honestly don't know how I feel about this tech. Hey, great: you figured out a way to downsample anything the user isn't directly "looking at". But everyone is different and looks at different parts of the screen.
 
Might be acceptable in VR if it made the difference between "playable" and "I'm feeling nauseous". But outside of that, no way. Too distracting. Also, it's another proprietary Nvidia technique. Why not just take it all the way and make a vendor-specific API, call it Glide Two.
 
"Why not just take it all the way and make a vendor-specific API, call it Glide Two."

You mean like Mantle? I sure hope you aren't saying that Nvidia doing this would be bad while AMD doing it wouldn't be. Nvidia's proprietary tech wouldn't be so popular if game devs were competent enough to build similar tech of their own.
 
What's worrying about this tech is that image quality at or near the screen centre gets degraded, certainly in the examples above. That is something the tech is not meant to do.

The principle is great and should be applauded but clearly needs some fine-tuning. We'll be using ideas like this all the time once proper focal rendering becomes the norm - both in VR and non-VR.

There will never be a need to render at the fullest fidelity something outside the operator/gamer's focal area. Apart from spectators/commentators!
 
It's likely that things on the border that might affect the visual appearance in the centre could be addressed. (Perhaps by not reducing the processing fidelity for things that emit light or have certain types of reflections; after all, there must already be a way of processing them when they're completely out of frame.)

What I'd be interested in is a blind test of this. In other words, do you notice the sword tip changing during play if you don't know you should look at it?
 
So in other words, it benefits 2K and 4K the most, but it degrades quality to the point where you might as well just keep the lower resolution anyway.

Nvidia need to seriously improve this tech before it can be considered an option.
 
So it's basically a crafty mipmap-esque scheme, right?

Personally, I found that my eye was drawn to the bits of the scene as they transition from low- to high-res shading, or the other way around. With MRS off, the stability of the entire scene is obvious. And in the stills it is clear that the degradation in quality starts very close to the viewpoint. That poor tree in the third set of stills is completely screwed up by the technique.

It might make sense in VR, especially with eye tracking and sensible thresholds, but for 2.5D - Nope.
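On the "mipmap-esque" question above: the idea, as described, is a full-resolution centre region with border regions shaded at a reduced scale. Here is a toy sketch of the pixel-cost arithmetic only; the 50% centre size and 60% border scale are illustrative assumptions, not Nvidia's actual parameters.

```python
# Toy arithmetic for a multi-res split: a full-resolution centre region
# plus border regions shaded at a reduced scale. The 50% centre size and
# the 60% border scale are illustrative assumptions, not Nvidia's values.

def multires_pixel_cost(width, height, centre_frac=0.5, border_scale=0.6):
    """Return (full-res pixel count, multi-res shaded pixel count)."""
    full = width * height
    centre = (width * centre_frac) * (height * centre_frac)
    border = full - centre
    # Border pixels are shaded at border_scale along each axis, so their
    # shading cost falls by border_scale squared.
    multi = centre + border * border_scale ** 2
    return full, multi

full, multi = multires_pixel_cost(3840, 2160)
print(f"shaded pixels: {multi:.0f} of {full} ({multi / full:.0%})")
```

With these made-up numbers, roughly half the shading work disappears, which is in the same ballpark as the performance gains discussed in the article.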
 
The changes are too obvious. In the video my eyes kept jumping to the points in the scene where the shading resolution was changing, much like with good old mipmapping, where we're all used to seeing trees pop as we get closer. In the stills it was obvious just how badly quality is affected. That poor tree in the 3rd set of stills... Eish...
 


Most of AMD's stuff either starts open source or becomes open source. Mantle became Vulkan. Nvidia's proprietary tech remains closed source and is often exclusive to Nvidia hardware, or crippled on competing hardware. Also, your assessment of game-dev competency is only partially true: Nvidia actually pays developers to use GameWorks, for example, and small developers in particular enjoy the cash. So yeah, if you step back and look at the big picture, Nvidia does questionable stuff on the software side all the time. Their hardware is generally good, though.
 
I don't think this is a good idea even if limited to VR. We have eyeballs that move and can look all over the screen. They aren't pinned to the dead center of the screen where they will never notice the detail elsewhere. And as the final graph shows, MRS has a greater benefit the more underpowered your GPU is for the display resolution. "Omg a 27% increase in performance" yup if you're using a 1060 to play at 4k...
 
This in combination with an eye tracker like the Tobii EyeX could be where this technology shines. If you could render at high quality only where you actually look, you could save a lot of power or get higher frame rates.
 


Meh, it happens on both sides, actually. The only difference is that AMD usually whines more when they're hit with bad performance.
 
Only if it works so fast that you can't tell. You don't want a delay where blurry parts come into focus a moment after you shift your eyes. That would be REALLY disconcerting. But the theory is interesting, and if they could pull it off it would essentially be free performance. I have my doubts, however.
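The selection logic for gaze-driven shading is itself trivial; the hard part is the latency worry raised above. A hypothetical sketch, where the radii (in pixels) and the scale steps are invented thresholds, not values from Tobii EyeX or Nvidia:

```python
import math

# Hypothetical gaze-driven shading scale for foveated rendering.
# The radii (in pixels) and the scale steps are invented thresholds,
# not values from any real tracker or GPU API.

def shading_scale(px, py, gaze_x, gaze_y, fovea_r=200, mid_r=500):
    """Pick a shading-resolution scale from the distance to the gaze point."""
    d = math.hypot(px - gaze_x, py - gaze_y)
    if d <= fovea_r:
        return 1.0   # full resolution where the eye is looking
    if d <= mid_r:
        return 0.6   # reduced shading in the near periphery
    return 0.3       # heavily reduced in the far periphery

print(shading_scale(960, 540, gaze_x=960, gaze_y=540))    # under the gaze
print(shading_scale(1800, 540, gaze_x=960, gaze_y=540))   # far periphery
```

The whole loop (tracker sample, region update, render) would have to complete well inside one frame, or the "blurry parts catching up after you move your eyes" problem appears.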

Example? At least Cool gave an example, though I disagree with him. The development of Mantle spurred Microsoft into action, and Khronos too (they absorbed and expanded Mantle into Vulkan, so now it is open). The vast majority of these shenanigans are on Nvidia's side. Why? AMD's rendering techniques are open, so Nvidia gets full access and can optimize for them to their heart's content. Nvidia's stuff is closed and proprietary: it either works only on Nvidia (such as this Multi-Res Shading, or PhysX, which runs in software mode on non-Nvidia hardware), or it runs poorly on AMD hardware (GameWorks, to varying degrees depending on what's implemented). When Nvidia experiences poor performance, it's usually down to the developer's engine, not some AMD proprietary middleware.
 


There is this misconception that open source makes things run equally well on all hardware. That is not always the case. Take DX12 itself: why does async compute impact Nvidia hardware negatively? Because the feature was mainly built the way it works on AMD hardware. No matter how Nvidia tries to optimize, async will always have a negative impact on Kepler and Maxwell. The change with Pascal is just that they no longer take a performance hit from it, but in general they still don't benefit. And while Vulkan is a bit better on Nvidia, in the end we still see the biggest improvements on AMD hardware. Why? Because Mantle, which Vulkan is based on, was first and foremost developed to cater to AMD hardware.

And as I said earlier, both companies do pretty much the same thing; AMD is just more vocal when something bad happens to them, even though they have tried the same in the past. Take Dirt Showdown, for example. The game used a proprietary AMD rendering technique at the time for global illumination, and if you look at the benchmarks from back then, the game simply ran terribly on Fermi and Kepler hardware. But you never saw Nvidia raging in public about AMD partnering with a developer to implement features specifically to sabotage their performance, the way AMD accused Nvidia of doing with CDPR. In the Dragon Age 2 case, Nvidia was restricted from accessing the game until it officially launched, but Nvidia never blamed AMD for the lack of access, despite AMD's marketing cooperation with BioWare/EA on that game.

People say Nvidia is being a jerk, or anti-competitive, by locking PhysX to their hardware only. But many people don't know that in the past Nvidia was quite open to making PhysX work natively on AMD hardware. There was a group of people trying to make it happen, and Nvidia was ready to help them, but it was AMD that rejected the idea and didn't want to make it happen.

https://www.techpowerup.com/64787/radeon-physx-creator-nvidia-offered-to-help-us-expected-more-from-amd

http://www.ngohq.com/news/14254-physx-gpu-acceleration-on-radeon-update.html
 
Interesting findings.
I'd personally not use this feature, since it clearly removes more value than it adds.
 
It's a shame about the shortcomings. SW2 is a fun, weird game that suffers from some serious frame rate drops on a GTX 970. I'm sure a lot of people could benefit from this technology if they can improve it.
 
Mantle was open and evolved into Vulkan, which is also open. Nvidia has GameWorks, which is not open in any way. With TressFX, AMD gave Nvidia access to the code as well; you can't say the same about PhysX or other stuff, so don't compare the two as if they were the same.
 
So Failvidia keeps throwing software at the problem...
Guess the lack of asynchronous hardware support and the DX12 failures have them running after their green tails...
They need more "Game Ready" software stuff to fool the developers and the consumers...
 
If this could be improved some more, it could be a step closer towards rendering different groups of objects at different resolutions, instead of just assigning lower resolution to the bars at the sides.

However, it is not as bad as it appears in these images. Here we are comparing pictures directly, nitpicking over passive screenshots, but it is not meant to be judged like that. A moving picture greatly reduces awareness of the effect. When I stopped focussing on it and just started playing, running and jumping around, slicing up monsters, I could barely notice it; I simply did not have time to even try anymore.

It can increase FPS by a noticeable enough amount to almost ignore the small degradation the 60% setting causes in Shadow Warrior 2. It might prove less useful in other kinds of games, but this game is meant to be played as a fast shooter/slicer, and I would like to see this option in more games like it during this generation of cards.

They should, however, tweak it so that the degradation at the centre is completely fixed before it really works as intended.
 