[SOLVED] Do games look different on AMD vs Nvidia?

Oct 13, 2021
I've got a question about how graphics compare across different GPUs. I take it textures and models look the same in games, but what about hair physics and other details?

I've found an old topic on Google that says that, ignoring performance altogether, there are no visible differences running AMD vs Nvidia with feature parity.

Why then do competitors promote their own special water, hair, clothing, and other physics? Is it just the fact that they developed it?

I've never owned an AMD card, but I'm hovering over the purchase button right now. I just want to make sure there are no tradeoffs in games when it comes to character models, grass, water, and other details, since that's a big deal for me; never mind RTX and all that jazz, which I don't really care for.
 
Solution
I've found an old topic on Google that says that, ignoring performance altogether, there are no visible differences running AMD vs Nvidia with feature parity.
There may be some image quality differences as pointed out in this video (also, the thumbnail image is super clickbaity :rolleyes:)

But I'd argue most of them are so minute that this is just fanboy ammo. You're probably not going to notice anything grossly off. However, some games may adjust the image quality based on the specs of the hardware. For example, Gears of War 4's "Ultra" quality isn't the same across the board; it depends on the video card.

Why then do competitors promote their own special water, hair, clothing, and other physics? Is it just the fact that they developed it?
Yes. A lot of it is a marketing ploy designed to make people believe the "best" experience can only be attained by using their cards. But for the most part, with the exception of DLSS, and assuming you're using an up-to-date card, none of the technologies promoted actually require a certain manufacturer's hardware.

Also, pre-empting a question because I'm sure someone will bring it up:
PhysX is infamous because it has a CUDA implementation, meaning it can run on NVIDIA GPUs and only NVIDIA GPUs. However, most games that use PhysX don't use this implementation for anything other than additional, optional cosmetic features. And the list on Wikipedia that specifically calls out games using hardware-accelerated PhysX shows that there's not a whole lot that do.

But even then, I would argue that the CPU implementation of PhysX is fast enough for a lot of eye candy anyway. Final Fantasy XV, for example, used NVIDIA's GameWorks development suite on consoles that only have AMD hardware.
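
To give a rough sense of what that kind of eye candy costs, here's a toy CPU particle update in plain C++ (just a sketch of the sort of cosmetic effect meant here, not the actual PhysX API): a few thousand sparks or debris pieces need only a handful of arithmetic operations each per frame, which is trivial for a modern CPU.

```cpp
// Toy CPU particle update, illustrating the kind of cosmetic "eye candy"
// (sparks, debris, cloth nodes) that doesn't need GPU acceleration.
// This is NOT the PhysX API, just a sketch of the per-frame work involved.
#include <cstdio>
#include <vector>

struct Particle {
    float x, y, z;    // position
    float vx, vy, vz; // velocity
};

void step(std::vector<Particle>& particles, float dt) {
    const float gravity = -9.81f;
    for (auto& p : particles) {
        p.vy += gravity * dt;     // apply gravity
        p.x += p.vx * dt;         // integrate position
        p.y += p.vy * dt;
        p.z += p.vz * dt;
        if (p.y < 0.0f) {         // bounce off the floor, losing some energy
            p.y = 0.0f;
            p.vy = -p.vy * 0.5f;
        }
    }
}

int main() {
    std::vector<Particle> sparks(5000, Particle{0.f, 1.f, 0.f, 1.f, 2.f, 0.f});
    for (int frame = 0; frame < 60; ++frame)
        step(sparks, 1.0f / 60.0f); // 60 updates of 5000 particles is trivial for a CPU
    std::printf("first spark ended at y = %.3f\n", sparks[0].y);
}
```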
 
  • Like
Reactions: Metal Messiah.
Solution

InvalidError

Titan
Moderator
assuming you're using an up-to-date card, none of the technologies promoted actually require a certain manufacturer's hardware.
While they may not require vendor-specific GPUs, they do benefit, sometimes quite heavily, from vendor-specific hardware optimizations. Intel's XeSS, for example, is optimized for Intel's Xe architecture, and should Intel and its suite of Xe graphics enhancements succeed in gaining market share, it may take many generations for AMD and Nvidia to tweak their architectures to handle Intel's XeSS algorithm with little to no performance penalty.
 
While they may not require vendor-specific GPUs, they do benefit, sometimes quite heavily, from vendor-specific hardware optimizations. Intel's XeSS, for example, is optimized for Intel's Xe architecture, and should Intel and its suite of Xe graphics enhancements succeed in gaining market share, it may take many generations for AMD and Nvidia to tweak their architectures to handle Intel's XeSS algorithm with little to no performance penalty.
Looking into it, I don't think that would be the case. The "other people can use it" part is simply because XeSS has a "software" implementation path. It's basically like how DX12 RT has a fallback layer for GPUs without RT hardware to run RT rendering. If anything, NVIDIA and AMD just have to work out what XeSS is asking a "matrix math unit" to do and translate that to poke at whatever hardware units they have. And I don't imagine the XMX portion of Xe is really any different from NVIDIA's Tensor Cores.
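
For context, the core operation that XMX, Tensor Cores, and any shader-based fallback all accelerate is the same small tile multiply-accumulate. Here's a scalar sketch of that operation (illustrative only, not Intel's or NVIDIA's actual kernel; the tile size and test values are made up):

```cpp
// Scalar sketch of the tile multiply-accumulate (D = A * B + C) that
// matrix engines like XMX or Tensor Cores run as a single hardware op.
// Illustrative only; real hardware works on FP16/INT8 tiles in parallel.
#include <array>
#include <cstdio>

constexpr int N = 4; // hardware tiles are typically small: 4x4, 8x8, 16x16

using Tile = std::array<std::array<float, N>, N>;

Tile mma(const Tile& A, const Tile& B, const Tile& C) {
    Tile D = C; // start from the accumulator
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            for (int k = 0; k < N; ++k)
                D[i][j] += A[i][k] * B[k][j]; // multiply-accumulate
    return D;
}

int main() {
    Tile A{}, B{}, C{};
    for (int i = 0; i < N; ++i) { A[i][i] = 2.0f; B[i][i] = 3.0f; } // diagonal test tiles
    Tile D = mma(A, B, C);
    std::printf("D[0][0] = %.1f\n", D[0][0]); // prints 6.0
}
```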
 

InvalidError

Titan
Moderator
If anything, NVIDIA and AMD just have to work out what XeSS is asking a "matrix math unit" to do and translate that to poke at whatever hardware units they have. And I don't imagine the XMX portion of Xe is really any different from NVIDIA's Tensor Cores.
Unless existing hardware already has everything required to achieve practically 1:1 translation from XeSS to whatever else, there will almost certainly be a significant performance penalty.

DX12 may have a fallback path for DXR, but what is the performance? Around 10 FPS. Pretty much unusable except maybe for turning RT on for screenshots, generating reference images when comparing different vendors' implementations, or development/debugging purposes.
 
Unless existing hardware already has everything required to achieve practically 1:1 translation from XeSS to whatever else, there will almost certainly be a significant performance penalty.
The only hiccup I could see there is if XMX uses a larger data format than FP16 and XeSS relies on it, since Tensor Cores are FP16. But otherwise they're both matrix math units. As long as, at a high level, they perform the same function, any performance loss should be minimal.
 

mamasan2000

Distinguished
BANNED
One thing stood out to me going from an AMD Vega 56 to an Nvidia 2080. In Sniper Ghost Warrior Contracts 2, shadows never work right when I'm hiding behind a box. There is supposed to be a shadow behind the box, but when I go there it becomes fully lit. Another thing I noticed in that game was a ceiling texture that would flicker all the time. I never had those problems with the Vega 56.

One thing I am curious about: Jensen Huang said they would introduce an even more aggressive version of color compression. Wouldn't that mean information is lost? If you are going to gain 20-30% more performance, I don't see how you can do that losslessly.
Does AMD do the same?
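
For reference, my understanding is that the "color compression" in question is delta encoding of pixel tiles, roughly like the toy sketch below (not the actual hardware algorithm, and the numbers are made up). The decode is exact, so presumably any gain comes from moving fewer bits over the memory bus rather than from throwing color data away.

```cpp
// Toy delta encoding of a tile of pixel values, the rough idea behind
// GPU delta color compression. The deltas reconstruct the originals
// exactly (lossless); the win is that small deltas need fewer bits to
// move over the memory bus, not that color information is discarded.
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    std::vector<uint8_t> tile = {200, 201, 201, 203, 202, 202, 204, 205};

    // Encode: keep the first value, then store small differences.
    std::vector<int16_t> deltas;
    for (size_t i = 1; i < tile.size(); ++i)
        deltas.push_back(int16_t(tile[i]) - int16_t(tile[i - 1]));

    // Decode: the original tile comes back bit-for-bit.
    std::vector<uint8_t> decoded = {tile[0]};
    for (int16_t d : deltas)
        decoded.push_back(uint8_t(decoded.back() + d));

    std::printf("lossless round trip: %s\n", decoded == tile ? "yes" : "no");
}
```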

On a side note, I feel there is too much focus on streaming. Percentage-wise, streamers probably aren't even 1% of users. To me it's like focusing on 3-way SLI/CrossFire. Why bother?
 

InvalidError

Titan
Moderator
The only hiccup I could see there is if XMX uses a larger data format than FP16 and XeSS relies on it, since Tensor Cores are FP16. But otherwise they're both matrix math units. As long as, at a high level, they perform the same function, any performance loss should be minimal.
There is more to it than the raw matrix unit size; you also have all of the support processing and data (un)packing involved in getting data ready and using the results afterwards. XeSS could have a couple of instructions dedicated to setting everything up and consuming the results that don't have direct equivalents in AMD and Nvidia hardware, burning through a lot more FP/INT ALU cycles.
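
As a made-up illustration of the kind of (un)packing overhead meant here: if one vendor's unit consumed operands packed two to a 32-bit word and another's didn't, every tile would pay for extra shifts and masks like this sketch before the matrix math even starts.

```cpp
// Toy illustration of data (un)packing: storing two 16-bit values in one
// 32-bit word and pulling them back out. If one GPU's matrix unit expects
// packed operands and another's does not, these extra shifts and masks are
// the kind of work that burns additional FP/INT ALU cycles.
#include <cstdint>
#include <cstdio>

uint32_t pack(uint16_t lo, uint16_t hi) {
    return uint32_t(lo) | (uint32_t(hi) << 16);
}

void unpack(uint32_t word, uint16_t& lo, uint16_t& hi) {
    lo = uint16_t(word & 0xFFFF);
    hi = uint16_t(word >> 16);
}

int main() {
    uint32_t word = pack(0x1234, 0xABCD);
    uint16_t lo, hi;
    unpack(word, lo, hi);
    std::printf("packed %08X -> %04X %04X\n", word, lo, hi);
}
```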

We'll see once the GPUs launch and software that supports XeSS comes out to benchmark with. My hunch is that Intel wouldn't make much of a fuss about it if it didn't give them a significant advantage against competing hardware for at least a few years.
 

Karadjgne

Titan
Ambassador
There is more to it than the raw matrix unit size; you also have all of the support processing and data (un)packing involved in getting data ready and using the results afterwards. XeSS could have a couple of instructions dedicated to setting everything up and consuming the results that don't have direct equivalents in AMD and Nvidia hardware, burning through a lot more FP/INT ALU cycles.

We'll see once the GPUs launch and software that supports XeSS comes out to benchmark with. My hunch is that Intel wouldn't make much of a fuss about it if it didn't give them a significant advantage against competing hardware for at least a few years.
Agreed. Intel (apart from the fanboys) has taken a serious beating in gamers' opinions over the last few years, not too much different from the beating AMD took with the FX CPU line. You could wager that, in order not to add fuel to that fire, Intel has to bring its A-game, and then some.

Yes and no on the differences. For straight-up visuals, nobody could stand in front of a couple of monitors and say which was AMD-driven and which was Nvidia-driven. But that's the basics. Looking at a game like Minecraft, which looks vastly different with ray tracing enabled, picking out the Nvidia card isn't so hard. With something like Cyberpunk, as detailed and optimized as it is, even with FSR the differences are so minute you'd need still frames to see them, and that's not going to happen in normal play.

The real differences between Nvidia and AMD aren't in the visuals; they're in the FPS. And that's highly dependent on what game, what PC, what GPU, what settings, and what after-effects.