News Nvidia Takes a Shot at AMD's 'Sub-Par Beta Drivers' for GPUs

I thought they decided that their GPU architecture would not benefit from that? I remember the whole reason they spent money on Mantle was that the way their then-current GPU architecture was designed made it very tough to multithread pre-DX12 APIs universally, or something along those lines.
Quite the opposite: DX11 was initially launched with the assumption of single-threaded operation (because it was written in the early days when dual-core CPUs were a whizz-bang new thing). It was later extended to support multithreaded scheduling. Nvidia implemented this, AMD did not. AMD instead went for a fully hardware scheduling solution... which only supported DX12 (and Mantle, which is why they tried to shove that out whilst DX12 was in development). That's great for DX12, but for everything running in DX11 - which, remember, remains in active development alongside DX12, because a low-level API is not appropriate everywhere and a high-level API is still needed; DX12 is not simply "one better" than DX11 - that extra development just never happened, so to this day there's a performance disparity.
An article from Intel of all places digging into the weeds of driver threading.
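For the curious, here's a minimal sketch of the DX11 multithreaded path being described, assuming you already have an ID3D11Device and its immediate context from elsewhere: you query whether the driver natively supports command lists, record work on a deferred context, and play it back on the immediate context. The D3D11_FEATURE_THREADING query is exactly where the Nvidia/AMD difference shows up.

```cpp
// Sketch only: assumes an existing ID3D11Device* and immediate context
// created elsewhere. Link against d3d11.lib.
#include <d3d11.h>
#include <cassert>

void RecordOnWorkerThread(ID3D11Device* device, ID3D11DeviceContext* immediateContext)
{
    // Ask the driver whether it natively supports concurrent resource
    // creation and command lists. Nvidia's DX11 drivers report TRUE for
    // DriverCommandLists; AMD's historically reported FALSE, so the D3D
    // runtime falls back to single-threaded emulation.
    D3D11_FEATURE_DATA_THREADING caps = {};
    device->CheckFeatureSupport(D3D11_FEATURE_THREADING, &caps, sizeof(caps));

    // Deferred contexts work either way, but only benefit from real
    // driver-side multithreading when DriverCommandLists is TRUE.
    ID3D11DeviceContext* deferred = nullptr;
    HRESULT hr = device->CreateDeferredContext(0, &deferred);
    assert(SUCCEEDED(hr));

    // ... record draw calls / state changes on `deferred` from a worker thread ...

    ID3D11CommandList* commandList = nullptr;
    hr = deferred->FinishCommandList(FALSE, &commandList);
    assert(SUCCEEDED(hr));

    // Played back on the main thread via the immediate context.
    immediateContext->ExecuteCommandList(commandList, TRUE);

    commandList->Release();
    deferred->Release();
}
```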
This has been the case since TWIMTBP days or earlier, when nVidia would invest a ton of money in "partnerships" with developers and basically leave AMD no option but to try to tweak things via drivers well after a game's release.
What AMD could do is what Nvidia does (and has been doing for well over a decade): fund a large internal development support team to work with external developers. There's not some nefarious "make AMD go slow" scheme, simply that Nvidia puts in the time and effort to work on support and optimisation, whereas AMD does not.
When you encounter an issue with an Nvidia device/driver (and after the usual basic idiot-check flowcharting), you often end up with a dev assigned specifically to work with you to figure out the problem and implement the fix - and that fix may be on Nvidia's end, or it may be "here, add this code to your game" (i.e. Nvidia literally optimising your game on the game code side).
When you encounter a similar issue on the AMD side, you either get a link to the documentation (which may on a very rare occasion actually be relevant), or "it's open source, look at the code". At best you may get occasional access to a dev via a game of telephone with some support staff.
 
What AMD could do is what Nvidia does (and has been doing for well over a decade): fund a large internal development support team to work with external developers. There's not some nefarious "make AMD go slow" scheme, simply that Nvidia puts in the time and effort to work on support and optimisation, whereas AMD does not.
When you encounter an issue with an Nvidia device/driver (and after the usual basic idiot-check flowcharting), you often end up with a dev assigned specifically to work with you to figure out the problem and implement the fix - and that fix may be on Nvidia's end, or it may be "here, add this code to your game" (i.e. Nvidia literally optimising your game on the game code side).
When you encounter a similar issue on the AMD side, you either get a link to the documentation (which may on a very rare occasion actually be relevant), or "it's open source, look at the code". At best you may get occasional access to a dev via a game of telephone with some support staff.
Off topic, but there's already documented evidence of how some games (even engines) have had "if nVidia then use shader else do_crappy_computation" logic (a rough sketch of that kind of vendor check is at the end of this post), so I'm not so sure there. Some optimizations that could help AMD or even Intel are not used when nVidia developers "help". I haven't read or seen anything on that front from the AMD partnerships, but I guess there's some of that as well. Assassin's Creed is one weird example. But, again, this is off on a tangent from the "bugs". As for your second point, I have never had to contact support for either nVidia or AMD. And I've been using AMD for almost 20 years (a Radeon 8500 64MB DDR was my first ATI card), swapping from time to time to nVidia only to be mega disappointed with their TV and monitor support until HDMI was widely adopted; well, even then they had issues with some TVs.

Also, good point on the hardware scheduler on AMD's side. nVidia implements their GPU scheduling in the drivers, so they depend on CPU grunt first, especially on older titles (DX9 is a wash).

Regards.
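To illustrate the kind of vendor branching described above, here is a minimal, purely hypothetical sketch: the PCI vendor IDs are real, but the "tuned path vs. fallback path" choice is invented for illustration and not taken from any actual game.

```cpp
// Sketch: branching renderer behaviour on GPU vendor, as some titles have done.
// Vendor IDs are real PCI IDs; the chosen paths are hypothetical examples.
// Link against dxgi.lib.
#include <dxgi.h>
#include <cstdio>

void PickCodePathByVendor()
{
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return;

    IDXGIAdapter* adapter = nullptr;
    if (SUCCEEDED(factory->EnumAdapters(0, &adapter)))
    {
        DXGI_ADAPTER_DESC desc = {};
        adapter->GetDesc(&desc);

        switch (desc.VendorId)
        {
        case 0x10DE: std::puts("NVIDIA detected: use vendor-tuned shader path"); break;
        case 0x1002: std::puts("AMD detected: use generic fallback path");       break;
        case 0x8086: std::puts("Intel detected: use generic fallback path");     break;
        default:     std::puts("Unknown vendor: use generic fallback path");     break;
        }
        adapter->Release();
    }
    factory->Release();
}
```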
 
What AMD could do is what Nvidia does (and has been doing for well over a decade): fund a large internal development support team to work with external developers. There's not some nefarious "make AMD go slow" scheme, simply that Nvidia puts in the time and effort to work on support and optimisation, whereas AMD does not.

That part I disagree with. Nvidia has shown over the past two decades that they'll push proprietary tech like HairWorks, PhysX, etc. on developers, tech which either won't run at all on AMD cards or runs with huge performance penalties. Is it really out of the realm of possibility that the "TWIMTBP" branded games have significant Nvidia performance benefits in the game code?
 
WHQL is a formal qualification test from Microsoft that every driver must go through, regardless of what it's for.

I think that makes them objectively better than beta drivers.
To get this "WHQL" stamp you have to pay MS extra (at least, that's how it worked the last time I checked). So non-WHQL just means the company decided not to pay MS for every release, not that the drivers don't pass those tests.
 
The situation is reversed on Linux.

Nvidia's drivers are a bovine excrement show. AMD's (and also Intel's) drivers are the real deal, and you can usually tell (picture a heavily overlapping Venn diagram) that the users who 1) are new to Linux and having huge problems are also the ones who 2) (of course!) have an Nvidia card.

If you want a perfect experience with an Nvidia card on Linux, well, your best bet is to use some two-year-old distro and lag behind everybody. Then you'll have a great experience, because Nvidia is chronically late to support new features.

Both SDL and Ubuntu are issuing feature rollbacks right now specifically because of Nvidia's ultra-slow driver work.

My only problem with the Nvidia proprietary drivers is that, for some reason, they're not quite as snappy in GNOME/Ubuntu as the open-source AMD drivers, something I've never been able to mitigate.
 
That's the problem here: the 3060 Ti, which is supposed to perform worse than the 6700 XT, actually runs better than it because FFXIV is an unoptimized DX11 game.
Really wish Squeenix would get their act together and upgrade FFXIV to DX12 or Vulkan, but it's not happening anytime soon. There's also no DX12 wrapper for FFXIV. Ironically, there's a Metal wrapper for FFXIV on my MacBook Pro M1 Max that runs great...
I've been trying to get a 6800 or 6800 XT for my wife's computer at a decent price, but at this point 3060 Tis are much easier to come by and perform well enough at 4K for her FFXIV gaming.

Have you tried using DXVK on Windows? Sometimes it works, sometimes it doesn't, but when it does, it can provide a boost close to what a DX12 wrapper would give.
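For what it's worth, "using DXVK on Windows" just means dropping its DLLs next to the game's executable. A minimal sketch of that (the paths are hypothetical examples; the DLL names are the ones DXVK actually ships):

```cpp
// Sketch: "installing" DXVK for a DX11 game on Windows is just dropping its
// DLLs next to the game executable. Paths below are hypothetical examples.
#include <filesystem>
#include <iostream>

int main()
{
    namespace fs = std::filesystem;

    // Hypothetical locations: an extracted DXVK release and a game folder.
    const fs::path dxvkX64 = "C:/Downloads/dxvk-2.3/x64";
    const fs::path gameDir = "C:/Games/SomeDX11Game";

    // For a DX11 title, d3d11.dll + dxgi.dll are the ones that matter;
    // a DX9 game would use d3d9.dll instead.
    for (const char* dll : { "d3d11.dll", "dxgi.dll" })
    {
        fs::copy_file(dxvkX64 / dll, gameDir / dll,
                      fs::copy_options::overwrite_existing);
        std::cout << "Copied " << dll << " into " << gameDir << '\n';
    }
    // Deleting the copied DLLs reverts the game to the native Direct3D runtime.
}
```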
 
That part I disagree with. Nvidia has shown over the past two decades that they'll push proprietary tech like HairWorks, PhysX, etc. on developers, tech which either won't run at all on AMD cards or runs with huge performance penalties.
'PhysX' still causes a huge amount of confusion:
PhysX as implemented in the majority of games (and PhysX still has a good share of the physics engine market, if not the majority, due to being the default engine for most of the life of UE4) is a CPU-based physics engine.
GPU PhysX is a small subset of additional effects that can be GPU (or PPU, originally) accelerated, on top of all the normal CPU-computed physics. These effects have always ended up being limited to simple cosmetics (e.g. 'more flappy banners! more particles!') due to being a limited subset of physics engine functions.
Regular old PhysX will run identically with an Nvidia GPU present, an AMD GPU present, or indeed no GPU present at all, because it runs on the CPU. It's also been open-source for the last 4 years.
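To make that concrete, here is a minimal setup sketch against the open-source PhysX SDK (not code from any particular game): a standard scene is created with a CPU dispatcher and never touches the GPU at all.

```cpp
// Minimal PhysX setup sketch: everything here runs on the CPU dispatcher;
// no GPU (Nvidia or otherwise) is involved anywhere.
#include <PxPhysicsAPI.h>
using namespace physx;

static PxDefaultAllocator gAllocator;
static PxDefaultErrorCallback gErrorCallback;

int main()
{
    PxFoundation* foundation =
        PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics* physics =
        PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    // CPU dispatcher with two worker threads - this is the normal path that
    // the vast majority of PhysX titles (PC and console alike) ship with.
    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity = PxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(2);
    sceneDesc.filterShader = PxDefaultSimulationFilterShader;
    PxScene* scene = physics->createScene(sceneDesc);

    // Step the simulation at 60 Hz, entirely on the CPU.
    scene->simulate(1.0f / 60.0f);
    scene->fetchResults(true);

    scene->release();
    physics->release();
    foundation->release();
    return 0;
}
```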
Is it really out of the realm of possibility that the "TWIMTBP" branded games have significant Nvidia performance benefits in the game code?
Far from it, it's the entire point. But the difference is that it is implemented by Nvidia providing tools or code that makes their GPUs perform better, not by making other GPUs perform worse.
The alternative would be demanding Nvidia help optimise games for AMD cards, which is not a reasonable position. That's AMD's job, after all, even if it takes quite some time for them to get around to it (the 'fine wine' effect).
 
That is where the 6700 XT's performance lands at 4K most of the time. It's not a "driver issue". Look at the 6800 and how it stands: it has a bigger cache than the 6700 XT, and it shows. At higher resolutions, the effective bandwidth gap between the cards makes a big difference.

Don't go blaming the tires because the clutch was burned and wasted.

Also, use DX12 or Vulkan wrappers for DX9/10/11 games if you can find them. It improves everything for both nVidia and AMD, by a lot. I use it for GuildWars2 and it's like magic.

Regards.

Less so, I find, now that GW2 has FINALLY added DX11 support. DXVK does still help a little on my AMD card (with DX11 enabled as well), but nowhere near as much as it used to.
 
Regular old PhysX will run identically with an Nvidia GPU present, an AMD GPU present, or indeed no GPU present at all, because it runs on the CPU. It's also been open-source for the last 4 years.

I am well aware of that. You are apparently unaware that Nvidia (or the company Nvidia bought) intentionally crippled the CPU path for PhysX, making it perform pathetically on a system without GPU acceleration.

CPU PhysX is just about useless.
 
Having used both the RX 6800 XT and 6600 XT, I don't find AMD drivers problematic. There may be some bugs here and there, but they weren't a deal breaker for me.
Having said that, if the rumoured power draw for next-gen GPUs is true, I think AMD can easily take a jab back at Nvidia for their space-heater products. 😛
 
I am well aware of that. You are apparently unaware that Nvidia (or the company Nvidia bought) intentionally crippled the CPU path for PhysX, making it perform pathetically on a system without GPU acceleration.

CPU PhysX is just about useless.
This is a result of that exact confusion I mentioned. PhysX runs just fine on any given CPU, be it PC or console (yes, PhysX runs on consoles with AMD APUs, like it does on any other CPU). The very small number of fancy GPU-accelerated PhysX functions (fancier cloth, extra particles) do not run well if you try and force them onto CPUs, but those GPU-accelerated functions are basically never implemented. A handful of games added them when they were first unveiled, and that's pretty much it.
But only the fancy GPU elements commonly get branded as "PhysX", so the general gaming public completely ignores the actual physics-engine portion of PhysX and only sees the fancy cosmetic-effects portion.
 
Oh, that's rich, considering I just spent hours trying to figure out why I was getting stuttering in games, other games taking a minute to recover from being alt-tabbed, VMware Workstation throwing CPU crash errors, and 20 FPS performance inside VMs - and in the end it was that new Nvidia "limit background FPS" option, which in theory would be great if it actually worked. I turned it off and everything works now. Hilarious!