Tested: Nvidia GPU Hardware Scheduling with AMD and Intel CPUs

I ran Cinebench R15 with it on and off, same settings otherwise... system specs below.
With it OFF: 125.24 fps
With it ON: 119.17 fps

Just for the laugh I then ran UserBenchmark... settings as above.
On the final Globe with it OFF: 392 fps
On the final Globe with it ON: 403 fps

Looks like another gimmick that changes little or nought...
 
I don't know about 'gimmick' so much as it's a setting that may or may not affect performance, depending on how the game or application has been coded. Best-case, maybe a few percent higher performance.

FWIW, testing with the Cinebench graphics test and UserBenchmark are both pretty meaningless as far as real graphics performance goes. Not that you can't use them, but they don't correlate at all with most real-world gaming / graphics workloads. UserBenchmark's GPU test is way too simplistic, while Cinebench doesn't scale much beyond a certain level of GPU.
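For anyone wanting to verify the toggle actually stuck before benchmarking (it requires a reboot to take effect), here's a minimal sketch, assuming a Windows 10 2004 system, that reads the HwSchMode registry value the Settings toggle writes -- to the best of my knowledge, 2 means on and 1 means off:

```python
# Minimal sketch (Windows-only): check whether Hardware-Accelerated GPU
# Scheduling is enabled by reading the HwSchMode registry value that the
# Windows 10 2004 toggle writes (2 = on, 1 = off, per my understanding).
# Reading needs no admin rights; changing it does, plus a reboot.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\GraphicsDrivers"

def hags_enabled():
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _ = winreg.QueryValueEx(key, "HwSchMode")
            return value == 2
    except FileNotFoundError:
        # Value absent: OS/driver doesn't support HAGS, or it was never toggled.
        return False

print("Hardware-Accelerated GPU Scheduling:", "ON" if hags_enabled() else "OFF")
```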
 

escksu

It's nice to see an 11 fps increase for the final Globe. But since it's already at almost 400 fps, we are looking at a gain of under 3%. It's good if it's consistently faster. Too bad some results went south instead... As you said, changes little...
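For reference, the arithmetic on those Globe numbers, as a quick Python check:

```python
# Percent gain on the final Globe numbers quoted above.
off_fps, on_fps = 392, 403
delta = on_fps - off_fps
print(f"Delta: {delta} fps ({delta / off_fps:.1%})")  # Delta: 11 fps (2.8%)
```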
 
Look, mine was not a scientifically based test, just a quick run of something that many may have on their PC -- quite reasonably what many might use to see if it helps them on their everyday, normal setups. I played Middle-earth: Shadow of War, The Witcher 3, Chess Ultra and theHunter Classic (an 11-year-old game with still-passable graphics), and I saw no difference in FPS or FPS stability. So for me it is another questionable add-on resource eater, or just something released before any hardware/software can really use it.
 

razor512
Can you test a worst-case scenario to see if the scheduling helps? Benchmark it on a GTX 970, where you have two memory pools (3.5GB and 512MB). Ignore the age of the card and focus on how the scheduling reacts to cases where there is naturally a ton of overhead in scheduling.
 
I actually already did that with a GTX 1050, though not on the CPU side. Per Nvidia, you must have at least a Pascal GPU, which means GTX 1050 or above (because I do not count the GT 1030 -- it's a junk card no one should buy!). The GTX 970, unfortunately, is Maxwell, so it will have to rely on non-HW scheduling.
 
Or perhaps I should have dug out a slower CPU or disabled some cores and threads.
That was my first thought when you mentioned testing with the 1050, but nothing about testing another processor. It could be that both of these processors are too fast, or have too many threads to show any difference. Perhaps there's more of a performance difference when the game is starved for CPU resources, such as can be seen in some titles with a four-threaded processor, like Battlefield V. After all, it sounds like this feature is moving VRAM management to be handled by the GPU, so there could possibly be gains on the CPU side of things. It could be worth testing a few games that are demanding on the CPU after swapping in something like a Ryzen 1200 or an Athlon, or maybe just disabling some cores and SMT and cutting the clock rate back to achieve similar results.
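If anyone wants to try the core-starving idea without swapping hardware, here's a rough sketch using the third-party psutil package (pip install psutil) to pin a running game to four logical CPUs. The process name is a placeholder, and note this only limits thread count, not clock speed:

```python
# Rough sketch: restrict an already-running process to four logical cores
# to simulate a four-threaded CPU. "game.exe" is a placeholder name.
import psutil

def pin_to_four_threads(process_name="game.exe"):
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            proc.cpu_affinity([0, 1, 2, 3])  # restrict to logical CPUs 0-3
            print(f"Pinned PID {proc.pid} to 4 threads")

pin_to_four_threads()
```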

Another thought is that if the benefits are entirely on the VRAM management side of things, a 2080 Ti wouldn't likely see much benefit, due to it having more VRAM than current games require, especially at 1080p. The performance benefits may appear in situations where the VRAM is getting filled and data is getting swapped out to system memory. If the card can make better decisions about what data to keep in VRAM and what to offload, that might be where the performance benefits are. You did test a 1050 with just 4GB of VRAM, but perhaps a 1050 isn't fast enough for this to make much of a difference, especially considering it's not even managing 30 fps at the settings used in most of these tests. Another site I just checked only tested a couple of games, including Forza Horizon 4, but showed around an 8% performance gain in both when using a 1650 SUPER paired with a 9900K, and no tangible difference in performance with a 2080 Ti, so that might be the case. Of course, that wasn't a site I would actually trust for the accuracy of benchmark results. : P
 
Yeah, that's the thing: if you only run the game benchmark ONCE and then move on, you're going to get a lot more noise. I specifically ran each test three times to verify consistency of results (because the first run is almost always higher, thanks to the GPU being cooler and boost clocks kicking in more). Given the results so far, however, I don't want to put a bunch more time into testing, which is why the article says "I'll leave that [CPU testing] to others."
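For anyone replicating this at home, the repeat-run methodology might look something like the sketch below. The benchmark command and the assumption that it prints an fps number to stdout are both placeholders; real game benchmarks usually write a results file you'd parse instead:

```python
# Sketch of the run-it-three-times methodology: launch a benchmark
# repeatedly, discard the first (warm-up) pass, and report the spread.
import statistics
import subprocess

def run_benchmark(cmd, runs=3):
    results = []
    for i in range(runs):
        out = subprocess.run(cmd, capture_output=True, text=True)
        fps = float(out.stdout.strip())  # placeholder: assumes fps on stdout
        results.append(fps)
        print(f"Run {i + 1}: {fps:.1f} fps")
    # The first run tends to score high (cool GPU, higher boost clocks),
    # so judge consistency on the later passes.
    steady = results[1:]
    print(f"Steady-state mean: {statistics.mean(steady):.1f} fps, "
          f"stdev: {statistics.stdev(steady):.2f}")

run_benchmark(["bench.exe", "--headless"])
```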
 
After all, it sounds like this feature is moving VRAM management to be handled by the GPU, so there could possibly be gains on the CPU side of things.
The thing here is that canned game benchmarks are presumably already well-tuned for RAM management by the developers of that benchmark themselves (at least, that's how I imagine it), while the probably pretty generic hardware-accelerated scheduling won't be able to beat that.
The question is whether it does anything outside of the canned benchmark, where devs can't optimize each and every scene to perfection.

Also, running a 2GB video card at ultra and expecting anything other than a huge VRAM bottleneck is pretty funny.
 
It depends on the game, and RDR2 was tested at 1080p medium on the GTX 1050. Certainly, there are devs who do a decent job at VRAM management. There are also devs that don't do a good job -- and DX12 / Vulkan games require the devs to put in more effort to optimize. Most DX12 games do poorly on the GTX 1050, regardless of settings, because they're designed with the expectation of 4GB or more. (2GB AMD cards also have big issues with some DX12 titles -- though Forza Horizon 4 isn't one of the bad games for memory.)
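If you want to see whether a given game is actually bumping into a 2GB limit, one low-effort way is to poll nvidia-smi (which ships with Nvidia's drivers) while the game runs. A rough sketch; the query flags are nvidia-smi's documented CSV interface:

```python
# Poll VRAM usage via nvidia-smi while a game runs (first GPU only).
import subprocess
import time

def sample_vram(interval=1.0, samples=10):
    for _ in range(samples):
        out = subprocess.run(
            ["nvidia-smi",
             "--query-gpu=memory.used,memory.total",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True)
        used, total = (int(v) for v in out.stdout.splitlines()[0].split(", "))
        print(f"VRAM: {used}/{total} MiB ({used / total:.0%})")
        time.sleep(interval)

sample_vram()
```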
 
I believe your editorial is flawed. It's looking at the wrong metric. You should be evaluating loading times and asset pop-in for memory scheduling, not gameplay fps. Normally in a game, once it's loaded, the only things that will affect performance are the bandwidth and speed of the memory. This API scheduler does not improve either of those.
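To be fair, load times are easy enough to measure if you can trigger the load reliably. A minimal stopwatch sketch; the file read is just a stand-in for whatever load event you're actually timing:

```python
# Stopwatch around a load event; the file read below is a stand-in for a
# level load, fast travel, or texture-streaming event you'd trigger in-game.
import time

def timed(label, action):
    start = time.perf_counter()
    action()
    print(f"{label}: {time.perf_counter() - start:.2f} s")

timed("asset load", lambda: open("large_asset.bin", "rb").read())
```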
 

kinney
Results may also be affected by WDDM 2.7, and that change should be accounted for by testing 1909 vs. 2004, but I think HWS needs to be tested with more intense gaming setups: large or triple LCD displays (with and without NV Surround) and VR. I think you'd have to be pushing more pixels to glean benefits here.

My conclusion from this article is that the fact there's reproducible performance variation at all (and at 1080p, at that) proves this is going to continue to improve and become a performance gain. Enable it and keep up on your Game Ready drivers.
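On the WDDM 2.7 point: if you want to confirm which driver model a given install is running before comparing 1909 vs. 2004, dxdiag can dump a text report (the /t flag is standard) that includes a "Driver Model: WDDM x.y" line. A quick parse, assuming the report is plain text:

```python
# Dump a dxdiag report and pull out the WDDM version of the first display.
import os
import re
import subprocess
import tempfile

def wddm_version():
    report = os.path.join(tempfile.gettempdir(), "dxdiag_report.txt")
    subprocess.run(["dxdiag", "/t", report], check=True)  # blocks until written
    with open(report, encoding="utf-8", errors="ignore") as f:
        match = re.search(r"Driver Model:\s*WDDM\s*([\d.]+)", f.read())
    return match.group(1) if match else None

print("WDDM version:", wddm_version())
```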