News Microsoft Explains GPU Hardware Scheduling: Aims to Improve Input Lag

EtaLasquera

Reputable
Nov 24, 2019
7
1
4,515
R5 1600AE Stock
GTX 1060 3GB
16GB RAM 2400MHz
1TB NVMe
1% improvement with the hardware scheduler.

Is this a low- or mid-tier CPU?
 
Hmm, I should try this on my i3-4130.

It's a dual core, so it should have a larger impact, correct?



Also, does this just lessen the burden on the CPU (so it doesn't use as high a %?) and push your GPU to do a bit extra while gaming?
 

bit_user

Polypheme
Ambassador
This feature helps latency, not frame rate.
Not sure about that.

What MS' blog post seems to say is that in order to achieve better efficiency, games were coded in a way that increased latency. Switching the GPU scheduling to use more hardware assist doesn't change that, but it could reduce the efficiency benefit of batching, which would enable games to submit work more frequently, thereby reducing latency. In other words, it opens the door to latency-reductions in games, though the game is what ultimately determines whether any benefit is realized.

Here's the relevant bit:
an application would typically do GPU work on frame N, and have the CPU run ahead and work on preparing GPU commands for frame N+1. This buffering of GPU commands into batches allows an application to submit just a few times per frame, minimizing the cost of scheduling and ensuring good CPU-GPU execution parallelism.

An inherent side effect of buffering between CPU and GPU is that the user experiences increased latency. User input is picked up by the CPU during “frame N+1” but is not rendered by the GPU until the following frame. There is a fundamental tension between latency reduction and submission/scheduling overhead. Applications may submit more frequently, in smaller batches to reduce latency or they may submit larger batches of work to reduce submission and scheduling overhead.
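To make that buffering pattern concrete, here's a minimal sketch of the kind of render loop the quote describes: record a whole frame's worth of commands, then submit them in one big batch. This is plain C++, not real Direct3D or driver code, and every type and function name in it is a hypothetical stand-in.

```cpp
// Illustrative sketch of the "buffer a frame ahead" pattern from the blog post.
// All names here are hypothetical stand-ins, not a real graphics API.
#include <cstdio>
#include <vector>

struct Command { int id; };    // stand-in for one recorded GPU command
struct Input   { int value; }; // stand-in for user input sampled this frame

Input pollInput() { return Input{0}; }                       // stub
void  recordDrawCalls(std::vector<Command>& batch, const Input&)
{
    for (int i = 0; i < 1000; ++i)
        batch.push_back(Command{i});                         // buffer a whole frame's work
}
void  submitBatch(const std::vector<Command>& b)             // one submit = one scheduling event
{
    std::printf("submitted %zu commands in one batch\n", b.size());
}
void  waitForGpu() { /* stub: pretend to sync with the GPU */ }

int main()
{
    std::vector<Command> batch;
    for (int frame = 0; frame < 3; ++frame)  // a few frames instead of an endless loop
    {
        // The CPU runs ahead on frame N+1 while the GPU is still busy with frame N.
        Input in = pollInput();              // input is picked up during frame N+1...
        batch.clear();
        recordDrawCalls(batch, in);          // ...but buffered into one large batch.

        // Submitting only once or twice per frame keeps scheduling overhead low;
        // the trade-off is that this input isn't rendered until the following frame.
        // Submitting smaller batches more often would cut latency, but would invoke
        // the scheduler (historically a high-priority CPU thread) more frequently.
        submitBatch(batch);
        waitForGpu();
    }
    return 0;
}
```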

That CPU is pretty powerful for that GPU, so that's probably another reason why nothing changed.
If my interpretation is correct, then any potential improvement would be specific to the work-submission behavior of the game. Games written to work efficiently with CPU-based scheduling are unlikely to show much benefit from HW-assisted scheduling. So, to see benefits, you'd need one of:
  1. A badly-written game.
  2. A game written to favor low-latency more than high-FPS.
  3. A game which has specifically been tuned to take advantage of HW-assisted GPU scheduling.

Again, this is just my interpretation of the MS blog post. I clicked on it in hopes of learning more details, but the main nugget is just this:
Windows continues to control prioritization and decide which applications have priority among contexts. We offload high frequency tasks to the GPU scheduling processor, handling quanta management and context switching of various GPU engines.
 
Again, this is just my interpretation of the MS blog post. I clicked on it in hopes of learning more details, but the main nugget is just this:
Windows continues to control prioritization and decide which applications have priority among contexts. We offload high frequency tasks to the GPU scheduling processor, handling quanta management and context switching of various GPU engines.
Actually the important part is this:
However, throughout its evolution, one aspect of the scheduler was unchanged. We have always had a high-priority thread running on the CPU that coordinates, prioritizes, and schedules the work submitted by various applications.
Because Windows will outright stop already-running tasks (the game's threads, for example) to run a higher-priority task, and it doesn't care whether there are enough resources left over; that's just how Windows works. So if this can make the driver run at a lower priority, we will get much less stutter in games.
https://docs.microsoft.com/en-us/windows/win32/procthread/scheduling-priorities
If a higher-priority thread becomes available to run, the system ceases to execute the lower-priority thread (without allowing it to finish using its time slice) and assigns a full time slice to the higher-priority thread.
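As a concrete illustration of that rule, here's a small Windows-only C++ sketch using Win32 calls described on that docs page (CreateThread, SetThreadPriority, ResumeThread). The "game" and "high-priority" workers are hypothetical busy-loops, not real driver code, and on a machine with idle cores the high-priority thread would simply run on another core rather than preempting anything.

```cpp
// Minimal Windows-only sketch of thread priorities with the Win32 API.
// GameWorker stands in for a game's frame computation; HighPriorityWorker
// stands in for a high-priority driver/scheduler thread. If every core is
// already busy, raising the second thread's priority makes the OS preempt a
// normal-priority thread to run it, as the quoted docs describe.
#include <windows.h>
#include <cstdio>

static DWORD WINAPI GameWorker(LPVOID)
{
    volatile unsigned long long counter = 0;
    for (int i = 0; i < 100000000; ++i)      // busy work standing in for a frame
        ++counter;
    std::printf("game worker done, counter=%llu\n", (unsigned long long)counter);
    return 0;
}

static DWORD WINAPI HighPriorityWorker(LPVOID)
{
    std::printf("high-priority worker running\n");
    Sleep(10);                                // pretend to forward work to the GPU
    return 0;
}

int main()
{
    HANDLE game = CreateThread(nullptr, 0, GameWorker, nullptr, 0, nullptr);
    HANDLE hi   = CreateThread(nullptr, 0, HighPriorityWorker, nullptr,
                               CREATE_SUSPENDED, nullptr);

    // Raise the priority before letting the thread run; a runnable
    // higher-priority thread is scheduled ahead of (and can preempt)
    // normal-priority threads on a saturated CPU.
    SetThreadPriority(hi, THREAD_PRIORITY_HIGHEST);
    ResumeThread(hi);

    WaitForSingleObject(hi, INFINITE);
    WaitForSingleObject(game, INFINITE);
    CloseHandle(hi);
    CloseHandle(game);
    return 0;
}
```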
 

bit_user

Polypheme
Ambassador
Because Windows will outright stop already-running tasks (the game's threads, for example) to run a higher-priority task, and it doesn't care whether there are enough resources left over; that's just how Windows works. So if this can make the driver run at a lower priority, we will get much less stutter in games.
That's only an issue if the game is heavily trying to use all cores. Of course, on low core-count CPUs, that might actually be the case.

However, I think it ties in with what I was saying: games probably avoid submitting work too frequently, specifically because doing so would unblock this high-priority thread, which then interrupts their frame computations just to forward the work onto the GPU.
 

wifiburger

Distinguished
Feb 21, 2016
613
106
19,190
Other sites also noted not much gain with an i9, but showed gains on the 3900X.
Tomb Raider showed a 1% gain at 4K for me.
I kept it on, since the CPU seems to reach and hold better boost clocks while gaming with it on.
 
  • Like
Reactions: bit_user