gamerk316 :
Except in the case of quads (four-core CPUs), it's the multithreading of the GPU driver that's killing performance. Before DX12, you could do 90% of the work on just two threads, because the GPU rendering path was mostly single-threaded in nature. With DX12's changes, that same 90% of the work is spread across about six or seven threads, because the driver itself can now be threaded out. Yeah, quads are going to have problems with that, even if each thread isn't doing that much work, due to the latency involved in thread switching.
But hey, you got "better" threading out of it.
-Fran- :
You lost me a bit, gamerk. Are you against or in favor of going wide with software, drivers in particular?
Comments like these make it sound like you're against it, but I have trouble understanding why that would be the case...
Cheers!
gamerk316 :
I'm just pointing out that using more threads has the side effect of causing a lot of latency-related issues on processors with fewer cores. If you're going to have a GPU driver that uses four or more major threads instead of just one, then yeah, quads aren't going to cut it anymore, and even 4+4 CPUs are going to struggle a bit.
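One rough way to see the "more threads than cores" effect on a Unix system is to watch the process's involuntary context-switch count while running CPU-bound threads. This is a sketch of my own, not anything from the thread: the function names are illustrative, `resource` is unavailable on Windows, and in CPython the GIL already forces threads to time-share, so treat this as a way to observe preemption counts rather than a faithful model of a multithreaded driver.

```python
import os
import resource
import threading
import time

def spin(deadline):
    # Busy-work: keep this thread runnable until the deadline passes.
    while time.monotonic() < deadline:
        pass

def involuntary_switches(n_threads, seconds=0.1):
    """Run n_threads CPU-bound threads for `seconds` and return how many
    involuntary context switches the process accumulated meanwhile."""
    before = resource.getrusage(resource.RUSAGE_SELF).ru_nivcsw
    deadline = time.monotonic() + seconds
    workers = [threading.Thread(target=spin, args=(deadline,))
               for _ in range(n_threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    after = resource.getrusage(resource.RUSAGE_SELF).ru_nivcsw
    return after - before

cores = os.cpu_count() or 4
print(f"{cores} cores")
print("4 threads :", involuntary_switches(4))
print("16 threads:", involuntary_switches(16))
```

The absolute numbers depend entirely on the OS scheduler and machine load, so no particular output is guaranteed; the point is only that every involuntary switch is a thread being kicked off a core before it finished what it was doing.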
-Fran- :
Noted. I do believe shifting the processing bottleneck is not a bad thing as long as you still have room to grow. If the modern graphics APIs allow drivers to "go wide", that's better than expecting Intel and AMD to keep increasing IPC to accommodate your software's deficiencies.
Cheers!
gamerk316 :
I'll now point out that using more threads in games is actually going to make the above problem a LOT worse. Getting into a situation where over a dozen threads all want to do significant (measurable) work while fighting for a handful of CPU cores is going to lead to a lot of hard-to-reproduce performance loss. We're going too far in the other direction, toward a situation where 16 cores is quickly going to be considered a requirement for gaming. And that's going to cost everyone more money just to maintain the same performance.
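Some back-of-the-envelope numbers for that scenario. The thread and core counts come from the post ("over a dozen" threads, a quad-core); the fair-share assumption, that the scheduler splits the cores evenly among all runnable threads, is my own simplification:

```python
def frame_budget_ms(fps):
    # Time available to produce one frame at the target frame rate.
    return 1000.0 / fps

def fair_core_share(runnable_threads, cores):
    # Fraction of one core each thread gets if the scheduler shares fairly
    # and every thread stays runnable (the worst case).
    return min(1.0, cores / runnable_threads)

budget = frame_budget_ms(60)        # ~16.7 ms per frame at 60 FPS
share = fair_core_share(13, 4)      # 13 runnable threads on a quad-core
print(f"{budget:.1f} ms budget, {share:.0%} of a core per thread")
```

Under that (crude) model, each of the 13 threads can count on less than a third of a core, so any thread that needs a solid chunk of the 16.7 ms frame budget has to hope the others go idle at the right time.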
-Fran- :
Uhm... since single-threaded performance is not moving up as fast as it used to, I think it's the other way around. I do agree that if you have games pegging all available CPU cores at 100%, you'll get into a nasty situation, but that's the same as having a game peg your single-core CPU at 100%, leaving no room for the driver to maneuver. Maybe you're still thinking of the driver pegging a single core at 100% while the game uses the rest of the cores. That can be done as well, but it sounds less optimal than making the driver go as wide as it can and leaving the OS scheduler to handle the assignments.
Cheers!
gamerk316 :
I'm not talking about core loading, but about thread latency.
Yes, "many threads at 100%" is a problem, as it has always been, but outside of benchmarks or really poorly coded programs it should never happen.
The problem I'm very concerned about is the following: take a classic i5 (four physical cores). Right now, i5s are more than capable of maxing out pretty much every game under the sun. But that's mainly because there are only two to three really time-critical threads the CPU needs to run at any one point in time. Other threads, even within games, aren't that time-sensitive and won't really affect performance much.
Now take your previously single-threaded GPU driver and make it fully multithreaded (let's assume six small threads here, each doing about the same amount of work). Remember: even if the total workload is the same as with the classical GPU driver, you can still only run four threads at any one point in time on an i5-class CPU. So you've now created a situation where not only are parts of the GPU driver going to stall out due to the lack of CPU cores, there's also a very real chance the driver will actually interrupt the program you're running, because the driver layer has a higher scheduling priority than the application. While this effect won't show up much in terms of FPS (remember, 16 ms is an eternity for computers), it WILL show up in latency.
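The stall-and-latency effect described above can be sketched with a toy scheduler. This is a simplified lock-step round-robin model of my own; the quantum length, switch cost, and workloads are made-up illustrative numbers, not measurements of any real driver:

```python
def simulate_round_robin(thread_work_ms, n_cores, quantum_ms=1.0,
                         switch_cost_ms=0.05):
    """Toy lock-step scheduler: each round, up to n_cores ready threads run
    one quantum (or less, if they finish), and the round pays one
    context-switch cost. Returns (makespan_ms, worst_first_run_delay_ms)."""
    remaining = list(thread_work_ms)
    first_run = [None] * len(remaining)
    ready = list(range(len(remaining)))
    t = 0.0
    while ready:
        batch, ready = ready[:n_cores], ready[n_cores:]
        for i in batch:
            if first_run[i] is None:
                first_run[i] = t        # first time this thread gets a core
        t += max(min(remaining[i], quantum_ms) for i in batch) + switch_cost_ms
        for i in batch:
            remaining[i] = max(0.0, remaining[i] - quantum_ms)
        ready += [i for i in batch if remaining[i] > 1e-9]  # requeue unfinished
    return t, max(first_run)

# Three 4 ms game threads plus 6 ms of driver work, on four cores:
print(simulate_round_robin([4.0, 4.0, 4.0, 6.0], 4))         # one driver thread
print(simulate_round_robin([4.0, 4.0, 4.0] + [1.0] * 6, 4))  # six driver threads
```

In this toy model, splitting the same 6 ms of driver work into six threads finishes the whole frame a bit sooner (about 5.25 ms instead of 6.3 ms), but the worst-off thread now waits about 2.1 ms before it ever gets a core, versus no wait with one driver thread: throughput improves while per-thread latency gets worse, which is exactly the distinction being argued here.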
So how do you fix this? Obviously, by purchasing a CPU with more processor cores, which costs several hundred dollars more, just to achieve the same performance as with the classical GPU driver [because multithreading for the sake of multithreading adds zero performance].
So yeah, I have some concerns.