Will DX11 help end MicroStuttering?



Unfortunately or not, SLI/Crossfire are here to stay.

I can't be sure, I suppose no one can, but I don't think they push SLI/Crossfire just to sell more GPUs (at least not exactly). They push it because some people need/want that power, and Nvidia/ATI are unable to cost-effectively produce a single GPU that performs as fast as top-tier SLI/Crossfire, so to still have products for that market they offer SLI or Crossfire.

While most people won't care, there are a few of us who want to 'improve on the best', and Crossfire/SLI is the only way to do that. I mean, consider how ridiculously expensive a single GPU based on current tech would have to be in order to be as powerful as the X2 cards. It just doesn't make sense to produce such high-end tech that only a few will buy when they can produce mid-high-end GPUs and sell two of them to the high end, instead of some super GPU that is useless for everyone but the enthusiasts.

They are in the business of making money, after all. I mean, I'd love to have a single card as powerful as my double 4890s, but it just isn't worth the R&D to make a card that massive only to sell it to a fraction of the market at a price even fewer would want to pay. Even now we see lower GPUs in pairs outperforming higher-end GPUs for less money. If such a mythical super single-GPU card were around, say a 4990, and it was indeed as fast as a 4890 Crossfire, it would easily cost 2.5 times as much as the Crossfire alternative. (Two 4850s are a good example of this: they easily outpace a 4890, but a pair of them costs significantly less than a single 4890 does.)

It is all about the bottom line. If one does not want an SLI/Crossfire rig, then there is no need to buy into one. But I don't think we will ever see a single-GPU card aimed at the enthusiast sector again. It just costs too much, for the user and for the development. We are, for years to come, going to live in a world where the most powerful GPUs are equivalent to the 275/4890, and enthusiasts will just have to buy 2, 3 or 4 of them to get the most out of it.

Now, back on topic: I have had 3 SLI or Crossfire setups (1 SLI, 2 Crossfire). I have never in my time with these rigs seen anything like the microstuttering you see in the videos on the net. I don't know if I am lucky, or if it is indeed all about a balanced system. But I don't even consider it when I make a purchase; it simply is a non-issue to me, while it may be for others.
 
I'm of the opinion that MS is seen by only certain people. Of course, first it has to exist on those people's rigs, but also, just like some can't stand even a display rate of 60 cycles, so too do these people see MS. I'd guess you'd put it down to an unfortunate ability, heheh.
Now, having a poorly balanced system will obviously make MS occur at a much higher level, and also be detectable by most people as well. That's just MHO, though.
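For what it's worth, the "only certain people see it" effect falls out of the arithmetic: microstutter is about the spacing between frames, not the average rate, so an FPS counter can look perfectly healthy while frames actually arrive in short/long pairs. A minimal sketch; the timestamps and the 8 ms / 32 ms pattern are made-up numbers for illustration, not measurements:

```python
def frame_intervals(timestamps_ms):
    """Gaps between consecutive frame timestamps, in ms."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def average_fps(timestamps_ms):
    """Average frame rate over the whole capture."""
    span_s = (timestamps_ms[-1] - timestamps_ms[0]) / 1000.0
    return (len(timestamps_ms) - 1) / span_s

# Smooth renderer: a new frame every 20 ms.
smooth = [i * 20.0 for i in range(31)]

# "Microstuttering" renderer: same average, but frames arrive in
# short/long pairs (8 ms, then 32 ms), as dual-card rigs are reported to do.
stutter = [0.0]
for i in range(30):
    stutter.append(stutter[-1] + (8.0 if i % 2 == 0 else 32.0))

print(average_fps(smooth), average_fps(stutter))  # both ~50 fps
print(max(frame_intervals(smooth)))               # worst gap: 20 ms
print(max(frame_intervals(stutter)))              # worst gap: 32 ms
```

Both captures report ~50 fps, but the second one's worst-case gap is as slow as a 31 fps game, which is exactly the kind of thing one viewer notices and another doesn't.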
 
Got to say, what gamerk316 is saying does make more sense to me now I have had some time to think it over.
If it were hardware based, involving the CPU, then in a lot of cases simply turning on V-sync would cure it. Also, from a CPU point of view, you are just standing still to go forwards anyway. If it allows more threads to be run on the hardware and enables MT with the CPU, then you are getting more threads fed from more sources, which is the same difference, isn't it?
It's much more likely, from what I know of it, that it's something to do with the rendering timing/process, like gamerk316 is saying.
It could all come down to poorly coded software at the end of the day. A console port that hasn't been tuned to allow for the difference in Hz from the set plodding-along of the console would cause juddering graphical anomalies, which could easily be compounded by throwing dual cards at it.
That's just one of several coding issues that would cause symptoms similar to MS, and that's before we even get to the hardware.

Mactronix
 
It's been shown that the timing is thrown off. If it's PCIe, then we'll know when the i5s come out, as the response time will be cut quite a bit. But, again, reread what I posted from Tom's article. The string is loaded, and the GPUs are idling, or every once in a while one of them is, anyway. Tom's states this. It wasn't Tom's article that got me thinking this either; it was to show what I meant, and was trying to say, in a better form.

As to plopping in an old P4, this is where it changes, even for them. In essence, it is the OS that's going to change things, or more precisely the DX, with Vista/W7. If it were simply the data flowing through the PCIe lanes, then we'd see it all the time, wouldn't we, if the PCIe lanes couldn't keep up?

Some accesses take longer than others, and if the string's full, there are going to be lulls on the other end, and choppiness. Change that scenario, and the CPU is handling everything in a much more timely manner. It's the way the data stream is currently constructed and delivered versus what's to come.

You yourself are of the notion that MT in games exists, but while that may be true for some things, here's where I disagree with you:

You said : "More and more, we are seeing games taking advantage of more cores, yet even in quad optimized games (Lets use GTA:IV) we see MS."

While I tried to say what Toms article said much more clearly:


"Multi-threaded rendering? 'But,' you're saying, 'we've had multi-core CPUs for several years now and developers have learned to use them. So multi-threading their rendering engines is nothing new with Direct3D 11.' Well, this may come as a surprise to you, but current engines still use only a single thread for rendering."


You also said

" Also remember, even in non-threaded apps (there are very few "single threaded" apps), the second core will be used simply due to the way the OS handles the data. "


While Toms says

"But as you can see, a large share of the workload was still on the main thread, which was already overloaded. That doesn’t ensure good balance, needed for good execution times. "

So, where are these very few single-threaded apps?
 
Everything has to be sorted and presented. If there's a long line being moved very quickly, there are bound to be anomalies in that line, or stream, as they're not all the same, and they don't all require the same resources.

Add more lines moving at the same speed, and a much more even flow occurs. CF/SLI cards are only doing half as much work each for the same workload. I'm not saying in the end it couldn't be drivers; I'm just trying to present a case here that's, to me, very plausible.
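That half-the-work-each arrangement is usually alternate frame rendering (AFR), and it suggests one plausible mechanism for the uneven timing: the CPU can hand the second GPU its frame almost immediately after the first, so finished frames come out in closely spaced pairs. A toy simulation; the 40 ms GPU render time and 5 ms CPU dispatch time are invented numbers, not measurements:

```python
GPU_FRAME_MS = 40.0      # assumed time for one GPU to render a frame
CPU_DISPATCH_MS = 5.0    # assumed time for the CPU to prepare a frame

def afr_present_times(n_frames):
    """When each frame finishes, with frames alternating between 2 GPUs."""
    gpu_free = [0.0, 0.0]  # when each GPU becomes idle again
    cpu_ready = 0.0        # when the CPU has the next frame's data ready
    presents = []
    for i in range(n_frames):
        gpu = i % 2  # AFR: even frames to GPU 0, odd frames to GPU 1
        start = max(cpu_ready, gpu_free[gpu])
        done = start + GPU_FRAME_MS
        gpu_free[gpu] = done
        cpu_ready = start + CPU_DISPATCH_MS
        presents.append(done)
    return presents

p = afr_present_times(8)
intervals = [b - a for a, b in zip(p, p[1:])]
print(intervals)  # alternating short/long gaps between finished frames
```

Under these assumptions the gaps come out alternating 5 ms / 35 ms: the average is a smooth 50 fps, but the frames land in bursts, which is the microstutter signature.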
 
I don't believe in the Loch Ness Monster or Sasquatch, but I believe MS exists. I've seen it on my rig in Crysis, Far Cry and CoD4. After tweaking endless settings and failing to get rid of MS, it must have been ghosts in the machine. That, or my RAM wasn't SLI certified, lol.
 
V-sync doesn't alter the data stream, granted, but it alters the way it's presented. You can just as easily get a jittery effect with all the buffers saturated and V-sync enabled. If it were a hardware timing issue, as in CPU to buffers to GPU, then V-sync would iron it out in a lot of cases.
However, it's obviously a deeper issue than that, and for my money it's much more likely to be based around timings and synchronisations within the rendering process than on a lack of CPU power.
You may be correct, and adding the MT effect may help to smooth the problem out.
Guess we are at wait-and-see time. When DX11 gets properly introduced and we get some systems with DX11 software and hardware, we will be able to tell if your assessment is correct or not.

Where did i say anything about MS at 60 fps ?

Mactronix
 
OK, first, MS doesn't occur until lower fps occur. V-sync is usually synced with the monitor, or 60 cycles, and that's what I meant.
Here's what I see as my point not getting through. It's not that I'm saying the CPU isn't fast or powerful enough, but the data required for its reaction is piped in 1 long stream, 1 that is, as Tom's put it, overloaded. I think that's where you may be misunderstanding me. Due to the MT and the texture compression available in DX11, the calls will be dispersed, and it will be easier for the CPU to get its hands on the data; it's not that the CPU is inadequate. It's just going to be a much more even flow of data TO the CPU.
But evidence of CPU strain can be glimpsed in the fact that a quad will often do better than a dual, because even if the game is MT optimised, those particular opts are for other cores anyway, and a dual has nowhere to go but the 2 cores available, so the overloading can be seen there. That's our current scenario, which will be radically changed with the onset of DX11, where we'll see AI on 1 core as we have now, plus a multi-sourced flow of data from the primary thread, which can be dispersed.
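The change Tom's is describing can be put in rough numbers. Today, every draw call is prepared on the one render thread; DX11's deferred contexts let that recording be split across cores, with the main thread only submitting the pre-recorded lists. A back-of-the-envelope model; the per-call and per-submit costs below are made up purely for illustration:

```python
CALL_COST_MS = 0.02     # assumed CPU cost to prepare one draw call
SUBMIT_COST_MS = 0.001  # assumed cost to submit one pre-recorded list

def frame_cpu_time_ms(draw_calls, cores):
    """Worst per-core time when call recording is split across `cores`."""
    per_core = -(-draw_calls // cores)  # ceiling division
    record = per_core * CALL_COST_MS
    # The main thread still submits every recorded command list.
    submit = cores * SUBMIT_COST_MS
    return record + submit

print(frame_cpu_time_ms(3000, 1))  # everything on the single render thread
print(frame_cpu_time_ms(3000, 4))  # recording spread over a quad core
```

The point isn't the exact figures, it's the shape: the same workload stops being one long overloaded stream and becomes several shorter ones, so the frame-to-frame delivery evens out even though no core got any faster.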

Like I said, in CF/SLI the cards have only half the work to do, so the buffers aren't overloaded on the GPU side. Since CPUs are serial in nature and use cache, it'd be hard for me to believe they'd be overloaded either. It's the flow.