Hi twelch82. I believe we're making pretty similar points. We do say input lag matters in "twitch" games such as FPS and (as someone mentioned in these comments) racing games. The point we're making is that it doesn't really matter if you're playing, e.g., Civilization V, and that in many other cases it matters only to an extent.

The part about input lag is incorrect. Let's say your own baseline lag (reaction time) is 250 ms. Does that mean that if you are playing a game with less than 250 ms of lag, it doesn't matter? No. Whatever lag the game has is added on top of your own. Saying it doesn't matter is like saying brakes that stop a second faster don't matter because it may take you a second to react and press them in the first place.

Secondly, input lag is not consistent. Input, like rendering, is usually processed once per frame. That means that if you click the mouse, the click actually registers the next time the game logic for a frame is processed. When will that be? It could be immediate, or it could be as much as a full frame away. If you are running at 30 FPS, the added input lag therefore varies between 0 and 33 ms. That matters because consistent lag can be compensated for, while seemingly random lag is much harder to deal with.
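To make the "0 to 33 ms at 30 FPS" point concrete, here is a minimal sketch of my own (not from the comments above), assuming a fixed 30 FPS game-logic tick and clicks arriving at uniformly random times; the added lag is simply the wait until the next tick.

```cpp
// Sketch: per-frame input sampling adds a variable 0..frame_time delay
// on top of all other sources of lag. Numbers are illustrative assumptions.
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const double frame_ms = 1000.0 / 30.0;   // ~33.3 ms per frame at 30 FPS
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> click(0.0, 1000.0);

    for (int i = 0; i < 5; ++i) {
        double t = click(rng);                                    // moment the mouse click arrives
        double next_tick = std::ceil(t / frame_ms) * frame_ms;    // next game-logic update
        std::printf("click at %6.1f ms -> sampled at %6.1f ms (added lag %4.1f ms)\n",
                    t, next_tick, next_tick - t);
    }
}
```

Running it shows the added lag bouncing anywhere between roughly 0 and 33 ms from click to click, which is the "seemingly random" component described above.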
The behavior of V-Sync is implementation-specific (GPU drivers/engine). By using render ahead, swap chains, Adaptive V-Sync, etc., you can avoid frame halving.

The info on V-Sync causing frame rate halving is out of date by about a decade. With multithreading, the game can work on the next frame while the previous frame is waiting for V-Sync. Just look at BF3: with V-Sync on you get a continuous range of FPS under 60, not just fixed steps like 60, 30, or 20. Also, DirectX doesn't support triple buffering.
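Here is a small simulation sketch of my own (not from the comments), assuming a 60 Hz display, 20 ms of work per frame, and a deliberately simplified presentation model, to show why a frame queue lets V-Sync land between 60 and 30 FPS instead of snapping to 30.

```cpp
// Strict double buffering vs. a render-ahead queue under V-Sync (simplified model).
#include <cmath>
#include <cstdio>

int main() {
    const double refresh_ms = 1000.0 / 60.0;   // vblank every ~16.7 ms
    const double render_ms  = 20.0;            // per-frame work (assumption)
    const int    frames     = 60;

    // Strict double buffering: the next frame can't start until the finished one
    // is scanned out at a vblank, so every frame costs a whole number of refreshes.
    double t = 0.0, first_db = 0.0, last_db = 0.0;
    for (int i = 0; i < frames; ++i) {
        double done    = t + render_ms;
        double present = std::ceil(done / refresh_ms) * refresh_ms;
        if (i == 0) first_db = present;
        last_db = present;
        t = present;                           // blocked until the swap happens
    }

    // With a render-ahead queue (triple buffering), the game keeps working while a
    // finished frame waits for vblank; since frames take longer than a refresh the
    // queue never backs up, so frame i simply completes at i * render_ms.
    double first_q = std::ceil(render_ms / refresh_ms) * refresh_ms;
    double last_q  = std::ceil(frames * render_ms / refresh_ms) * refresh_ms;

    std::printf("double buffered : ~%.1f FPS\n", (frames - 1) * 1000.0 / (last_db - first_db));
    std::printf("with frame queue: ~%.1f FPS\n", (frames - 1) * 1000.0 / (last_q - first_q));
}
```

The double-buffered path settles at 30 FPS, while the queued path lands around 50 FPS, i.e., the continuous "under 60" range the BF3 example describes.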
In theory yes; in practice, you won't see perfect 2048 MB utilization. Say you're trying to load a 30 MB texture with 2020 MB already in use: you can't go to 2050 MB, nor can you "partially load" the texture, so first you need to swap out another asset and then load that 30 MB texture, almost always falling short of 2048 anyway. So, in essence, values that high mean you've hit the card's limit (that test IIRC was still on the GTX 690, which is limited to 2 GB... and, yes, it probably would have been best to repeat it on the Titan, but the point would have been the same).

QUOTE: "not even a 2 GB card is sufficient" (in reference to 2028 MB)

Isn't 1 KB 1024 B, 1 MB 1024 KB, and so on, meaning that there would be enough memory? A technicality, I know, but still.
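A tiny sketch of the arithmetic being argued here, using the numbers from the comment plus an invented eviction size, just to show why reported usage tends to sit a little under the 2048 MB cap rather than hitting it exactly:

```cpp
// 2 GB = 2048 MB, so 2028 MB technically fits; but loading near the cap
// forces an eviction first, which keeps utilization short of 2048 MB.
#include <cstdio>

int main() {
    const int capacity_mb = 2 * 1024;   // "2 GB" card = 2048 MB
    int used_mb           = 2020;       // current utilization from the example
    const int texture_mb  = 30;         // asset we want to make resident

    if (used_mb + texture_mb > capacity_mb) {
        const int evicted_mb = 64;      // illustrative size, not from the article
        used_mb -= evicted_mb;          // swap another asset out first
    }
    used_mb += texture_mb;              // then load the texture
    std::printf("utilization: %d / %d MB\n", used_mb, capacity_mb);  // 1986 / 2048
}
```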
Games are currently going through a transitional period due to the new consoles coming out. Those consoles have an awesome 8 GB of shared RAM (about 5 GB of it usable by games), which developers are going to start using more and more. Games are already pushing the limits with settings turned up (BF4 throws up loads of VRAM problems on a 2 GB card with the Ultra preset), so as more and more demanding games come out on these consoles, 2 GB just ain't gonna cut it. I'm fine with that because I upgrade my cards as soon as a newer generation comes out, but for people holding onto cards for 2, 3 or 4 years at a time, being told to invest in the extra VRAM, when 1440p is slowly becoming the new standard and texture resolutions themselves are going up, isn't that bad of an idea really. But hey, I'll happily answer your questions when you come back in a couple of years' time asking why you're getting massive juddering in intensive games 😉

I would love to see a Tom's article debunking the 2 GB vs. 4 GB graphics card race. For instance, people spam the Tom's forum daily advising others to buy the 4 GB GTX 770 over the 2 GB. Truth is, the 4 GB costs $50 more and offers NO benefit over the 2 GB. Even worse, I see people buying/suggesting the 4 GB 760 over a 2 GB 770 (which runs only $30 more and is worth every penny). I am also curious about the 4 GB 770 SLI scenario. From everything I have seen, even in SLI the 4 GB offers no real-world benefit (with the exception of MAYBE a few frames per second more in three-monitor scenarios, but the rates are unplayable regardless, so the gain is negligible). The other myth is that the 4 GB 770 is more "future proof". Give me a break. GPU and future proof do not belong in the same sentence. Further, if they were going to be "future proof" they would be "now proof". There are games plenty demanding enough to show an advantage of 4 GB over 2 GB, and they simply don't. It's tiring seeing people give shoddy advice all over the net. I wish a reputable website (Tom's) would settle it once and for all. In my opinion, the extra 2 GB of VRAM isn't going to make a tangible difference unless the GPU architecture changes...
Eh, I meant to say ON in both cases, but, without Chris also editing my forum posts, sometimes I get distracted 😉

Clarification: what I mean by render ahead is the D3D SetMaximumFrameLatency method (basically a frame queue). You are perfectly correct about the input lag when using this method. Frame rate halving is a V-Sync issue; your statement about render ahead not solving halving in a V-Sync OFF scenario is therefore nonsensical.

mechan : 1. Render ahead, if you mean tweaking the pre-rendered frames queue at the driver level, will not solve frame halving in a V-Sync OFF scenario. What it does is help prevent stuttering in V-Sync ON scenarios by providing a longer queue of ready-to-render frames.

If you're using render ahead in its proper sense of setting the flip queue size, then yes, that is essentially equivalent to triple (or more) buffering, and in that sense it solves frame rate halving at the cost of additional input lag.
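For reference, here's a hedged sketch of how that frame-latency setting is typically applied through DXGI. SetMaximumFrameLatency is the real DXGI call being referred to; the SetRenderAhead wrapper, the already-created device, and the minimal error handling are my own illustrative assumptions.

```cpp
// Capping the render-ahead queue via DXGI (Windows / Direct3D 11).
#include <d3d11.h>
#include <dxgi.h>

void SetRenderAhead(ID3D11Device* device, UINT maxQueuedFrames)
{
    IDXGIDevice1* dxgiDevice = nullptr;
    if (SUCCEEDED(device->QueryInterface(__uuidof(IDXGIDevice1),
                                         reinterpret_cast<void**>(&dxgiDevice)))) {
        // 1 = lowest input lag; larger values = smoother pacing but more latency,
        // which is exactly the trade-off described in the comment above.
        dxgiDevice->SetMaximumFrameLatency(maxQueuedFrames);
        dxgiDevice->Release();
    }
}
```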