Games are currently going through a transitional period due to the new consoles coming out. Those consoles have an awesome 8 GB of shared RAM (about 5 GB of it available to games), which developers are going to start using more and more. Games are already pushing the limits with settings turned up (BF4 throws up loads of VRAM problems on a 2 GB card at the Ultra preset), so as more demanding games built for these consoles come out, 2 GB just isn't gonna cut it.

I'm fine with that because I upgrade my cards as soon as a newer generation comes out, but for people holding onto a card for 2, 3 or 4 years at a time, being told to invest in the extra VRAM isn't that bad of an idea really, given that 1440p is slowly becoming the new standard and texture resolutions themselves are going up. But hey, I'll happily answer your questions when you come back in a couple of years asking why you're getting massive juddering in intensive games 😉

I would love to see a Tom's article debunking the 2GB vs. 4GB graphics card race. For instance, people spam the Tom's forum daily giving advice to buy the 4GB GTX 770 over the 2GB. Truth is, the 4GB costs $50 more and offers NO benefit over the 2GB. Even worse, I see people buying/suggesting the 4GB 760 over a 2GB 770 (which runs only $30 more and is worth every penny). I am also curious about the 4GB 770 SLI scenario. From everything I have seen, even in SLI the 4GB offers no real-world benefit (with the exception of MAYBE a few frames per second more in triple-monitor scenarios, but the rates are unplayable regardless, so the gain is negligible). The other myth is that the 4GB 770 is more "future proof". Give me a break. GPU and future proof do not belong in the same sentence. Further, if it were going to be "future proof" it would be "now proof". There are games plenty demanding enough to show an advantage of 4GB over 2GB - and they simply don't. It's tiring seeing people give shoddy advice all over the net. I wish a reputable website (Tom's) would settle it once and for all. In my opinion, the extra 2 GB of RAM isn't going to make a tangible difference unless the GPU architecture changes...
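To put some rough numbers behind the VRAM debate, here is a back-of-the-envelope sketch (my own arithmetic, not from the article). It assumes uncompressed RGBA8 textures with a full mip chain; real games use compressed formats (BCn/DXT), so actual footprints are lower, but the scaling is the point: doubling texture resolution quadruples memory use.

```cpp
#include <cassert>
#include <cstdint>

// Approximate VRAM cost of one uncompressed texture with a full mip chain.
// Each mip level halves both dimensions, so the chain adds roughly one
// third on top of the base level (1 + 1/4 + 1/16 + ... ~= 4/3).
uint64_t textureBytes(uint64_t width, uint64_t height, uint64_t bytesPerPixel) {
    uint64_t total = 0;
    for (;;) {
        total += width * height * bytesPerPixel;
        if (width == 1 && height == 1) break;
        if (width > 1)  width  /= 2;
        if (height > 1) height /= 2;
    }
    return total;
}

// One 2048x2048 RGBA8 texture: 16 MiB base, ~21 MiB with mips.
// At 4096x4096 it quadruples to ~85 MiB - a few hundred such textures
// resident at once is how a 2 GB card runs out of headroom.
```

Division by 1 MiB (1048576) gives 21 MiB for a 2048x2048 texture and 85 MiB for a 4096x4096 one, which is why "texture resolutions going up" translates so directly into VRAM pressure.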
Thank you Hansrotec! You have a valid point on watercooling. Building custom water loops to cool graphics cards is an advanced option certainly worth considering. While considerably more efficient, similar principles apply as far as air flow (through the radiator), ambient temperature, and throttling... those are the ones we wanted to draw attention to in the article. - Filippo

I thought my last comment might have seemed too negative, and I did not mean it in that light. I did enjoy the read, and look forward to more!
Jaroslav Jandek and Lowenz are correct. For a specific implementation of triple buffering within a DirectX game engine, see Valve's Source engine. That DirectX does not support triple buffering is another myth... although a harder one to detect, as many game engines built on top of DirectX do not implement that feature. - Filippo

+1. DirectX DOES support triple buffering by setting DXGI_SWAP_CHAIN_DESC.BufferCount = 3; (or D3DPRESENT_PARAMETERS.BackBufferCount = 2; for DX9). It actually supports more than triple buffering - Direct3D 9Ex (Vista+'s WDDM) supports up to 30 buffers.
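For context, here is a minimal sketch of where that BufferCount field lives on the DXGI (D3D10/11) side. This is a Windows-only fragment, not a complete program: it assumes an existing window handle `hwnd` and the usual device/swap-chain creation call around it, and the resolution and format values are only illustrative.

```cpp
// Sketch: requesting a triple-buffered swap chain via DXGI.
// Assumes HWND hwnd exists; error handling and device creation omitted.
DXGI_SWAP_CHAIN_DESC desc = {};
desc.BufferDesc.Width  = 1920;
desc.BufferDesc.Height = 1080;
desc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count  = 1;                           // no MSAA
desc.BufferUsage       = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.BufferCount       = 3;                           // triple buffering
desc.OutputWindow      = hwnd;
desc.Windowed          = TRUE;
desc.SwapEffect        = DXGI_SWAP_EFFECT_DISCARD;
// desc is then passed to D3D11CreateDeviceAndSwapChain (or
// IDXGIFactory::CreateSwapChain) to create the buffered chain.
```

Whether the extra buffer actually reduces stutter still depends on how the engine paces its Present() calls, which is why some DirectX titles never expose the option.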
Hi Adroid and thank you for your comment! I think we take a pretty clear position on this matter at the bottom of page 6. But I agree, much more could be said! - Filippo
Hi gallovfc and thank you for your comments! Disabling V-sync (i.e., V-sync OFF) may result in screen tearing that shows along HORIZONTAL lines. Try enabling V-sync (or forcing it in the display driver, if that's an option), and see if that solves the issue. - Filippo

I know how V-sync works; vertically it's OK, but I was experiencing horizontal tearing on my 3240x1920 Eyefinity setup (using my HD 7950 Boost). Wth is that??
Hi mamasan2000, indeed, agreed: input lag matters for all "twitch" games, racing being one of them! - Filippo

Input lag matters in more than just first-person shooters. Try a racing simulator. With input lag.

The other thing mentioned above is the VRAM amount, 2GB vs. 4GB. Arma 3 is pushing my 2GB card; the textures in that game take up approx. 1.8GB of VRAM. So I would say 4GB is about future proofing, especially if you plan to run at anything higher than 1080p. In Arma 3 you can't even select certain texture resolutions with only 1GB of VRAM; they're only available with 2GB or more.
Thank you FunSurfer, too much moving of images around... We'll fix it!

On Page 3: "In the image below" should be "In the image above"
Thank you kzaske! - Filippo

It's been a long time since Tom's Hardware had such a good article. Very informative and easy to read. Thank you!
Thank you Formata! It's definitely not a novel concept in engineering. It's actually applied in a very wide range of fields, so I found it surprising that it hadn't yet been applied to video card performance... but, then, we figured out putting wheels on suitcases only after we landed on the moon... - Filippo

"Performance Envelope" = Genius. Nice work Filippo
Thank you ddpruitt! If there are imprecisions, do point them out - we strive to be accurate in our articles and are happy to make corrections/clarifications when they are warranted! Part 2 won't have the same-cards/different-memory comparison, but it WILL have a same-cards/different-PCIe-configuration comparison, which should also be interesting! Cheers - Filippo

Very good article, even though there are some technical errors. I look forward to seeing the second half! I would also be interested in seeing some detailed comparisons of the same cards with different amounts and types of VRAM, and of the overall impact of case types on performance.
Pretty good foresight... yes, we do talk about inputs and connectivity in part 2! DisplayPort, HDMI (2.0), DVI... we'll cover all of them and the implications from a display/card perspective. - Filippo

In the second part of the article, can you debunk the HDMI 2.0 connectivity myth? MYTH: No graphics card has an HDMI 2.0 port, therefore you cannot game at more than 30fps on a 4K HDTV, even if the TV is HDMI 2.0 enabled.
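Some quick link-budget arithmetic shows where that 30fps figure comes from (my numbers, not the article's). HDMI 1.4 tops out at a 340 MHz TMDS clock, about 10.2 Gbit/s raw and roughly 8.16 Gbit/s of effective data after 8b/10b encoding; HDMI 2.0 raises that to 18 / 14.4 Gbit/s.

```cpp
#include <cassert>

// Raw pixel-data rate in Gbit/s for a video mode, ignoring blanking
// intervals (real link requirements are somewhat higher than this).
double rawGbps(double width, double height, double refreshHz, double bitsPerPixel) {
    return width * height * refreshHz * bitsPerPixel / 1e9;
}

// 4K @ 60 Hz at 24-bit color needs ~11.9 Gbit/s of pixel data alone,
// which exceeds HDMI 1.4's ~8.16 Gbit/s effective rate; 4K @ 30 Hz
// (~5.97 Gbit/s) fits, hence the 30 Hz cap without HDMI 2.0.
```

This also hints at why the claim is a "myth": tricks like 4:2:0 chroma subsampling cut the bits per pixel enough to squeeze a 4K60 signal through older link hardware, at a visible cost in color resolution.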
There is no reason to maintain a given ratio between the two. With rare exceptions, overclocking the core will give you the largest performance gains, as that is typically the bottleneck in most games. Overclocking memory will help, but shouldn't have nearly as big an impact. - Filippo

Nice article! I would like to know more about overclocking, specifically the core clock to memory clock ratio. Does it matter to keep a certain ratio between the two, or can I overclock either as much as I want? Thanks!