Jaroslav Jandek :
Christopher Shaffer :
V-Sync functions by dropping FPS to 1/2 at a certain level; there's no two ways about it. Adaptive V-Sync (Nvidia) is a little bit more "adaptive" but still has similar issues.
Yes, V-Sync halves the framerate if there is no new frame ready in the queue at scan time. The issue is with "lag spikes", where you temporarily drop below 60 fps (stuttering) - not when you are constantly below 60 fps (in which case you may just as well disable V-Sync altogether) - and that stuttering is solved by the methods I have described.
Adaptive V-Sync has issues, yes. But frame halving simply does not happen with adaptive V-Sync (I get whatever framerate below 60 the GPU can handle, not just 30).
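To make the difference concrete, here is a minimal sketch of the frame-pacing math, assuming a plain double-buffered model at a 60 Hz refresh (real drivers add render queues, triple buffering, etc., so treat the numbers as illustrative):

REFRESH_HZ = 60.0

def vsync_fps(render_fps):
    # Classic double-buffered V-Sync: a new frame can only be shown on a
    # refresh boundary, so the displayed rate snaps down to 60, 30, 20, 15...
    if render_fps >= REFRESH_HZ:
        return REFRESH_HZ
    divisor = 2
    while REFRESH_HZ / divisor > render_fps:
        divisor += 1
    return REFRESH_HZ / divisor

def adaptive_vsync_fps(render_fps):
    # Adaptive V-Sync: sync on at/above the refresh rate, off below it,
    # so a 50 fps render rate is shown at ~50 fps (with possible tearing).
    return min(render_fps, REFRESH_HZ)

for fps in (75, 59, 45, 31):
    print(f"{fps} fps rendered -> {vsync_fps(fps):.0f} displayed with V-Sync, "
          f"{adaptive_vsync_fps(fps):.0f} with adaptive V-Sync")

In this model a momentary dip to 59 fps gets displayed at 30 fps under plain V-Sync (the lag spike), while adaptive V-Sync just shows 59.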
Christopher Shaffer :
That said, there is a TON of outdated, incorrect and just plain WRONG information about VRAM. For example, you claim VRAM size doesn't matter, yet point to the (correct) fact that the size of the textures and the number of on-screen textures are affected by VRAM... If your VRAM is not large or fast enough to handle the quality setting you've chosen, your FPS and ultimately, response time, are negatively affected. This is science and math, not opinion.
What he wrote in the article is "The memory capacity a graphics card ships with has no impact on that product's performance, so long as the settings you're using to game with don't consume all of it." That is pretty much what you are saying, so I don't see your issue. The reason performance suffers at high resolution or quality settings is that memory gets allocated from the system's RAM instead of the GPU's VRAM via virtual memory once you run out of VRAM.
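A back-of-the-envelope sketch of why that spill hurts so much - the bandwidth figures below are illustrative assumptions (roughly GDDR5-class on-card bandwidth versus a PCIe 3.0 x16 link to system RAM), not measurements of any particular card:

VRAM_BW_GBPS = 200.0   # assumed on-card memory bandwidth (illustrative)
PCIE_BW_GBPS = 16.0    # assumed PCIe 3.0 x16 bandwidth to system RAM (illustrative)

def traffic_time_ms(touched_gb, vram_gb):
    # Time just to move one frame's worth of texture traffic, assuming the
    # portion that no longer fits in VRAM has to cross the PCIe bus instead.
    in_vram = min(touched_gb, vram_gb)
    spilled = max(0.0, touched_gb - vram_gb)
    return (in_vram / VRAM_BW_GBPS + spilled / PCIE_BW_GBPS) * 1000.0

print(traffic_time_ms(1.8, 2.0))   # fits in VRAM: ~9 ms of memory traffic
print(traffic_time_ms(2.6, 2.0))   # 0.6 GB spills: ~48 ms - an instant stutter

The exact numbers don't matter; the point is that the spilled portion is an order of magnitude slower to reach.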
The rest I don't have an issue with.
P.S.: It is not always clear who you are responding to (why not use quotes?).
Christopher Shaffer :
Adaptive V-Sync works well, but it is nothing in comparison to G-Sync (seriously, I'm in love with it - it's like playing a live video instead of a game), and yes, you're correct regarding the 1/2 FPS. However, Adaptive V-Sync is an Nvidia-only technology; AMD has provided some similar solutions, but in my experience they are more laggy. G-Sync is literally lag-free and tear-free. (Yes, Nvidia, I would love a job promoting G-Sync, because it's awesome and I truly believe in it.)
The key point in your last paragraph addresses your own concern with my statement about VRAM:
The reason the performance suffers with high resolution or quality settings is because memory is allocated from the system's RAM instead of the GPU's VRAM via virtual memory when you run out of VRAM.
This is only partially true. The on-screen textures need to be in VRAM before the GPU can actually display them. Only a true unified memory architecture would allow what you're describing.
That "suffering performance" will only occur when you don't have enough VRAM to handle the resolution and quality settings, so indeed your statement explains clearly why more VRAM *is* a benefit in the right situations, not why it is not a factor.
The issue I had with his article was the statement that most systems won't benefit from a GPU with more than 1GB of VRAM, and his incorrect description of how cards behave in SLI/CrossFire:
A GeForce GTX 690 with 4 GB, for instance, behaves like two 2 GB cards in SLI. Moreover, when you add a second card to your gaming configuration in CrossFire or SLI, the array's graphics memory doesn't double. Each card still has access only to its own memory.
Two 4GB cards do NOT act like two 2GB cards in SLI. They act like one very fast 4GB card and have access to 4GB of VRAM. The purpose of SLI is to link the cards so they can communicate which card is handling which frame, and so on.
If you own a 1 GB card and a 1080p display, there's probably no need to upgrade right this very moment. A 2 GB card would let you turn on more demanding AA settings in most games though, so consider that a minimum benchmark if you're planning a new purchase and want to enjoy the latest titles at 1920x1080.
As you scale up to 1440p, 1600p, 2160p or multi-monitor configurations, start thinking beyond 2 GB if you also want to use MSAA. Three gigabytes becomes a better target (or multiple 3 GB+ cards in SLI/CrossFire).
I'm sorry, but this is just WAY off. If you're only planning to play 5-year-old games like Skyrim (which is a GREAT game, nothing against it), then yes, 1GB is probably fine.
However, if you're playing modern titles like BF4, ACIV, etc., where Ultra settings and MSAA can push VRAM usage beyond the 3GB mark (BF4 averages about 2.6GB on my system but has hit as high as 3.2GB at times), then a 3GB or 4GB card is well worth considering, even at 1080p. I have Afterburner records showing that 64-player matches with a lot of action, at Ultra settings, 1080p and 4x MSAA, have hit 3.2GB - meaning a 3GB card would not suffice and FPS would suffer, and that figure is already close to the roughly 3.6GB of real-world available VRAM on a 4GB card.
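For anyone who wants to check their own numbers without Afterburner, here is a small sketch that samples VRAM usage once per second and tracks the peak; it assumes an Nvidia card with nvidia-smi available on the PATH:

import subprocess
import time

def sample_vram_mb():
    # Ask the driver for used/total memory (in MiB) on the first GPU.
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=memory.used,memory.total",
        "--format=csv,noheader,nounits",
    ], text=True)
    used, total = (int(v) for v in out.strip().splitlines()[0].split(","))
    return used, total

peak = 0
while True:
    used, total = sample_vram_mb()
    peak = max(peak, used)
    print(f"VRAM: {used} / {total} MB (peak so far: {peak} MB)")
    time.sleep(1)

Run it in the background, play a round, and check the peak it reports.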
His article is simply misleading. Can you play these games with a 1GB card? Yes. Will you be maxing out graphics settings, or even running some on Ultra? Maybe - but probably not if you want decent FPS.
You can get by with less VRAM, but you have to consider this: on a 2GB card you really only have about 1.8GB available at any given time. If BF4 on even near-Ultra settings uses 2GB or more, then a card with 3GB or more makes perfect sense.
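To get a feel for how quickly resolution and MSAA alone eat into that budget, here is a back-of-the-envelope sketch; it only counts basic color and depth render targets with assumed per-sample sizes, and ignores textures, shadow maps and driver overhead, which in a title like BF4 are far larger:

BYTES_PER_COLOR_SAMPLE = 4   # assumed RGBA8
BYTES_PER_DEPTH_SAMPLE = 4   # assumed 24-bit depth + 8-bit stencil

def render_target_mb(width, height, msaa_samples, buffers=2):
    # Color + depth per pixel, multiplied by the MSAA sample count, double-buffered.
    per_pixel = (BYTES_PER_COLOR_SAMPLE + BYTES_PER_DEPTH_SAMPLE) * msaa_samples
    return width * height * per_pixel * buffers / (1024 ** 2)

for name, w, h in (("1080p", 1920, 1080), ("1440p", 2560, 1440), ("2160p", 3840, 2160)):
    print(f"{name}: ~{render_target_mb(w, h, 4):.0f} MB of render targets at 4x MSAA")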
Just think about system memory: if the minimum *system* RAM for a given game were 2GB, you wouldn't install just 2GB, right? At least ~1GB of it would already be used by Windows.
This is the misleading thing about system requirements for games: when a requirement says "2GB DDR3", it doesn't mean that's all you need in your system; it means that's how much the GAME needs to run.
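The same arithmetic in code form, with the ~1GB Windows figure being just the rough estimate from above rather than a measured number:

GAME_REQUIREMENT_GB = 2.0   # what the requirements list says the game itself needs
OS_OVERHEAD_GB = 1.0        # rough assumption for Windows and background apps

print(f"Install at least ~{GAME_REQUIREMENT_GB + OS_OVERHEAD_GB:.0f} GB of RAM, "
      f"not just {GAME_REQUIREMENT_GB:.0f} GB")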