The Myths Of Graphics Card Performance: Debunked, Part 1

Sep 22, 2013
While I agree with the top post that there is some misinformation here, your post is just as misinformed. V-Sync functions by dropping FPS to 1/2 at a certain level; there's no two ways about it. Adaptive Sync (Nvidia) is a little bit more "adaptive" but still has similar issues.

G-Sync is really the only solution ATM.

That said, there is a TON of outdated, incorrect, and just plain WRONG information about VRAM. For example, you claim VRAM size doesn't matter, yet point to the (correct) fact that the size of the textures and the number of on-screen textures are affected by VRAM. That alone is enough to tell you that, depending on texture depth, your FPS *and* input lag could be affected by your VRAM. If your VRAM is not large or fast enough to handle the quality settings you've chosen, your FPS and, ultimately, response time are negatively affected. This is science and math, not opinion.

I will also state that 1GB vs. 2GB vs. 3GB vs. 4GB on GPUs is subjective: it depends entirely on the game, resolution, texture depth, etc., and it makes a massive difference in games that require a high FPS (such as BF4) and games that have a lot going on on-screen all the time (like AC4). I was able to hit a solid 100FPS on a single GTX 770 4GB while an online buddy was only able to hit about 85FPS with the EXACT same configuration, except that he had the 2GB version of my same card (Gigabyte GTX 770 OC 4GB / 2GB). The difference is that the game will use around 70% of your available VRAM. If you have 2GB, this is about 1.4GB; if you have 4GB, obviously it's twice that. To say that X amount of VRAM is not necessary completely ignores modern game architecture.
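Just to make that arithmetic explicit, here's a tiny sketch of the ~70% figure (the fraction is only my rough observation from Afterburner, not a measured constant):

```python
# Rough sketch of the "game uses ~70% of available VRAM" observation above.
# The 0.70 factor is an eyeballed estimate, not a measured constant.
def estimated_game_budget(vram_gb: float, usage_fraction: float = 0.70) -> float:
    """Return the VRAM (in GB) a game might claim on a card of the given size."""
    return vram_gb * usage_fraction

for vram in (2, 3, 4):
    print(f"{vram} GB card -> roughly {estimated_game_budget(vram):.1f} GB used by the game")
# 2 GB card -> roughly 1.4 GB used by the game
# 4 GB card -> roughly 2.8 GB used by the game
```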

Once I got about 1/2 way through this article, I checked the date again to make sure it wasn't 2010. I'm still surprised it's not.

Those of you who keep coming in here and saying that 4GB or whatever above 2GB makes no difference are ignoring the obvious: I can monitor a game via Afterburner and verify the exact amount of VRAM used at any time. During busy times in BF4 it regularly uses 2.6GB of VRAM. And yes, I am using SLI, and yes I'm aware that it's still effectively 4GB. And no, Afterburner isn't stupid and somehow gets the math wrong. The game simply has a lot going on under Ultra settings.
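If anyone wants to sanity-check those Afterburner numbers without Afterburner, here's a rough little logger that polls the driver's own counter through nvidia-smi instead (assumes an Nvidia card with nvidia-smi on the PATH; run it in a console while the game is up):

```python
# Minimal VRAM logger: polls nvidia-smi once a second while a game runs.
# Assumes an Nvidia GPU with nvidia-smi available on the PATH.
import subprocess
import time

def vram_used_mib(gpu_index: int = 0) -> int:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits", "-i", str(gpu_index)],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

if __name__ == "__main__":
    while True:
        print(f"VRAM used: {vram_used_mib()} MiB")
        time.sleep(1)
```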

The games tested and comparisons made are virtually irrelevant. Skyrim? Hitman? Give me a break. Skyrim doesn't even use DX11. Comparison of the average VRAM used by Steam users? WTF does that even matter? So, I don't need more VRAM because a bunch of other people using Steam won't pony up the cash for it?

That's what is called a logical fallacy: you're attributing one factor to another even though there is no relation between the two other than coincidence. It's the equivalent of saying I don't need size 12 shoes because the average shoe size is 10. It really depends on my feet, just like VRAM really depends on the use case, not the average user.

The average user can't hit 144FPS @1080p on Ultra settings on a modern game, either.

 

Jaroslav Jandek

Honorable
Jan 13, 2014
Yes, V-Sync halves the framerate if there are no more frames in the queue on scan. The issue is with "lag spikes" where you temporarily drop below 60 fps (stuttering), which is solved by the methods I have described - not with being constantly below 60 fps (in that case you may just as well disable V-Sync altogether).
Adaptive V-Sync has issues, yes. But frame halving simply does not happen with adaptive V-Sync (I get whatever framerate below 60 the GPU can handle, not just 30).
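A toy model of why plain V-Sync snaps to 30 fps on a 60Hz panel while adaptive V-Sync just gives you whatever the GPU can manage (purely illustrative, not how the driver actually schedules frames):

```python
# Toy model: on a 60 Hz display, plain V-Sync makes a frame wait for the next
# refresh tick, so a frame that takes just over 16.7 ms is shown after 33.3 ms
# (an effective 30 fps). Adaptive V-Sync simply stops waiting when the GPU
# can't keep up, so you get whatever rate the GPU manages.
import math

REFRESH_MS = 1000 / 60  # ~16.7 ms per scan on a 60 Hz panel

def effective_fps(render_ms: float, adaptive: bool) -> float:
    if adaptive and render_ms > REFRESH_MS:
        return 1000 / render_ms                   # sync effectively disabled below 60 fps
    ticks = math.ceil(render_ms / REFRESH_MS)     # wait for the next refresh tick
    return 1000 / (ticks * REFRESH_MS)

for ms in (15.0, 18.0, 25.0):
    print(f"{ms:.0f} ms/frame -> V-Sync: {effective_fps(ms, False):.0f} fps, "
          f"adaptive: {effective_fps(ms, True):.0f} fps")
# 18 ms/frame -> V-Sync: 30 fps, adaptive: 56 fps
```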

What he wrote in the article is "The memory capacity a graphics card ships with has no impact on that product's performance, so long as the settings you're using to game with don't consume all of it.", which is pretty much what you say - I don't see your issue. The reason the performance suffers with high resolution or quality settings is that memory is allocated from the system's RAM instead of the GPU's VRAM via virtual memory when you run out of VRAM.

The rest I don't have an issue with.
P.S.: it is not always clear who you are responding to (why not use quotes?).
 

panzerknacker

Honorable
Jul 13, 2012


Yours seems good.

Now mine: http://postimg.org/image/lr47zc31b/
and another: http://postimg.org/image/y8mj7gvzt/

With cable, I always ping <1ms.

WiFi is not recommended for gaming tbh.
 

Since the router is in the other room, about 40 feet away, running a wire isn't very practical.
 
It's not the cheapest option, but I find a wireless tunnel across two APs much more reliable than pure Wi-Fi. My PC and PS3 both connect over the tunnel to the modem and I've had no problems with it. If running a cable isn't an option and Wi-Fi isn't cutting it, a wireless tunnel might be an option for you.
 

lp231

Splendid


The amount of VRAM in a graphics card, as this article has stated, doesn't determine the performance, so a 4GB card will not perform faster than a card with 2GB of VRAM.
4GB is future-proof because games that load lots and lots of textures benefit from a graphics card with more VRAM. Right now, not too many games load that many textures, except for some titles. As time passes and newer games come out, a graphics card with 4GB of VRAM will be the better choice in the long run. Not everyone has the money to burn on a new graphics card every 6 months.
Between a GTX 760 4GB and a GTX 770 2GB, it's a no-brainer to pick the GTX 770 for faster performance, but spend a bit more and grab the 4GB version.

 


Well, not really. When the 2GB card runs out of memory, the article states you get a performance hit and stuttering due to it swapping back and forth with system RAM. So it does determine performance, depending on your screen resolution/game type/detail settings. In a scenario where VRAM isn't a problem, no, there is no performance difference - I guess that is what you were getting at...
 

atvalens

Reputable
Feb 16, 2014
I would love to see a Tom's article debunking the 2GB vs 4GB graphics card race. For instance, people spam the Tom's forum daily giving advice to buy the 4GB GTX 770 over the 2GB. Truth is, the 4GB costs $50 more and offers NO benefit over the 2GB. Even worse, I see people buying/suggesting the 4GB 760 over a 2GB 770 (which runs only $30 more and is worth every penny). I am also curious about the 4GB 770 SLI scenario. From everything I have seen, even in SLI the 4GB offers no real-world benefit (with the exclusion of MAYBE a few frames per second more in 3-monitor scenarios, but the rates are unplayable regardless, so the gain is negligible). The other myth is that the 4GB 770 is more "future proof". Give me a break. GPU and future proof do not belong in the same sentence. Further, if they were going to be "future proof" they would be "now proof". There are games that are plenty demanding enough to show the advantage of 2GB vs 4GB - and they simply don't. It's tiring seeing people giving shoddy advice all over the net. I wish a reputable website (Tom's) would settle it once and for all. In my opinion, the extra 2GB of RAM isn't going to make a tangible difference unless the GPU architecture changes...
Obviously you've never played any game that supports modding or third-party texture upgrades. My modded FNV and especially Skyrim installs regularly reported VRAM usage close to or over 2.5GB with HD textures and ENB. If I didn't have a 3GB HD7970, I would have crashed even more than I already did on Bethesda's flawed engine.
 

DrgnLrd

Reputable
Feb 16, 2014
So the Titan both cooled down and got faster with its overclock while having the same fan speed... Interesting.
 
Sep 22, 2013


Adaptive V-Sync works well, but is nothing in comparison to G-Sync (seriously, I'm in love with this. It's like playing a live video instead of a game), and yes, you're correct regarding the 1/2 FPS. However, Adaptive V-Sync is an Nvidia-only technology, though AMD has provided some similar solutions; in my experience, they are more laggy. G-Sync is literally lag-free and tear-free. (Yes, Nvidia, I would love a job promoting G-Sync because it's awesome and I truly believe in it.)

The key in your last paragraph addresses your own concern with my statement about VRAM:
The reason the performance suffers with high resolution or quality settings is because memory is allocated from the system's RAM instead of the GPU's VRAM via virtual memory when you run out of VRAM.

This is only partially true. The on-screen textures need to be in VRAM prior to actually being displayed by the GPU. Only true Unified Architecture will allow what you're describing.

That "suffering performance" will only occur when you don't have enough VRAM to handle the resolution and quality settings, so indeed your statement explains clearly why more VRAM *is* a benefit in the right situations, not why it is not a factor.

The issue I had in his article was the statement that most systems won't benefit from a GPU with more than 1GB of VRAM, and his incorrect description of how cards behave in SLI/Crossfire:
A GeForce GTX 690 with 4 GB, for instance, behaves like two 2 GB cards in SLI. Moreover, when you add a second card to your gaming configuration in CrossFire or SLI, the array's graphics memory doesn't double. Each card still has access only to its own memory.

Two 4GB cards do NOT act like two 2GB cards in SLI. They act like one very fast 4GB card and have access to 4GB of VRAM. The purpose of SLI is to link the cards for communication of which card is handling which frame, etc.

If you own a 1 GB card and a 1080p display, there's probably no need to upgrade right this very moment. A 2 GB card would let you turn on more demanding AA settings in most games though, so consider that a minimum benchmark if you're planning a new purchase and want to enjoy the latest titles at 1920x1080.

As you scale up to 1440p, 1600p, 2160p or multi-monitor configurations, start thinking beyond 2 GB if you also want to use MSAA. Three gigabytes becomes a better target (or multiple 3 GB+ cards in SLI/CrossFire).

I'm sorry, but this is just WAY off. If you're only planning to play 5 year old games like Skyrim (which is a GREAT game, nothing against it) then, yes, 1GB is probably fine.

However, if you're playing modern titles like BF4, ACIV, etc. where Ultra settings and MSAA can push the VRAM usage beyond the 3GB mark (BF4 averages about 2.6GB on my system but has hit as high as 3.2 at times) then a 3 or 4GB card is a great consideration, even at 1080p. I have Afterburner records showing that 64 player games with a lot of action going on with Ultra, 1080p and MSAA x4 have hit 3.2GB, meaning a 3GB card would not suffice, FPS would suffer, and it's relatively close to the real-world available VRAM of about 3.6GB.

His article is simply misleading. Can you play the games with a 1GB card? Yes. Will you be maxing out graphics settings or even have some on Ultra? Maybe, probably not if you want decent FPS.

You can get by with less VRAM but you have to consider this: on a 2GB card you really only have about 1.8GB available at any given time. If BF4 on even near-Ultra settings uses 2GB or more, then having a 3GB or more card makes perfect sense.

Just think about system memory: if your minimum available *system* RAM for a given game was 2GB, you wouldn't just install 2GB, right? Because at least ~1GB would be used by Windows.

This is the misleading thing in system requirements on games: when it says "2GB DDR3" it doesn't mean that's what you need in your system. It means that's how much the GAME needs to run.
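In plain numbers (the 1GB Windows figure is just my rough estimate from above):

```python
# Illustrative only: a "2GB DDR3" line in a game's requirements is the game's
# own working set, so the machine needs that PLUS whatever the OS is holding.
GAME_NEEDS_GB = 2.0   # what the requirements sheet lists
OS_BASELINE_GB = 1.0  # rough figure for Windows, per the post above

print(f"Sensible minimum to install: {GAME_NEEDS_GB + OS_BASELINE_GB:.0f} GB or more")
```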
 

Jaroslav Jandek

Honorable
Jan 13, 2014
I agree.

Incorrect. You should read about WDDM and virtual memory - it contains a GPU Memory Manager that virtualizes graphics memory (I have posted some info about it a few posts above). If you were an engine programmer, you could see for yourself by checking the DXGI_ADAPTER_DESC structure (to be precise the DedicatedVideoMemory, DedicatedSystemMemory and SharedSystemMemory fields). Oh, I've just remembered you can see the numbers in driver info (right-click Desktop -> Screen Resolution -> Advanced Settings - it displays the fields from DXGI_ADAPTER_DESC).

Also, I agree that more VRAM does have its benefits - especially if you like high resolution textures (or a lot of them).

That is actually what happens. GPUs in an SLI configuration mirror resources. What happens with SLI is that each GPU renders a portion of the work (e.g. half a frame or every second frame) and needs fast access to the same resources, which is not yet feasible over PCIe or the SLI bridge (because of bandwidth and latency).
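A very rough sketch of that alternate-frame-rendering idea, purely illustrative (nothing like the real driver): both GPUs hold a full mirrored copy of the resources and frames are dealt out in turn, so the usable pool never grows.

```python
# Toy illustration of SLI alternate-frame rendering: each GPU mirrors the full
# set of resources (so two 2 GB cards still expose only 2 GB of usable VRAM),
# and frames are handed out round-robin.
from dataclasses import dataclass, field

@dataclass
class Gpu:
    name: str
    vram_gb: float
    resources: set = field(default_factory=set)

def upload_textures(gpus, textures):
    for gpu in gpus:                      # every GPU gets its own mirrored copy
        gpu.resources.update(textures)

def render(gpus, frame_count):
    for frame in range(frame_count):
        gpu = gpus[frame % len(gpus)]     # alternate frames between the GPUs
        print(f"frame {frame} rendered on {gpu.name}")

pair = [Gpu("GPU0", 2.0), Gpu("GPU1", 2.0)]
upload_textures(pair, {"terrain", "characters", "skybox"})
render(pair, 4)
print("usable VRAM:", min(g.vram_gb for g in pair), "GB (mirrored, not pooled)")
```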

In my opinion, the article explained the memory requirements of games well. I think when readers read the whole article, they will get a good idea of how much memory they are going to need (depending on the required graphics quality).

FYI, the GPU memory manager usually (some drivers suck) does a good job of relocating less-needed memory to system RAM or disk (same goes for the Windows VM) - e.g. when you are playing a game, it usually moves the DWM stuff to disk/system RAM (when running low on VRAM) and loads it back to VRAM when you alt-tab (from full-screen exclusive mode).
 

Valaska

Distinguished
Apr 7, 2013
Lol, wow, "AMD's feeble attempt at replicating G-Sync" - in other words, exactly replicating it, completely, and it's open source, so it DOES work on desktop. Way to not sound like biased prats, Tom's Hardware.
 

Jaroslav Jandek

Honorable
Jan 13, 2014
G-Sync is a complete high-performance solution including display HW.
What AMD did is hijack a power-saving feature of some displays. The likely implications of that are lower performance and more lag.

 
Sep 22, 2013


A significant portion of the latency in WiFi is caused by packet errors, which are an inherent part of WiFi, not a problem with your router. A modulated signal within a radio wave that is literally running into millions of other radio waves - some on the same frequency, some on harmonic frequencies of that 2.4GHz signal - is unfortunately going to come with some latency. WiFi radios solve this through redundant check digits and layered transmissions, but this means that some things must be sent multiple times to ultimately be read correctly by the receiver, resulting in X ms of delay/latency.

The 5GHz range is intended to alleviate some of this noise, as are the various optional channels on your router. Try playing with these settings. 5GHz is generally going to be a less noisy spectrum, and by playing with the various channel settings you can often find a channel your neighbors are not using, as most people leave this at the default.

You can also try QoS settings, turning WMM off (both on the router and your computer) and playing with signal strength. Note that using the settings that "extend" the range of your router does not always improve your latency; in fact, very often it does the opposite. This is because the radio signal does not always have a direct path to your receiver, or may not "decide" to take that path. This means if you're extending its range, you might actually be sending packets further (figuratively speaking) before they reach your PC.

Also note that a 1ms ping time as reported by a web-based program is NOT your ping time to your router. The ping time is the time it takes to send a packet from your IP ADDRESS to the target and back again. You could feasibly have a 10ms delay to your router, yet still see a ping of 1ms reported by the web-based program, such as a game server. You need to check your local ping time to your local address to see what your network's internal latency is (like panzerknacker did, though I'm curious why his DNS server is the same as his default gateway).
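If you want to measure that local hop yourself, something like this will shell out to the system ping against your gateway (the 192.168.1.1 address is just a placeholder - substitute your own gateway address):

```python
# Quick-and-dirty local latency check: ping the router (default gateway), not a
# game server, to see the LAN/WiFi hop on its own. Gateway IP is a placeholder.
import platform
import subprocess

GATEWAY = "192.168.1.1"  # replace with your router's address (ipconfig / ip route)

count_flag = "-n" if platform.system() == "Windows" else "-c"  # ping count flag differs per OS
subprocess.run(["ping", count_flag, "20", GATEWAY])
```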
 


There is an extremely limited range (3m) with most routers incorporating a 5GHz band. Most devices won't even see it. Can you offer any advice when attempting to use the 5GHz band?
 
Sep 22, 2013


If you're using a modern router with dual-band and a lot of radios, you probably won't have an issue using it. I have my prioritized devices on 5GHz and everything else on 2.4. For example, my printer is on 2.4GHz but my iPad and laptop are on 5GHz. My phones are on 2.4 as well. Basically, I'm divvying up the bandwidth. It does not make a difference so massive that you'll notice it in daily use, but it may give you that few-ms advantage in latency in gaming if it's the only option you have.

As far as advice, get a good gigabit router, dual-band 802.11n. Netgear makes a couple of very solid options such as the R6300. You'll probably pay in the range of $125-200 for a good wifi router but it is certainly worth it for the performance. There are a number of great WiFi options from other manufacturers, but for your money, Netgear is a solid choice. I personally stay away from Linksys/Cisco for home use for a number of reasons, not the least of which is security, but I think D-Link and Netgear are always solid choices. Tom's has also given an award or two to Asus routers. I personally like Netgear's web management interface for ease of use and reliability.
 
I would like to add: steer clear of Belkin routers. I had one and it kept dropping downloads in the middle, on both wireless and wired connections. Took that piece of crap out of service, replaced it with a Netgear, and now no problems at all.
 


I have a netgear WNDR3500 something something that I've never been able to get on the 5GHz band with any device. No device sees it. The SSID is turned on and broadcasting. No wireless device even detects the network. I know other people have had issues with the 5GHz band as well. But since we're debunking myths about graphics cards, I'll refrain from saying anything else about it in this thread.
 

Isaiah4110

Distinguished
Jan 12, 2012


Ummm... Skyrim was released November 2011, so it is just barely over 2 years old... It isn't even halfway to being a "5 year old game" yet.
 

qiplayer

Distinguished
Mar 19, 2011
A few important things to consider regarding performance:
- Scalability: if adding a second GPU, how much more you get (for eventual future upgrades).
- Which games are implemented for Nvidia and which for AMD cards - it does matter! Depends on your tastes and likes.
- Memory size does matter if you want more screens or play with ultra graphics details.
- Memory bandwidth.
- Whether you can unlock the card and change its BIOS. I have the most expensive hardware on the market, and still another BIOS gives +25% more performance. And it's very welcome. Then, for example, the GTX Titan is overvoltable and can get 30-40% more performance just from this plus a good waterblock. The GTX Titan Black Edition won't be overvoltable. So which card has more value?
- Price, though, isn't part of "performance". How can you put price within performance?? Of course the price does matter.
If you wanna get the best out of your card, I suggest informing yourself on forums (o. c. dot net is the best source imho), comparing waterblocks by talking with forum members, and asking them for suggestions (not people who just read reviews) - those are the ones who know the bottleneck (on the GTX Titan the bottleneck is the VRMs, not the core overclockability, so when choosing a WB it matters more that it cools these well than the core). Consider applying the best thermal pads on the market, the Fujipoly ones with a thermal conductivity of 15 W/mK; the standard ones that come with the waterblock will have 5 W/mK or less. So the temperature of the VRAM and VRMs could go down from 105C to 40C. It's a huge difference.
 

Valaska

Distinguished
Apr 7, 2013


Are you kidding? The two produce nearly the EXACT same results, only one doesn't require proprietary hardware kits that cost EXTRA or an upgraded video card... AMD's might not be doing the exact same thing, but it effectively replicates it; DP 1.3 completely makes G-Sync just another CUDA.
 


Jaroslav Jandek

Honorable
Jan 13, 2014
You clearly have no idea what it takes to implement variable refresh rate on the display side... The G-Sync FPGA has 768MB of fast on-board memory to do image processing and buffering (dynamic pixel overdrive algorithms, etc.). That is why it is expensive (that, and the fact that it is an FPGA - in the future, I expect an ASIC solution), and it has to be done this way so that performance and visual quality are excellent (e.g. it must be done to support FreeSync as well)!

We do not know if the results will be exactly the same as you claim - I would actually be surprised if they were - I expect inconsistencies in display manufacturers' implementations. G-Sync is supported by all Kepler-based GPUs. I also do not like that it is proprietary...
 
Windows VRAM consumption: I didn't see it (Part 2?), however I'm pretty certain you'll be incorrect on how that works from the start of your article.

Let's say Windows uses 300MB of VRAM, then I launch a game. It STILL uses 300MB for Windows until I start to run out of VRAM (80% usage?). At that point, it swaps the contents to system RAM, leaving almost all of the VRAM for the game. I did test this myself, and I'm fairly certain this is correct. It's the same way the pagefile works, only that is system RAM to the main HDD/SSD. I suspect this mechanism doesn't work in WINDOWED mode.
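If someone wants to reproduce that test, here's roughly how I'd script it: allocate VRAM in chunks while something else (a game, a video, DWM) already holds some, and watch what the driver reports. This assumes an Nvidia card with PyTorch (CUDA build) and pynvml installed, and it only sketches the experiment; it doesn't prove the 80% threshold.

```python
# Sketch of the experiment described above: grab VRAM in ~256 MB chunks and
# watch the device-wide usage the driver reports. If the memory manager evicts
# other processes' allocations under pressure, the reported total will not
# simply be "previous usage + what this script allocated".
# Assumes an Nvidia GPU with PyTorch (CUDA build) and pynvml installed.
import pynvml
import torch

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
blocks = []
try:
    for step in range(1, 64):
        # 64M float32 values = ~256 MB per block
        blocks.append(torch.empty(64 * 1024 * 1024, dtype=torch.float32, device="cuda"))
        info = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"script holds ~{step * 256} MB, device reports "
              f"{info.used / 2**20:.0f} / {info.total / 2**20:.0f} MiB in use")
except RuntimeError:
    print("allocation failed - VRAM exhausted")
finally:
    pynvml.nvmlShutdown()
```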
 