The Myths Of Graphics Card Performance: Debunked, Part 2


Bill_VA1

Distinguished
Sep 9, 2010
11
0
18,510
Has Tom's ever written an article about video cards in non-gaming PC systems? I'd like to know more about issues like: How much memory does my GPU really need? Clarity of text? Nvidia vs. AMD? Refresh rates for normal use (not games)?

I'm a web developer and I use mostly Adobe Creative Suite applications, play occasional movies while I work, and general computer use stuff. Usually, my PC budget doesn't have much of a limit (last one was $3500).
 

Eggz

Distinguished
There's also some lack of clarity about the distinction between the amount of VRAM an application or game CAN USE versus what it NEEDS. I've seen several graphs for new titles like Battlefield 4 at 4K on new graphics cards where the game uses more than 5 GB of VRAM on cards that have it (e.g. the Titan Black with 6 GB). But the same articles show a negligible difference in frame rate - less than 5 fps - when Battlefield 4 runs at 4K with the same settings on a similar card with less than 5 GB of VRAM (e.g. the 780 Ti with only 3 GB).

What's up with that? Does the massive memory bandwidth of the 780 Ti create throughput fast enough for the VRAM to fill and flush such that a 3 GB buffer suffices, even though a larger buffer could be used?

Perhaps there's a distinction similar to the one in system RAM, where some RAM is in active use while other RAM holds information in cache for easy access. That happens in systems with lots of RAM, but systems with less RAM run fine without so much dedicated to cache. I'm talking about 8 GB vs. 16 GB or 32 GB, not situations in which there's insufficient system RAM.

It's mysterious to me, but it's clearly occurring. For instance, Photoshop CAN USE 64GB of RAM fairly easily, but the same task will run at roughly the same speed on the same system if you simply pull out a couple RAM sticks and drop things down to 32GB. Perhaps the same things are going on in system RAM and VRAM - don't know. I also don't know how to check.

Perhaps Tom's can shed some light on this issue.
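In case anyone wants to poke at this themselves, here's a rough sketch of how I'd watch VRAM usage over time on an Nvidia card. It just polls nvidia-smi once a second, so it assumes you have nvidia-smi installed and on the PATH (my own choice of tool, not something from the article):

# Rough sketch: poll VRAM usage once per second via nvidia-smi.
# Assumes an Nvidia card with the nvidia-smi utility available on the PATH.
import subprocess
import time

def vram_used_mib(gpu_index=0):
    """Return used VRAM in MiB as reported by nvidia-smi for one GPU."""
    out = subprocess.check_output([
        "nvidia-smi",
        "--id={}".format(gpu_index),
        "--query-gpu=memory.used",
        "--format=csv,noheader,nounits",
    ])
    return int(out.decode().strip())

if __name__ == "__main__":
    # Leave this running while the game (or Photoshop) works in another window.
    while True:
        print("{}  {} MiB VRAM in use".format(time.strftime("%H:%M:%S"),
                                              vram_used_mib()))
        time.sleep(1)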
 

mechan

Distinguished
Jan 25, 2009
90
0
18,640


Amount of graphics memory, in general, does not affect frames per second. It affects the resolution of textures you can load and the level of antialiasing you can run. In two words, it affects image quality. See Part 1 of the Myths series for details.



Some algorithms can make use of more memory, but they aren't always noticeably faster. Typically, until you end up page-swapping to disk (a massive slowdown), Photoshop-style tasks depend exclusively on CPU (or GPU, if accelerated) speed.
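If you want to check whether you're anywhere near that page-swapping cliff, here's a minimal sketch. It assumes the third-party psutil Python package is installed, which is my own choice of tool rather than anything from the article:

# Minimal sketch: report RAM and swap usage, since swap activity is the
# point at which total RAM size starts to hurt Photoshop-style workloads.
# Assumes the third-party psutil package is installed (pip install psutil).
import psutil

vm = psutil.virtual_memory()
sm = psutil.swap_memory()

print("RAM:  {:.1f} / {:.1f} GiB used".format(vm.used / 2**30, vm.total / 2**30))
print("Swap: {:.1f} / {:.1f} GiB used".format(sm.used / 2**30, sm.total / 2**30))

# Rule of thumb from the paragraph above: if swap stays near zero while the
# workload runs, more RAM won't speed it up - CPU (or GPU) speed will.
if sm.used < 100 * 2**20:
    print("Barely any swap in use - extra RAM likely won't help this workload.")
else:
    print("Swap is in use - this is the case where more RAM actually pays off.")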



We try :)

Filippo

 

mechan

Distinguished
Jan 25, 2009
90
0
18,640


If the tests I run with the 4770K are any indication, unless you're running CPU-bound (e.g., Skyrim), all you'd get is 1-2 FPS less, assuming the CPU frequency delta is 500-600 MHz versus whichever other system you are comparing it to.

Filippo
 

mechan

Distinguished
Jan 25, 2009
90
0
18,640


Very true, and well-summarized.

Filippo
 

mechan

Distinguished
Jan 25, 2009
90
0
18,640


If you picked it up to play games, per what we say in the article, it was probably among the very best "bang for your buck" choices.

Don writes a separate article on best video cards for your money as well. You may want to check that out.

Filippo
 

dovah-chan

Honorable


http://www.tomshardware.com/reviews/firepro-w9100-performance,3810.html
 

mechan

Distinguished
Jan 25, 2009
90
0
18,640


After trying out the Sharp PN-K321 at 4K@30Hz, and essentially wanting to shoot myself after a couple of minutes of use, I must have just subconsciously refused to accept that "30Hz" meant "supported" for any given resolution on a PC display.

From a purely technical standpoint though, thou art correct. We should have specified the @60Hz caveat.

Filippo
 

mechan

Distinguished
Jan 25, 2009
90
0
18,640


I'd agree with your statements broadly.

[Deleted - misread the comment above :pt1cable:]

Filippo
 

Eggz

Distinguished


Yes, I read the whole article. So that much is clear, except that VRAM will affect FPS if there's not enough of it (e.g. a hypothetical GTX 980 with only 512 MB VRAM would drop frames like crazy with high textures and AA even at 1080p). But that's beside the point. I was only talking about high VRAM-usage scenarios that have high textures and AA at high resolutions. Below is a graph from Digital Storm showing that BF4 on Ultra at 4K with 4x AA can use more than 3 GB of VRAM when it's running on Titan (see blue bar labeled GTX Titan 4x AA), and that it will max out VRAM on the 780 ti (see blue bar labeled GTX 780ti 4x AA):

[Image: bf44kvram.jpg - BF4 at 4K, VRAM usage by card]


At the same time, a single 780 ti performs better than a single Titan when it comes to FPS, even though the same settings are used (Ultra, 4K, 4x AA):

[Image: bf44kfps.jpg - BF4 at 4K, average FPS by card]


So the additional VRAM usage was apparently unnecessary. That shows that, even though a program can use more VRAM than a graphics card has, there is some amount of VRAM the program uses that it didn't actually need in order to perform without sacrificing textures, resolution, or AA.

--->M-Y-S-T-E-R-Y<---



Photoshop will page swap without enough RAM. The point was to show that there's a certain amount of RAM - more than what it takes to avoid page swapping, but less than what Photoshop will actually use - that doesn't result in any performance drop. It seems analogous to what happened with BF4 above.




And you do a good job, which is why I ask these nuanced questions. Where else could I go besides here? :)
 

mechan

Distinguished
Jan 25, 2009
90
0
18,640


See Part 1, and the sheer amount of memory that the Windows Desktop Window Manager uses (up to 410 MB on Win 8.1 at 4K). Those assets would normally stay in graphics memory, but can be paged out while gaming full-screen with little or no issue. That alone could explain what you see in your charts - essentially, the 780 Ti pages out desktop manager assets, while the Titan doesn't need to.

It wouldn't make a difference in FPS, but it would explain the behavior you describe. Might not be -the- answer, but it's certainly a possible hypothesis.

What it does NOT explain is the Titan's higher usage with no AA involved, which honestly makes me question those measurements - I ran similar measurements across cards and saw nothing like a 400 MB usage difference. Something is fishy with those tests you posted ...

Filippo






Thank you sir or ma'am!

 

none12345

Distinguished
Apr 27, 2013
431
2
18,785
Error correction: Page 2, PCIe bandwidth chart. The 4x PCIe 3.0 red bar says 3490; it should be 3940.

-----

Thank you for this article and its predecessor. Well written!

Good to get stuff like this out; I wish a lot of it would become more common knowledge. It would mean dealing with fewer people who think xxx is faster than yyy when it's not in any meaningful sense, and it would keep in line the companies that try to dupe people into believing one product is significantly faster than another when it's not in any meaningful way.

Speaking of AA and 2160p vs. 1080p: I would have liked to see some benchmarks showing the difference between 4x AA at 1080p and AA off at 2160p. You touched on the issue - that AA isn't needed anywhere near as much when pixel density goes up - but I would have liked to see some metrics.
 

mechan

Distinguished
Jan 25, 2009
90
0
18,640


Good catch on the chart. I'll pass that on to production.

On the data comparison you suggest ... I have plenty of that data on hand (gosh, I ran the Valley benchmark until its background music started creeping into my dreams). I'm not really thinking of publishing it, though - those are really two entirely different worlds: 2160p renders four times as many individual pixels as 1080p. 4x MSAA does NOT equate to rendering 4x as many pixels (SSAA does, and that algorithm is deprecated for obvious reasons).

Going from 1080p to 2160p, you can expect - if you are not CPU-limited at 1080p - a straight FPS drop by a factor of four (so e.g., 120 -> 30). Many machines powerful enough to drive 2160p displays WILL be CPU-limited at 1080p though, so the real factor will be lower ... but then you are comparing two different bottlenecks - a comparison of limited use.

By contrast, 4x MSAA at 1080p - or at 2160p for that matter - assuming graphics memory is enough to support it (Skyrim with the high-res texture pack, for instance, will crash a 2 GB card at 4x MSAA @ 2160p - it certainly crashes my GTX 690), will have roughly a 30% impact. Again, that assumes no CPU limitation. With a CPU limitation, you simply won't see a difference (unless the GPU becomes the bottleneck, in which case you'll see a <30% drop).

Bottom line, it's too convoluted of a comparison to explain in print ...
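If you want to play with the rough numbers yourself, though, here's a back-of-the-envelope sketch of those two scaling rules. The 120 FPS baseline is purely hypothetical, and the factor-of-four and ~30% figures are just the rules of thumb above, not measured data:

# Back-of-the-envelope FPS scaling, assuming a purely GPU-limited workload.
# Rules of thumb from above: 2160p renders 4x the pixels of 1080p, and
# 4x MSAA costs roughly 30% on top of a given resolution (unlike SSAA).

def estimate_fps(base_fps_1080p, resolution="1080p", msaa_4x=False):
    """Estimate FPS from a 1080p, no-AA baseline using the simple scaling rules."""
    fps = float(base_fps_1080p)
    if resolution == "2160p":
        fps /= 4.0        # four times as many pixels to render
    if msaa_4x:
        fps *= 0.70       # ~30% hit for 4x MSAA
    return fps

baseline = 120.0          # hypothetical 1080p, no-AA figure for illustration
print("1080p, no AA:   {:5.1f} FPS".format(estimate_fps(baseline)))
print("1080p, 4x MSAA: {:5.1f} FPS".format(estimate_fps(baseline, msaa_4x=True)))
print("2160p, no AA:   {:5.1f} FPS".format(estimate_fps(baseline, "2160p")))
print("2160p, 4x MSAA: {:5.1f} FPS".format(estimate_fps(baseline, "2160p", True)))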

Filippo



 

none12345

Distinguished
Apr 27, 2013
431
2
18,785
I haven't noticed any articles going into it in depth, is all. I wasn't aware that modern 4x AA isn't rendering all four pixels; I guess it's been a while since I cared to explore AA methods - quite a while, actually. If the 30% figure works as an estimate, that would be 60 fps at 1080p with no AA, or 42 fps with 4x MSAA, vs. 15 fps at 4K with no AA, more or less.

So about 2.8x the power needed instead of 4x. Trying to estimate it and seeing the actual benchmarks are two different things, though! It's something I'd like to see explored, because AA should become pretty meaningless once we go from ~90 dpi to ~180 dpi at normal desktop monitor viewing distances.

Though it's not something you can show someone on a 1080p monitor - you can't show them what 2160p without AA looks like versus 1080p with AA.

I can't wait for an adaptive refresh rate scheme to become the standard, though. It will be nice to finally say goodbye to tearing and stuttering. I just hope Nvidia's expensive proprietary solution won't win out - or if it does, that they get the cost WAY down and open it up to everyone.
 
"In my experience, of the hardware you can buy, displays tend to last the longest. I bought a 30" Apple Cinema Display in 2004, brought it across the ocean with me and I'm still happily using it as a wonderful secondary display 10 years later. Thus, my personal mantra is to upgrade your monitor rarely, but spare no expense on it when you do"

FINALLY someone agrees, I am so sick of people SQUEEZING out their budget to get a 970 (or whatever is being thrown around as the best GPU buy at the time), and then saying "well, I got about $100 left over, I'll take that average quality 1080p panel"
 

knowom

Distinguished
Jan 28, 2006
782
0
18,990
LOD bias is probably my favorite GPU tweak. I love making the image way sharper - shimmering be damned, it's better than a blurry-feeling image.
 

mechan

Distinguished
Jan 25, 2009
90
0
18,640


Or are modern consoles PC ports?

:lol:
 

Iluvnvidia

Reputable
Nov 12, 2014
1
0
4,510
A suggestion for your future tests would be to add a few really low-end and old graphics cards (both desktop and laptop). This would really put the respective architectures, peripherals, and TDP limits to the test and bust the myths.
Also, you could show how the respective drivers have evolved over time and extracted more performance from the same card. My GTX 550 Ti has shown an average improvement of 31% to date just because of drivers. Mind you, such an improvement without an overclock or architectural change makes me wonder what simple software optimization could do for performance.
 


I used to have CrossFire 6850s and they stuttered like crazy in certain games; sometimes using one card gave lower but more consistent fps. Also, comparing two cards in SLI/CrossFire against a single card of the same capability, the single card will always have lower frame-time variance (aka stuttering). This has been measured and proven with frame-time benchmarks. The problem was a lot worse before AMD addressed it in their drivers, but it's still not as good as a comparable single card. Maybe you should do some reading and get your facts right: http://www.pcper.com/reviews/Graphics-Cards/Frame-Rating-AMD-Radeon-R9-290X-CrossFire-and-4K-Preview-Testing . There are plenty of other similar articles confirming the results.
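For anyone who wants to check their own logs, here's a rough sketch that summarizes stutter from a frame-time capture. I'm assuming a plain text file with one frame time in milliseconds per line, so adjust the parsing for whatever FRAPS/FCAT actually spits out:

# Rough sketch: summarize stutter from a frame-time log.
# Assumed input: a text file with one frame time in milliseconds per line
# (adjust the parsing for the real FRAPS/FCAT CSV headers).
import statistics
import sys

def summarize(path):
    with open(path) as f:
        frame_ms = sorted(float(line) for line in f if line.strip())
    avg_ms = statistics.mean(frame_ms)
    p99_ms = frame_ms[int(0.99 * (len(frame_ms) - 1))]
    print("Frames:              {}".format(len(frame_ms)))
    print("Average FPS:         {:.1f}".format(1000.0 / avg_ms))
    print("Average frame time:  {:.2f} ms".format(avg_ms))
    print("99th pct frame time: {:.2f} ms (a big gap vs. the average = stutter)".format(p99_ms))
    print("Frame time stdev:    {:.2f} ms".format(statistics.stdev(frame_ms)))

if __name__ == "__main__":
    summarize(sys.argv[1])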
 


I suspect the Nvidia driver doesn't allow the VRAM buffer to fill 100%, as I've seen on my old GTX 660s. It probably pages off to system RAM, but how do you monitor that? Or it just keeps old data around on the Titan because it has room, whereas the 780 Ti's buffer gets flushed of some unnecessary data.
 

alidan

Splendid
Aug 5, 2009
5,303
0
25,780
Since email notifications on replies broke and don't seem to work, I don't follow comments often, but I will here.

Video games are something many people record and share gameplay of. Has Tom's ever done, or would it ever do, an in-depth recording breakdown, from optimal setup to editing the video in consumer or pro applications, with rendering on CPU only, GPU only, or mixed? I would love to see this, as it's an area I'm interested in but can't test myself: the software doesn't have legitimate demos, I can't buy a $600+ GPU just to test rendering on it, and likewise I can't get four different CPUs to see which one would serve me best.
 

qlum

Distinguished
Aug 13, 2013
195
0
18,690
On the matter of CPU performance, I believe getting a good CPU is still worth it - or rather, one with good single-threaded performance, since CPU-bottlenecked games like Planetside 2 don't benefit as much from multithreading. While a GPU that is on the weak side can often be compensated for by lowering the settings, a CPU bottleneck doesn't have that luxury.

That being said, my 3570K will probably not fall out of favor anytime soon, and I may need to OC it to 4.7 GHz for Planetside, but there isn't much on the market that would improve my experience there.

Intel really has slowed down a lot after Sandy Bridge.
 