The Myths Of Graphics Card Performance: Debunked, Part 2

I've always had a beef with GPU RAM utilization: how it's measured and what driver tricks go on in the background. For example, my old GTX 660s never went above 1.5 GB of usage; forum searches suggest a driver trick, since the last 512 MB runs at half speed due to that card's odd memory layout. Upon getting my 7970, with identical settings and loading from the same save game, memory usage shot up to nearly 2 GB. I found the 7970 smoother than the dual 660s in games with high VRAM usage, despite frame rates (measured by Fraps) being a little lower. I'd love to one day see an article, "the be-all and end-all of GPU memory," covering everything.
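As a side note on measurement: tools generally report what the driver has allocated, which isn't necessarily what a game strictly needs. Here's a minimal sketch of one way to read the raw number on Nvidia cards, assuming the third-party pynvml bindings (the nvidia-ml-py package) are installed:

```python
# Minimal sketch: read allocated VRAM on an Nvidia GPU via NVML.
# Assumes the third-party pynvml bindings (nvidia-ml-py) are installed.
# NVML reports what the driver has allocated, not what a game "needs".
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"VRAM used: {mem.used / 2**20:.0f} MiB of {mem.total / 2**20:.0f} MiB")
pynvml.nvmlShutdown()
```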

Another thing: I'd like to see a similar PCIe bandwidth test across a variety of games, including some with PhysX. I don't think Unigine throws much across the bus unless the card runs out of VRAM and has to swap to system memory, which is where I'd expect higher bus and memory speeds to be an advantage.
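For context, here are the theoretical ceilings such a test would run up against, as a back-of-the-envelope sketch (the per-lane figures are the published effective rates for each generation, after encoding overhead):

```python
# Back-of-the-envelope PCIe bandwidth per generation, per direction.
# Effective per-lane rates after encoding (8b/10b for gen 1/2,
# 128b/130b for gen 3): ~250, ~500, and ~985 MB/s respectively.
PER_LANE_MBPS = {"PCIe 1.x": 250, "PCIe 2.0": 500, "PCIe 3.0": 985}

for gen, per_lane in PER_LANE_MBPS.items():
    for lanes in (8, 16):
        print(f"{gen} x{lanes}: ~{per_lane * lanes / 1000:.1f} GB/s")
```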
 
Implying that an i7-4770K is little better than an i7-950 is just dead wrong for quite a number of games.

There are plenty of real-world gaming benchmarks that prove this so I'm surprised you made such a glaring mistake. Using a synthetic benchmark is not a good idea either.

Frankly, I found the article very technically heavy where it wasn't necessary, like the PCIe section, while it glossed over other things very quickly. I know a lot about computers, so maybe I'm not the guy to ask, but it felt to me like a non-PC guy wouldn't get the simplified, straightforward information he wanted.
 

eldragon0

Honorable
Oct 8, 2013
142
0
10,690
If you're going to label your article "graphics performance myths," please don't limit it to just gaming. It's a well-made and well-researched article, but as Photonboy touched on, the 4770K and 950 are about as similar as night and day. Try using that comparison for graphical development or design and you'll get laughed off the site. I'd be willing to say the 4770K's rendering capabilities are multiples faster at those clock speeds.
 


Even if Tom's Hardware really did its own test, it wouldn't be very useful, because their test setup can't represent the millions of different PC configurations out there. You can see one set of drivers working just fine on one setup and totally broken on another, even with the same GPU in use. Even if TH presented its findings, you'd most likely see people challenge the results whenever they didn't reflect their own experience. In the end, the thread would just turn into a flame-war mess.



This has been discussed a lot on other tech forum sites, and the general consensus is that there actually isn't much difference between the two. I've only heard that in-game colors on AMD cards can be a bit more saturated than on Nvidia cards, which some people take as "better image quality."
 
Just something of note: you don't necessarily need Ivy Bridge-E to get PCIe 3.0 bandwidth. Sandy Bridge-E owners with certain motherboards can run PCIe 3.0 with Nvidia cards (just as you can with AMD cards). I've been running the Nvidia X79 patch and getting PCIe gen 3 on my P9X79 Pro with a 3930K and GTX 980.
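If you want to confirm the patch actually took, here's a minimal sketch using the same pynvml bindings mentioned earlier in the thread; note that the link can downshift at idle for power saving, so check it while the GPU is under load:

```python
# Minimal sketch: report the currently negotiated PCIe link on an
# Nvidia card via NVML (assumes the third-party pynvml bindings).
# The link often trains down at idle, so sample this under load.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
gen = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(handle)
width = pynvml.nvmlDeviceGetCurrPcieLinkWidth(handle)
print(f"negotiated link: PCIe gen {gen} x{width}")
pynvml.nvmlShutdown()
```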
 
"Smaller cards fit into longer slots (for instance, a x8 or x1 card into a x16 slot), but larger cards do not fit into shorter slots (a x16 card into a x1 slot, for example). "

Not sure that this is correct: aren't some slots made open-ended, so that a longer card can be seated and use all of the slot's lanes, even though not all of the card's lanes are connected?
 

chaospower

Distinguished
Mar 8, 2013
67
2
18,640
Implying that an i7-4770K is little better than an i7-950 is just dead wrong for quite a number of games.

There are plenty of real-world gaming benchmarks that prove this so I'm surprised you made such a glaring mistake. Using a synthetic benchmark is not a good idea either.

Frankly, I found the article very technically heavy where it wasn't necessary, like the PCIe section, while it glossed over other things very quickly. I know a lot about computers, so maybe I'm not the guy to ask, but it felt to me like a non-PC guy wouldn't get the simplified, straightforward information he wanted.

You're wrong. The old i7s had much slower clocks; that's why their performance wasn't as good as the newer ones, and many benchmarks would confirm that they are indeed slower. But when clocked similarly, the difference is indeed incredibly small. The author of this article knows what he's talking about.
And here's proof (it's a 3770K and not a 4770K, but I'm sure most people would agree the difference between those two isn't great):
http://alienbabeltech.com/main/ivy-bridge-3770k-gaming-results-vs-core-i7-920-at-4-2ghz/5/
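The rough arithmetic behind that claim, as a sketch (the IPC ratio below is an illustrative assumption for the sake of the example, not a measurement):

```python
# Rough model: game performance ~ IPC x clock, when not GPU-bound.
# The IPC ratio here is an illustrative assumption, not a measurement.
ipc_ratio = 1.2        # assumed Ivy Bridge per-clock gain over Nehalem
clock_920_oc = 4.2     # GHz, overclocked i7-920 (as in the linked test)
clock_3770k = 3.5      # GHz, stock i7-3770K

relative = ipc_ratio * clock_3770k / clock_920_oc
print(f"estimated 3770K / 920@4.2GHz ratio: {relative:.2f}x")
```

With assumptions in that ballpark, the two land within a few percent of each other, which is consistent with the linked results.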
 

boju

Titan
Ambassador


To my understanding, Photonboy meant that at stock speeds a 950 @ 3.06 GHz would have a hard time keeping up with the later generations, which have higher clocks along with other improvements. A genuine analyst wouldn't consider overclocking in this case, as it's too inconsistent to set a standard. Photonboy would be well aware of the potential of overclocking.

I love your link, though; I hadn't seen one comparing an overclocked 920. Since the first-generation i7s, stock clocks have risen by more than a GHz. My 920 is at 3.9 GHz, not quite the 4.2 GHz in the link, though I'd imagine it's not far off in performance, and I'm glad the early generations, when overclocked to that level, are still making a statement :)

 
Myth: stuttering...

"One big card is better than two because it prevents stuttering"... dead wrong; two cards give better FPS. I have been using CrossFire for the last 5 years and I have never had stuttering issues.
 

Dantte

Distinguished
Jul 15, 2011
173
60
18,760
Your cable chart is wrong: HDMI 1.4 DOES support 4K; it just does not support 4K @ 60 Hz. That is what HDMI 2.0 addresses!

"HDMI 1.4 was released on May 28, 2009, and the first HDMI 1.4 products were available in the second half of 2009. HDMI 1.4 increases the maximum resolution to 4K × 2K, i.e. 4096×2160 at 24 Hz"
 

Math Geek

Titan
Ambassador
I always wondered about PCIe bandwidth and how much of it is actually used. Theoretical bandwidth doubling each generation left me wondering whether the cards were keeping up and using it, or whether the technology was simply a "because we can" kind of thing. I know this is not the last word on the subject, but I won't feel as apprehensive using PCIe 2.0 motherboards for the near future; clearly this is ample bandwidth for the average build and average user.
 

SheaK

Reputable
May 8, 2014
3
0
4,510
I'm leaving this comment mostly because I'm sure this is an uncommon bit of information, and I wanted to throw out my community contribution.

I have a Xeon workstation based on two E5-2696 v2 chips (24 cores, 48 threads) with 128 GB of RAM, running a 2 TB Mushkin Scorpion Deluxe PCIe SSD and, currently, a single 295X2 at 2560x1600 (the system is liquid-cooled).

With regard to Mantle: I've noticed a massive difference, and with core parking disabled Mantle gives as much as a 15% increase in FPS. It appears Mantle can consume upwards of 24 threads during normal use, and I've briefly seen the CPU hit 97% across 48 threads (!!!) during loads.
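For anyone wanting to watch this themselves, here's a minimal sketch that samples per-core load while a game runs, assuming the third-party psutil package is installed (pip install psutil):

```python
# Minimal sketch: log per-core CPU load to see how many threads a
# low-level API like Mantle actually keeps busy during gameplay.
# Assumes the third-party psutil package is installed.
import psutil

for _ in range(10):  # sample roughly once a second for ~10 seconds
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    busy = sum(1 for load in per_core if load > 50.0)
    print(f"cores above 50% load: {busy}/{len(per_core)}")
```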

As this system has 80 PCIe lanes available, a second 295X2 will probably give more linear CrossFire scaling than it would on systems with bandwidth bottlenecks.

I'm passing the information along since not many workstations like this get used both for work and for gaming.
 

Eggz

Distinguished
OMG! So glad you made this series. There's a lot of assumption-based randomness echoing throughout the interwebz, and this does a great job of addressing graphics-related questions in a disciplined way.
 

bygbyron3

Distinguished
Feb 27, 2011
125
0
18,710
Some (very few) TVs can display a 120 Hz signal from a PC without any interpolation. The refresh rate has to be overridden or set as a custom resolution.
 