
How Much RAM Does Your Graphics Card Really Need?

Status
Not open for further replies.
The only game really benefiting from much more graphics memory seems to be GTA 4, and even that should be noted as overrated. The game likes to load a lot into graphics memory, but you can easily overcome this by manually changing the settings, and that way you can run the game at much higher detail. So you don't really need the 1GB version. Better not to waste the cash: go either for two 4870 512MB cards or wait for the availability of the Radeon 5850. I recommend the latter. I don't know if it will ultimately outperform a 4870 CF, but it has the goods for DX11 and OpenCL as well as the DX11 tessellation standard, and its power usage is actually lower than that of the 4870 at idle. It even beats it by 9 watts under load.
 
Good job with this article. I'm from Poland, and even with my poor English I understood a lot (all of it). For me it mostly confirmed facts I already knew, but that doesn't make it useless. It's a good reference.

One thing I have to point out: not too many technical terms or scientific results, and a bit too much casual talk.

Have a nice day.
 
[citation][nom]radiowars[/nom]Great article guys. I've debated this many times with friends- good to see a definite result. Heh, GTX 275 w/ 896 works well @ 1680x1050 for me. I doubt I'd need anything higher than that for my resolution.[/citation]

True, true. I'm running it at 1080p and it works pretty well. I don't go for AA much.
 
If you are using a single card with two monitors (multiple-monitor support), does that mean you automatically need double the memory?
 
One problem with this article: all the tests are done on Vista 64-bit. I'll bet that on a 32-bit OS, the 512MB card would actually outperform the 1GB card at most any resolution. That's because the address space mapped for VRAM is effectively deducted from your usable system RAM. Also, given the performance hit from Vista, I'm sure that WinXP 32-bit with 512MB of VRAM would have smoked all the numbers in this article.

Someday, 64-bit will absolutely be the way to go. But right now, it's a neat technology that brings no benefits. Ditto for 1GB of VRAM.
 
[citation][nom]Fungo Batte[/nom]Someday, 64-bit will absolutely be the way to go. But right now, it's a neat technology that brings no benefits. Ditto for 1GB VRAM.[/citation]

Anyone who buys Win 7 32-bit at this point is either ignorant or stupid, so 64-bit isn't for 'someday' but for 'today' (or, well, Thursday, but same thing).
 
Off topic but may be of interest.

There is a deluge of requests around the geek forums about what graphics card to buy, but never have I seen a requester specify whether they plan an always-on PC versus one that's "on only when in use".

A$0.157/kWh, Sydney, Australia, Oct '09 (A$ = ~0.92 US$ now)

usa prices link:

http://www.eia.doe.gov/cneaf/electricity/epm/table5_6_a.html
(so Sydney prices are very similar to California: 15.29 US¢/kWh)

so:

In an always-on PC, consider a graphics card that draws 10 watts less at IDLE than an alternative card.
(My logic here: if you're having fun at FULL LOAD, then who cares what the load power draw/cost is? In most cases that's a small portion of the week and can logically be ignored, IF, and this is a big IF, your power-save/sleep settings are set up and working correctly.)


A$0.157/kWh ÷ 1000 × 10 W = A$0.00157 per hour (substitute your local cost, which is always increasing, negating the bean counters' net-present-worth objections)

× 24 × 365 hours

= A$13.75 p.a. for each extra 10 W of idle draw (I hope the math is right)

If you use air conditioning all year, you can theoretically double this cost.

If, however, you use electric bar radiators for heating all year, then I'm afraid, my dear Watson, the "elementary" laws of physics dictate that you waste no further time on this argument. It does not concern you, except to say that you are in the enviable position of soon being able to buy a formerly high-end graphics card (I'm thinking dual-GPU Nvidia here) for about the cost of a decent electric radiator, and getting quite decent framerates thrown in free with your heating bill.

Using the specs below and the prices above, an HD 4870 (90 W idle) costs US$84.39 more to idle all year in California than a 5850/5870 (27 W idle) at current prices. In about 18 months, the better card should repay the premium paid for it.
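The arithmetic above can be sketched in a few lines; this is just a rough sketch of the post's own calculation, and the function name and tariff figures are only the examples used here:

```python
def annual_idle_cost(idle_watts, price_per_kwh):
    """Cost of idling a component 24 hours a day for one year."""
    hours_per_year = 24 * 365                    # 8760 hours
    kwh_per_year = idle_watts / 1000 * hours_per_year
    return kwh_per_year * price_per_kwh

# Sydney example: A$0.157/kWh, each extra 10 W of idle draw
print(round(annual_idle_cost(10, 0.157), 2))        # ~A$13.75 per year

# California example: HD 4870 (90 W idle) vs 5850/5870 (27 W idle)
print(round(annual_idle_cost(90 - 27, 0.1529), 2))  # ~US$84 per year
```

Double the figures if you air-condition all year, as noted above.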

Hope this helps some of you cost justify your preferred card to the powers that be.



Some ATI card idle power specs:

ATI Radeon HD 5870 / 5850: 27 W
ATI Radeon HD 4870: 90 W
Radeon HD 5770: 18 W
Radeon HD 5750: 16 W

Some Nvidia card idle power specs:

NVIDIA GeForce GTX 280: 30 W
Gigabyte GeForce 9800 GX2 GV-NX98X1GHI-B: 85 W
ZOTAC GeForce 9800 GTX AMP! Edition ZT-98XES2P-FCP: 50 W
FOXCONN GeForce 9800 GTX Standard OC Edition 9800GTX-512N: 48 W
ZOTAC GeForce 9800 GTX 512MB ZT-98XES2P-FSP: 53 W
MSI NX8800GTX-T2D768E-HD OC GeForce 8800 GTX: 76 W
ZOTAC GeForce 8800 GT 512MB AMP! Edition ZT-88TES3P-FCP: 33 W
Palit GeForce 9600 GT 1GB Sonic NE/960TSX0202: 30 W
FOXCONN GeForce 8800 GTS FV-N88SMBD2-OD: 59 W
 
Great article. I want to upgrade my graphics. I have an 8800 GTS 512 and a 1920x1200 monitor. Do you guys think I should just upgrade to the latest card I can afford, or buy another 8800 to finish out the SLI pair? The 8800 costs about $190 new.
 
That's a very objective analysis, which is important. However, when I upgraded from two 8800 GTs in SLI to a single 5870 with 1 GB, the difference was quite huge, even when I turn the settings up and only get 16 FPS in Crysis: no hitching, a lot less stutter. Just because the FPS counter reads the same number doesn't mean it's the same quality of gameplay.

You needed to add your subjective analysis as well as the objective, along with your SLI tests.
 
Kronos, I think the 5850 and 5870 are HUGE upgrades from an 8800 GTS. I had two 8800 GT 512s in SLI and upgraded to a 5870. Beyond the frame-rate difference (which is a lot), the card is just much better. Say I have settings set to where I achieve around 15 FPS on my SLIed 8800s, then set things (much higher) so my 5870 also gets around 15 FPS; this is in Crysis, by the way. The 5870 has little to no input lag, very little hitching, and no stuttering. It's a completely different game.

I can't tell you that you should upgrade, but that was my experience. I think a lot of it comes down to the 1 GB of RAM over 512 MB.

Keep in mind, if you SLI two 512 MB cards, you're still only effectively getting 512 MB of video RAM, because the RAM mirrors itself across both cards.
 
[citation][nom]neiroatopelcc[/nom]Anyone who buys a Win 7 32bit is either ignorant or stupid at this point, so 64 bit isn't for 'some day' but for 'today' - or well thursday, but same thing.[/citation]

Crysis in 64-bit mode smokes Crysis in 32-bit mode. With the greater use of RAM you again see a lot less hitching, stuttering, and swap-file activity (which is what causes the hitching and stuttering). So even if the FPS is only 2 or 3 frames apart, the overall experience in 32-bit is worse.

THIS IS WHY YOU NEED A SUBJECTIVE REVIEW. As well as an objective one.
 
Wow... the image examples for the two settings are truly appreciated.

Since perhaps all gamers strive to achieve realism in the world in which they play, it makes the most sense to obtain the equipment that will get us there. However, financial restrictions, like mine (5 kids and the B-word: Budget), have a direct impact on what is on my wish list.

I do not mind being behind the curve by about 8 months. For me, the price drops low enough not only to purchase, but to use the card and then sell it again with the maximum amount of resale value.

I never did get to try these cards, and perhaps I may have to pass on them, due to the impact that DX11 will have this year.

 
The problem with realism is one that a graphics card can't solve. The biggest problem, IMO, that is still left unattended is the fact that the human eye only sees sharp images exactly where you're focusing. In a video game, everything is sharp, or only the middle part is sharp, but it never reacts true to life with the blur, so it never feels sharp. I suppose goggles could solve that issue, but as of yet I've not heard of any that do (they'd have to track the direction the pupil is pointing).
The second biggest realism issue is destructible objects/environments. We've gotten somewhat closer with GPU physics calculations, but given patents and the fact that a destructible environment dramatically alters level and plot design, it's unrealistic to expect anything plot-related to be destructible unless the designers meant for you to blow up a door, etc. Imagine playing an NBA match and Dirk shattering the basketball court, or a fan being injured by a flying stone in a racing game? They might have to stop the event, and that's not good in a game of that sort. But it would be a cool addition to realism! Imagine driving in a rally game and finding out that the exhaust you lost caused two other cars not to finish because they hit it?
THAT is a realism issue we won't see solved any time soon.

As for DX11, we'll see the benefit when it's there, but I don't expect a DX11 card to be a must-have until 2011. As long as you have a 4830 or better, you can play any game on the market at reasonable quality.
 


Agreed. Plus, texture packs can eat a lot of VRAM on their own, to the point where 512 MB was recommended and in some cases even more. I remember one test where the game would use over 600 MB of the frame buffer before other costs.
 
Regardless of the size of the frame buffer, it gets used, and what doesn't fit in the frame buffer usually ends up in virtual memory, since Windows pushes out whatever is low priority or isn't loaded very quickly while reserving system RAM for the game engine and other programs. If the core can't keep up, then the whole brick isn't good enough, depending on the game; but if the FPS is already good, or isn't what the player is after, then quality argues in favor of a larger frame buffer. I've still got the taste of not-enough-VRAM from the Voodoo days, when the cards lacked the DiME feature, so texture as well as frame data had to sit in the same buffer, unlike other cards of that generation and current cards.
 


I thought a game would simply crash if you exceeded the limits of your frame buffer? That seems to be what happened to me when I replaced the 768 MB 8800 GTX with a 512 MB 4870.
 


Not having enough simply limits features and hurts performance, but for some, quality beats FPS. I'd rather have it be a slug at 20 FPS and look wonderful than run at 60 FPS and look like crap.
 


Try any driving game at a sluggish frame rate and you'll realize that smooth is more important.
 