Nvidia Fermi GF100 Benchmarks (GTX470 & GTX480)

http://www.tomshardware.com/reviews/geforce-gtx-480,2585.html


"Crysis is perhaps the closest thing to a synthetic in our real-world suite. After all, it’s two and a half years old. Nevertheless, it’s still one of the most demanding titles we can bring to bear against a modern graphics subsystem. Optimized for DirectX 10, older cards like ATI’s Radeon HD 4870 X2 are still capable of putting up a fight in Crysis.

It should come as no shock that the Radeon HD 5970 clinches a first-place finish in all three resolutions. All three of the Radeon HD 5000-series boards we’re testing demonstrate modest performance hits with anti-aliasing applied, with the exception of the dual-GPU 5970 at 2560x1600, which falls off rapidly.

Nvidia’s new GeForce GTX 480 starts off strong, roughly matching the performance of the company’s GeForce GTX 295, but is slowly passed by the previous-gen flagship. Throughout testing, the GTX 480 does maintain better anti-aliased performance, though. Meanwhile, Nvidia’s GeForce GTX 470 is generally outperformed by the Radeon HD 5850, winning only at 2560x1600 with AA applied (though it’s an unplayable configuration, anyway)"




"We’ve long considered Call of Duty to be a processor-bound title, since its graphics aren’t terribly demanding (similar to Left 4 Dead in that way). However, with a Core i7-980X under the hood, there’s ample room for these cards to breathe a bit.

Nvidia’s GeForce GTX 480 takes an early lead, but drops a position with each successive resolution increase, eventually landing in third place at 2560x1600 behind ATI’s Radeon HD 5970 and its own GeForce GTX 295. Still, that’s an impressive showing in light of the previous metric that might have suggested otherwise. Right out of the gate, GTX 480 looks like more of a contender for AMD's Radeon HD 5970 than the single-GPU 5870.

Perhaps the most compelling performer is the GeForce GTX 470, though, which goes heads-up against the Radeon HD 5870, losing out only at 2560x1600 with and without anti-aliasing turned on.

And while you can’t buy them anymore, it’s interesting to note that anyone running a Radeon HD 4870 X2 is still in very solid shape; the card holds up incredibly well in Call of Duty, right up to 2560x1600."



It becomes evident that the GTX470 averages maybe 10% (or less) better performance than the 5850, and the GTX480 maybe 10% (or less) better than the 5870. Yet the GTX470 draws more power than a 5870, and the GTX480 consumes about as much power as a 5970.
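To put rough numbers on that trade-off, here's a quick Python back-of-the-envelope. The wattages are the published board TDPs (151W/188W for the 5850/5870, 215W/250W for the 470/480), not measured draw, and the 1.10 performance factor is just the eyeballed "10% or less" average from the reviews above, so treat it as a best case for the Nvidia cards:

# Rough perf-per-watt comparison. Performance factors are the eyeballed
# "~10% or less" averages from the benchmarks above; wattages are the
# published board TDPs, not measured draw.
pairs = [
    # (Nvidia card, perf vs rival, TDP, ATI rival, baseline perf, TDP)
    ("GTX 470", 1.10, 215, "HD 5850", 1.00, 151),
    ("GTX 480", 1.10, 250, "HD 5870", 1.00, 188),
]
for n1, p1, w1, n2, p2, w2 in pairs:
    ppw1, ppw2 = p1 / w1, p2 / w2
    print(f"{n1}: {ppw1:.4f} perf/W vs {n2}: {ppw2:.4f} perf/W "
          f"-> {ppw1 / ppw2:.0%} of the Radeon's efficiency")

Even granting Fermi the full 10%, it lands at roughly 75-85% of Cypress's perf-per-watt, which is the gap the power numbers above are pointing at.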

The Fermi generation is an improvement on the GTX200 architecture, but compared to the ATI HD 5x00 series, it seems like a boat load of fail... =/



------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Original Topic:

http://www.hexus.net/content/item.php?item=21996


The benchmark is Far Cry 2, which is an Nvidia-favoring game.


==Nvidia GTX285==

...What you see here is that the built-in DX10 benchmark was run at 1,920x1,200 at the ultra-high-quality preset and with 4x AA. The GeForce GTX 285 returns an average frame-rate of 50.32fps with a maximum of 73.13fps and minimum of 38.4fps. In short, it provides playable settings with lots of eye candy.

==Nvidia Fermi GF100==
Take a closer look at the picture and you will be able to confirm that the settings are the same as the GTX 285's. Here, though, the Fermi card returns an average frame-rate of 84.05fps with a maximum of 126.20fps and a minimum of 64.6fps. The minimum frame-rate is higher than the GTX 285's average, and the 67 per cent increase in average frame-rate is significant...

==Lightly Overclocked ATI 5870==
The results show that, with the same settings, the card scores an average frame-rate of 65.84fps with a maximum of 136.47fps (we can kind of ignore this as it's the first frame) and a minimum of 40.40fps - rising to 48.40fps on the highest of three runs.

==5970==
Average frame-rate increases to 99.79fps with the dual-GPU card, beating out Fermi handily. Maximum frame-rate is 133.52fps and minimum is 76.42fps. It's hard to beat the sheer grunt of AMD's finest, clearly.

Even after taking into account the Nvidia-favoring nature of Far Cry 2, the results are not too shabby... in line with what we've been expecting - that the GF100 is faster than the 5870. It will most likely be faster in other games too, but by a smaller margin... Now the only question is how much it will cost...
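For what it's worth, the arithmetic behind the quoted deltas checks out. A few lines of Python, using the average frame-rates from the Hexus screenshots above:

# Relative deltas between the quoted Far Cry 2 average frame-rates
# (1,920x1,200, ultra-high preset, 4x AA) from the Hexus article.
avg_fps = {
    "GTX 285": 50.32,
    "GF100 (Fermi)": 84.05,
    "HD 5870 (light OC)": 65.84,
    "HD 5970": 99.79,
}

def pct_faster(a, b):
    # percent by which card a outruns card b
    return (avg_fps[a] / avg_fps[b] - 1) * 100

print(f"GF100 vs GTX 285: +{pct_faster('GF100 (Fermi)', 'GTX 285'):.0f}%")             # ~67%
print(f"GF100 vs HD 5870: +{pct_faster('GF100 (Fermi)', 'HD 5870 (light OC)'):.0f}%")  # ~28%
print(f"HD 5970 vs GF100: +{pct_faster('HD 5970', 'GF100 (Fermi)'):.0f}%")             # ~19%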
 


Why can't you compare a single GTX480 to the dual-GPU 5970?
If you can't, then you shouldn't be allowed to compare the 5870 to the GTX480 either, since the latter has roughly 40% more transistors (3.0bn vs 2.15bn), and that wouldn't be fair.

Though I would hardly say NV is doomed if they can't beat it; the performance crown always goes back and forth between these two.

Also, since the GTX480 appears to be faster than the 5870 (can't say for certain, as it's not out yet), it would make sense that in SLI'd 480s vs. CrossFired 5870s the 480s would win as well, and likewise with three cards.
 


I think what gamer is trying to say is that for a while, ATI fans used the whole "a 5870 shouldn't be faster than a 295 because a 295 has two GPUs, so it is unfair to compare them" argument, and now that nVidia fans use it, according to the ATI fans they are full of it. I wouldn't say that was a hugely widespread problem, but certain people DID use that argument, so gamerk's point is still valid. Hypocrisy is hard to see in oneself 😛
 


I know, but not everyone said that (in fact, a rational person would have compared them). I thought it was fair to compare them: close in price, top-end cards, why not?
 
My main complaint was (and still is) that the GTX295s were being phased out and were extremely rare and expensive. On Newegg right now I don't see any. While the 5970 is very rare now as well, we have already heard of plans for future cards like it, so it is not dying out.
 


Well, I think NV made a bad decision phasing out the GTX2xx series so early; they could have gotten a few more people to buy them (considering the delay of Fermi).
 

You absolutely can compare them. A single 5970 is cheaper than a pair of 480s, and only uses up 2 slots (rather than 4). If the 5970 and 480 are priced similarly, then that is the logical comparison to make.
 



Couldn't agree more. They could have done a die shrink and spec refresh similar to what ATI did, and they wouldn't be bogged down in scrutiny and delays. They certainly are not shy about product refreshes, so I fail to understand why they didn't try to tap more potential out of the G200 arch.
 


40nm G92 at insane clock speeds would be...something. I'm leaning towards awesome :bounce:
 


Yes, but the SLI'd 480s would (probably) be faster, so you get a basic price/performance decision to make. Same argument as 480 vs 5870: one is faster, but more expensive.

I've said since the 9800GX2 that you simply can not compare dual-GPU cards against single-GPU variants in a fair manner. I held that stance when the 4870X2 came out, when the 295 came out, and I still hold the same stance. That's unlike a few people here, at least...

Comparing the entire 400 series and saying it's a failure because the best single-GPU card can't beat a dual-GPU card is silly. You can argue P/P, but that's about it.
 


P/P is EXACTLY what we are arguing, and it's the most important consideration to make. Nvidia's GTX 480 is more expensive than a 5970 but far less powerful.

There won't be a dual-chip version of Fermi, since the die is so big and hot. The chip is essentially a dual setup already; it's as big as two chips mashed together into one.

The single most important thing is price/performance, and as it stands ATI firmly holds the crown. It doesn't matter how many cores are on each card.
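If you want the p/p argument as actual numbers, here's a minimal sketch. The FPS figures are the Far Cry 2 averages from the Hexus piece above; the $700 GTX 480 figure is the pre-order price mentioned later in this thread, and the ~$600 HD 5970 figure is its launch list price, so treat both as placeholders until real street pricing exists:

# Price/performance sketch. FPS figures are the Far Cry 2 averages from
# the Hexus article; prices are placeholders (pre-order listing for the
# GTX 480, launch MSRP for the HD 5970).
cards = [
    ("GTX 480 (GF100)", 84.05, 700),
    ("HD 5970",         99.79, 600),
]
for name, fps, price in cards:
    print(f"{name}: {fps / price * 100:.1f} fps per $100")

On those placeholder prices the 5970 delivers close to 40% more frames per dollar, which is the whole argument in one number.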
 


And your fanboyism shows; the reason you don't like it is because the GTX480 is slower. Again, if they are priced close to each other, that is the fair comparison.
And I'll say it again: if this is a reason to say they can't be compared, then I say the 480 and 5870 can't be compared due to transistor count, as the 480 has roughly 40% more transistors (3.0bn vs 2.15bn).

And I think the only people that have said that the GTX4xx series is going to be a failure are the ATI fanboys.
 
Well, I wouldn't jump to conclusions on Fermi's price. We don't really have much (any, really) real evidence on what it is going to cost. It may be extremely expensive, or it may not be. If they have already written off this gen as a money loss, who knows what they might do.
 


How is there anything remotely fanboyish about that post?

ATI offers a faster card at a lower cost. If you can't figure out why that wins them the performance crown, you are an idiot.

I don't care about the number of cores or transistors.

The GTX480 is currently being pre-ordered at 700USD. This makes it MORE expensive than a 5970, yet it's considerably less powerful.

If Nvidia suddenly drops the price to 400USD, then I will no doubt agree that they have pulled it off. But right now that's not happening.
 


It is what YOU are arguing, apparently it isn't what he is arguing.



Wrong. There will be a dual-chip version; ASUS has already started designing one.



You're right, that is important. We have no idea what a GTX 480 will cost; the pre-order prices are incorrect, so you cannot use them as a basis for any argument. It has already been said dozens of times that those prices aren't based on any valid info, by nVidia at least.

We also have no idea how a GTX 480 will perform or how much heat/power it will produce/use. It could come close to matching a 5970 with only 250W of power or be miles away with 300W of power; you don't know and neither do I, so why would you just assume the more negative situation?

The 480 could turn out to dominate the high-end p/p market. For all you know, a 480 could be $300, and it would be better to get three of them in SLI for better performance than QuadFire 5970s. That is highly improbable, but at this point I can call that a fact as much as you can say it isn't.
 


I'll believe that when I see it. Have you seen the size of the GPU? Two of those on a card would be next to impossible without some crazy cooling, and probably a triple-slot shroud.
Just because ASUS is planning something (like a MARS card) doesn't mean it's going to be a card available or purchasable by the general gamer.

Perhaps you are right about the price, but in my experience the early prices the retailers have are spot on. They get info before anyone else and spread it before they should; most news in the gaming world comes from leaked info on Amazon...

Performance-wise, we have seen two benchmarks already, so we aren't going to see some massive change between those and 3rd-party benches.

And again, we already know it uses more power than a 5870; why else would it need 6+8-pin PCIe power connectors? And more power directly scales to more heat.

You are probably right, and I am being too quick to call it a failure. But I like to call it as I see it, and from the info I have gathered, I do not see this card succeeding to the same degree as the 5870 or 5970.
 
Everyone always overestimates heat dissipation. It really isn't THAT big of a deal to get rid of the heat; I wouldn't be worried about that.

The only thing in my mind that really presents a problem for a dual-GPU Fermi card is the 300W limit. Even if one card only uses 225W, two of those would put the board 150W over the limit, which is 50W more than what a 5970 can top out at.

So their dual-GPU Fermi card would have to be a couple of downclocked 470s or a couple of SEVERELY downclocked 480s.
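The 300W figure isn't arbitrary, by the way: it's the PCIe board limit of 75W from the slot plus 75W from a 6-pin connector and 150W from an 8-pin. A quick sketch of the budget, using the 225W-per-GPU assumption from above:

# PCIe power budget for a single board: slot + 6-pin + 8-pin connectors.
SLOT, SIX_PIN, EIGHT_PIN = 75, 75, 150
board_limit = SLOT + SIX_PIN + EIGHT_PIN  # 300W cap

per_gpu = 225             # assumed draw of one GF100, from the post above
dual_draw = 2 * per_gpu   # 450W for two GPUs on one board

scale = board_limit / dual_draw
print(f"Two {per_gpu}W GPUs want {dual_draw}W; each must be cut to "
      f"~{scale:.0%} of its power budget to fit under {board_limit}W.")

Each GPU gets roughly two-thirds of its normal power budget, which is exactly why only heavily downclocked chips would fit.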

The thing is, though, that's actually okay, because the only people who buy those cards are probably willing to OC them anyway. So all nVidia would have to do is underclock them at stock and provide an easy OCing program for their cards; presto, you have the fastest card on the market and still stay under 300W.

Price is the other factor obviously, but I've heard prices all over the map, so until I see the actual prices I can't comment on that.

EDIT: And btw, power draw does NOT map directly to temperatures. You aren't taking the cooler into consideration. Practically every watt a GPU pulls does come back out as heat, sure, but how hot the card actually runs depends on the cooling, so Fermi cards could run a lot cooler than everyone thinks they will, and as already mentioned, heat isn't that hard to get rid of anyways.

I work with 2000W lamps at work; we stick like 8 of them in a tank and have to cool that, so I know a thing or two about cooling 😛 A $1000 heat exchanger from McMaster-Carr and an $800 BERG chiller and you can cool 70,000W of heat easysauce.
 