Nvidia Fermi GF100 Benchmarks (GTX470 & GTX480)

http://www.tomshardware.com/reviews/geforce-gtx-480,2585.html


"Crysis is perhaps the closest thing to a synthetic in our real-world suite. After all, it’s two and a half years old. Nevertheless, it’s still one of the most demanding titles we can bring to bear against a modern graphics subsystem. Optimized for DirectX 10, older cards like ATI’s Radeon HD 4870 X2 are still capable of putting up a fight in Crysis.

It should come as no shock that the Radeon HD 5970 clinches a first-place finish in all three resolutions. All three of the Radeon HD 5000-series boards we’re testing demonstrate modest performance hits with anti-aliasing applied, with the exception of the dual-GPU 5970 at 2560x1600, which falls off rapidly.

Nvidia’s new GeForce GTX 480 starts off strong, roughly matching the performance of the company’s GeForce GTX 295, but is slowly passed by the previous-gen flagship. Throughout testing, the GTX 480 does maintain better anti-aliased performance, though. Meanwhile, Nvidia’s GeForce GTX 470 is generally outperformed by the Radeon HD 5850, winning only at 2560x1600 with AA applied (though it’s an unplayable configuration, anyway)"




"We’ve long considered Call of Duty to be a processor-bound title, since its graphics aren’t terribly demanding (similar to Left 4 Dead in that way). However, with a Core i7-980X under the hood, there’s ample room for these cards to breathe a bit.

Nvidia’s GeForce GTX 480 takes an early lead, but drops a position with each successive resolution increase, eventually landing in third place at 2560x1600 behind ATI’s Radeon HD 5970 and its own GeForce GTX 295. Still, that’s an impressive showing in light of the previous metric that might have suggested otherwise. Right out of the gate, GTX 480 looks like more of a contender for AMD's Radeon HD 5970 than the single-GPU 5870.

Perhaps the most compelling performer is the GeForce GTX 470, though, which goes heads-up against the Radeon HD 5870, losing out only at 2560x1600 with and without anti-aliasing turned on.

And while you can’t buy them anymore, it’s interesting to note that anyone running a Radeon HD 4870 X2 is still in very solid shape; the card holds up incredibly well in Call of Duty, right up to 2560x1600."



From these results, the GTX 470 is maybe 10% faster than the 5850 on average (often less), and the GTX 480 maybe 10% faster than the 5870. Yet the power consumption of a GTX 470 is higher than a 5870's, and the GTX 480 consumes about as much power as a 5970.
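To see what that would mean for efficiency, here is a rough perf-per-watt sketch in Python; the relative-performance factors and board-power figures are illustrative assumptions loosely based on the claims above, not measured numbers.

```python
# Back-of-envelope perf-per-watt comparison.
# Relative performance and board power are illustrative assumptions only,
# loosely matching the claims in the post above -- not measured values.
cards = {
    # name: (relative performance, assumed board power in watts)
    "HD 5850": (1.00, 170),
    "GTX 470": (1.10, 215),
    "HD 5870": (1.15, 190),
    "GTX 480": (1.25, 250),
}

for name, (perf, watts) in cards.items():
    print(f"{name}: {perf / watts * 100:.2f} perf per 100 W")
```

Under any numbers in that ballpark, the HD 5000 cards come out ahead on performance per watt, which is the point being made here.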

The Fermi generation is an improvement on the GTX 200 architecture, but compared to ATI's HD 5x00 series, it seems like a boatload of fail... =/



------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Original Topic:

http://www.hexus.net/content/item.php?item=21996


The benchmark is Far Cry 2, which is a game that favors Nvidia.


==Nvidia GTX285==

...What you see here is that the built-in DX10 benchmark was run at 1,920x1,200 at the ultra-high-quality preset and with 4x AA. The GeForce GTX 285 returns an average frame-rate of 50.32fps with a maximum of 73.13fps and minimum of 38.4fps. In short, it provides playable settings with lots of eye candy.

==Nvidia Fermi GF100==
Take a closer look at the picture and you will be able to confirm that the settings are the same as the GTX 285's. Here, though, the Fermi card returns an average frame-rate of 84.05fps with a maximum of 126.20fps and a minimum of 64.6fps. The minimum frame-rate is higher than the GTX 285's average, and the 67 per cent increase in average frame-rate is significant...

==Lightly Overclocked ATI 5870==
The results show that, with the same settings, the card scores an average frame-rate of 65.84fps with a maximum of 136.47fps (we can kind of ignore this as it's the first frame) and a minimum of 40.40fps - rising to 48.40fps on the highest of three runs.

==5970==
Average frame-rate increases to 99.79fps with the dual-GPU card, beating out Fermi handily. Maximum frame-rate is 133.52fps and minimum is 76.42fps. It's hard to beat the sheer grunt of AMD's finest, clearly.

Even after taking into account Far Cry 2's Nvidia bias, the results are not too shabby... in line with what we've been expecting: that the GF100 is faster than the 5870. It will most likely be faster in other games too, but by a smaller margin... Now the only question is how much it will cost...
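For what it's worth, the relative gaps follow directly from the average frame rates quoted above; a quick Python sketch using only the Hexus numbers:

```python
# Quick check of the relative gaps using the average frame rates quoted above.
avg_fps = {
    "GTX 285": 50.32,
    "GF100":   84.05,
    "HD 5870": 65.84,   # the lightly overclocked card from the article
    "HD 5970": 99.79,
}

def pct_faster(a, b):
    """How much faster card a is than card b, in percent."""
    return (avg_fps[a] / avg_fps[b] - 1) * 100

print(f"GF100   vs GTX 285: +{pct_faster('GF100', 'GTX 285'):.0f}%")   # ~67%
print(f"GF100   vs HD 5870: +{pct_faster('GF100', 'HD 5870'):.0f}%")   # ~28%
print(f"HD 5970 vs GF100:   +{pct_faster('HD 5970', 'GF100'):.0f}%")   # ~19%
```

That works out to roughly +67% over the GTX 285, +28% over the lightly overclocked 5870, and about 19% behind the 5970.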
 

Just wait till they can get the G92 down to the 28nm process bawhaha. =p


Yeah, look how great that turned out: it lopped frame rates in half and produced pretty much the same-looking game. Without a magnifying glass and still shots, you could never tell the difference.

Until I see a benchmark from a 3rd party like Tom's, AnandTech, HardOCP, etc., I'm just going to say everything is a lie.
 
Zirbmonkey wrote:

From the Link: Farcry 2 testing

Nvidia Fermi GF100
Min = 64.60 fps
Avg = 84.05 fps
Max = 126.20 fps

AMD Radeon HD 5970
Min = 76.42 fps
Avg = 99.79 fps
Max = 133.52 fps

Based on average fps scores, the 5970 is 19% faster than Nvidia's GF100. So, it doesn't beat the 5970.

...As for prices, we'll wait and see. Obviously ATI is the one setting the prices this round. Nvidia has a hard sell to make when their chips are 40% larger (and therefore at least 40% more expensive to manufacture).
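That manufacturing-cost point can be illustrated with a rough dies-per-wafer estimate; the baseline die area and wafer cost below are assumptions purely for illustration (only the ratio matters):

```python
import math

# Rough illustration of the "40% larger die = at least 40% more expensive" point.
# Baseline die area and wafer cost are assumed values; only the ratio matters.
WAFER_DIAMETER_MM = 300
WAFER_COST_USD = 5000          # assumed price of a processed 40nm wafer

def dies_per_wafer(area_mm2, d=WAFER_DIAMETER_MM):
    """Classic approximation for usable die sites on a circular wafer."""
    return int(math.pi * (d / 2) ** 2 / area_mm2
               - math.pi * d / math.sqrt(2 * area_mm2))

base_area = 330                  # assumed Cypress-class die area, mm^2
for label, area in [("baseline die", base_area),
                    ("40% larger die", base_area * 1.4)]:
    n = dies_per_wafer(area)
    print(f"{label}: ~{n} dies/wafer -> ~${WAFER_COST_USD / n:.0f} per die (before yield)")
```

Edge loss hits bigger dies harder, so per-die cost rises somewhat faster than area alone, and that's before yield is factored in.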

_______________________________________________________________________________
You are missing the point, gamerk36. The scores showing the 5970's average fps beating the GF100's come directly from the benchmarks presented in the article, and they are the only benchmarks relevant to this discussion. Bringing in benchies from other games has no place given the context of this thread.

As far as a single card trading blows with a dual-GPU card, that is a result of the difference between ATI's and Nvidia's strategies. ATI chose to make a dual-GPU card its high end, whereas Nvidia chose a single monolithic GPU for their high end. And given that the GF100 is supposed to be the next high-end GPU from Nvidia, the fact that one card is dual-GPU and the other single has nothing to do with it. Both the 5970 and GF100 are high-end cards competing in the same market segment.
 


Then point it out, please, with a link to those benchmarks.
 
I dunno why people would think the GF100 would be priced like the 5970 when it's 40% larger, not 100% larger; it's still two chips vs. one. The only difference is that ATI has smaller chips that are easier to make, and they've been making them longer, so they have high yields across the board. As Nvidia gets better at making the monstrosity that is GF100, it should be priced under a 5970, basically making its own price bracket: not in direct competition, but filling the void between a 5870 and a 5970. At least that is my prediction.
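The yield point can be made concrete with the simple Poisson defect model, yield = exp(-A * D0); the defect density and die areas below are assumed values purely for illustration:

```python
import math

# Poisson defect-yield model: yield = exp(-A * D0), where A is die area (cm^2)
# and D0 is defect density (defects/cm^2). Both inputs below are assumptions,
# just to show how quickly yield drops as the die grows.
D0 = 0.3                         # assumed defects per cm^2 on a maturing 40nm process

for label, area_mm2 in [("smaller die (~330 mm^2)", 330),
                        ("larger die (~530 mm^2)", 530)]:
    area_cm2 = area_mm2 / 100.0
    y = math.exp(-area_cm2 * D0)
    print(f"{label}: estimated yield ~{y:.0%}")
```

Whatever the real defect density is, the larger die yields disproportionately worse, which is exactly the "easier to make" argument above.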
 
From the Link: Farcry 2 testing

Nvidia Fermi GF100
Min = 64.60 fps
Avg = 84.05 fps
Max = 126.20 fps

AMD Radeon HD 5970
Min = 76.42 fps
Avg = 99.79 fps
Max = 133.52 fps

Based on average fps scores, the 5970 is 19% faster than Nvidia's GF100. So, it doesn't beat the 5970.

...As for prices, we'll wait and see. Obviously ATI is the one setting the prices this round. Nvidia has a hard sell to make when their chips are 40% larger (and therefore at least 40% more expensive to manufacture).

But Nvidia will also have a dual card:
http://www.fudzilla.com/content/view/17291/1/
 
I'm sure they'll try.

Given that they have heat problems from a single chip GF100 that's already drawing 300W, I'd be very interested to know how they think they can power the second chip, and remove the additional heat it generates.

Nvidia might just develop the first triple-slot graphics card, using 2x6-pin and 2x8-pin connectors on a single card. If anyone thought the 5970 was huge, the Nvidia double-chip card is going to have to be a behemoth.

Yes, but they will need to downclock or reduce the SP count to hit a 300W TDP.
 
Who wants to bet this card is going to be further delayed, or that if it is released without another delay, it will be buggy, with many unsatisfied users returning it to Newegg?

How big is this card supposed to be, about 16 inches? My impression from the hardware-forum chatter is that Nvidia is hurrying this product because of the initial delay, or because yields were not as expected, and I would not be surprised if a few minor details in quality testing fall through the cracks.

I hope Nvidia learned from their $200 million mistake in 2008 when they had their biggest recall from failing GPUs.

On a side note, if these video cards keep evolving at this rate, they'll be the size of motherboards in a few years!
 
Are you dense?

Read what I already wrote: "Given that they have heat problems from a single chip GF100 that's already drawing 300W"

So how exactly do you trim a single chip that's already at 300W to a low enough TDP to fit two of them on the same card and still stay within 300W? At this point I'd like to remind you that both the GF100 and 5970 use 6+8-pin connectors, and therefore have roughly the same wattage available. However, the 5970 runs cool with room to OC, while the GF100 already runs too hot.
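For reference, the "same available wattage" point follows straight from the PCI Express power-delivery limits (75W from the slot, 75W per 6-pin, 150W per 8-pin):

```python
# PCI Express power delivery budget for a card with one 6-pin and one 8-pin connector.
SLOT_W    = 75    # PCIe x16 slot
SIX_PIN   = 75    # 6-pin PEG connector
EIGHT_PIN = 150   # 8-pin PEG connector

total = SLOT_W + SIX_PIN + EIGHT_PIN
print(f"Max in-spec board power with 6+8-pin: {total} W")   # 300 W
```

So a 6+8-pin card tops out at 300W in spec regardless of vendor, and a dual-GPU Fermi would have to squeeze two chips under that same ceiling.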

Perhaps they could cut the chip in half and then just paste the halves in two separate locations?


Let's just get a couple of things clear: we don't KNOW how much heat this thing will put out or how much power it will draw yet.

All we have are educated guesses.

Don't get me wrong, I personally believe Fermi will be hot, use lots of power, and be an epic fail for the majority of consumers, that is, those who don't own 1200W power supplies and use water cooling as standard. But still, nothing is certain until it is released and properly reviewed by a reputable source.

Nvidia is very shady, and in my opinion arrogant, but one thing they don't do often is release products that are just awful (ATI 2900 XT, anyone?). So let's wait and see.


On a different note, great choice on your power supply, monkey.
 
Are you dense?

Read what I already wrote: "Given that they have heat problems from a single chip GF100 that's already drawing 300W"

So how exactly do you trim a single chip that's already at 300W to a low enough TDP to fit two of them on the same card and still stay within 300W? At this point I'd like to remind you that both the GF100 and 5970 use 6+8-pin connectors, and therefore have roughly the same wattage available. However, the 5970 runs cool with room to OC, while the GF100 already runs too hot.

Perhaps they could cut the chip in half and then just paste the halves in two separate locations?

Yes, I understand, but who wants all that power from that card? He will need a 1000W PSU and special water cooling. That is the ultra-high-end segment, and it's all about who has the faster card.
 
How many of you armchair engineers are actually engineers? These statements about the card's power usage, heat, and thermal details are ASSumptions. So stop whining about what Nvidia can and can't do until they do it. Show me a link where an Nvidia engineer says the card is already at the 300-watt limit and is going to need a 1000-watt PSU, lol. Ever hear of better efficiency?
 
I believe the answer is: 0.00001% of the market.

I apologize if I'm missing an extra zero or three.

Like the 5970 buyers make up the majority of the market?

Really, the only thing confirmed by Nvidia is that the card runs hotter than any GPU before it; that's fine as long as it can operate at high temps without failure for long periods of time, imo.

I dunno where people are pulling 300W, 300 TDP, 300-whatever out of their asses and making vast assumptions about things Nvidia's engineering team has been mulling over for months now, thinking they know more.

On a side note about the dual-GPU card: I could see them making a tri-slotted beast that takes one slot above and below it in a dual-PCB sort of card. Then I'll rage about tri-slot cards, and somehow two years later it becomes the standard (7950 GX2, anyone?).
 
This is just speculation about TDP; we need more details to draw a final conclusion, and more game benchmarks to see the real difference in performance.
 
OK, the GTX 280 was 236W TDP and had less than half the transistors of Fermi.

It stands to reason that Fermi is *at least* 236W, and more likely north of 250W. There are no magic tricks; the card has been seen with one 8-pin and one 6-pin connector, which is 225W minimum right now.

If it is 280W like Charlie says, that's it. There's no point in making a dual card below 300W when you already have a single at 280W.

Even at 250W, it is pretty unlikely that any cut-down dual-GPU card under 300W would increase performance.

And don't say "but they did it last time with the GTX 295." As I've been saying for the past six months, Nvidia could only make a dual card because they went from 65nm to 55nm between the GTX 280 and GTX 295. That will not be happening this time around.
 
At this point a dual-GPU GF100 card is basically impossible based on every rumor, specification, or published article we have ever read about Fermi. Fermi WILL need a good deal more power than the 5870; that much is a fact just based on the known specifications. How much more, we don't know. The problem is that the 5870 requires so much power that ATI could barely build a dual-GPU card out of it, and had to underclock it heavily.

That said, Nvidia doesn't need, nor will it benefit from, a dual-GPU card. They will have the fastest single-GPU card, which should beat out two 5970s when SLI'd. That is not a horrible position, but it's not great either. The problem comes if ATI releases a higher-clocked 5890 that comes close to the GF100; then both the GF100 and the GTX 360 card would be basically redundant and pointless.
 

Er... it's "simulated" because they benchmarked the 5870 in a different, albeit extremely similar, system. They didn't get NVIDIA's test rig to use; they just got the system specs and went out and bought the components. They benched a GTX 285 and came within a few % of NVIDIA's results, so give a few % leeway for the 5870 and you're looking at a reasonable "simulated" result. It's not some wild guess; read the article.
 


More reason not to take the Fermi benchmark seriously then, since those results were provided by Nvidia...
 
Did you even read what I typed? The GF100 results are NVIDIA's. The GTX 285 results are NVIDIA's. BUUUUUT... Hardware Canucks built a system as close as possible to the test rig, benchmarked a GTX 285 on it, and got results within a few % of NVIDIA's GTX 285 results. They then benchmarked an HD 5870 and put those results in with the two results from NVIDIA. Since it was not the exact same system, they stated that the results were "simulated."
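In other words, the GTX 285 run is used as a cross-rig calibration check. A small sketch of that logic; all fps values here are made-up placeholders, since the article's raw numbers aren't quoted in this thread:

```python
# Cross-rig sanity check, as described above: benchmark a common card (GTX 285)
# on both systems, use the difference as an error margin, then read the HD 5870
# result with that margin in mind. All fps values below are hypothetical.
nvidia_rig_gtx285 = 50.0      # placeholder: NVIDIA's own GTX 285 result
our_rig_gtx285    = 48.8      # placeholder: same card on the rebuilt rig
our_rig_hd5870    = 66.0      # placeholder: HD 5870 on the rebuilt rig

margin = abs(our_rig_gtx285 - nvidia_rig_gtx285) / nvidia_rig_gtx285
low, high = our_rig_hd5870 * (1 - margin), our_rig_hd5870 * (1 + margin)
print(f"Rig-to-rig margin: {margin:.1%}")
print(f"'Simulated' HD 5870 result: {our_rig_hd5870:.1f} fps "
      f"(roughly {low:.1f}-{high:.1f} fps expected on NVIDIA's rig)")
```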
 


If I recall correctly, the FX 5800 WAS horrible; they pushed power consumption and heat trying to beat the 9700.
 


My point was about Nvidia's testing rig, not the "simulated" HD 5870 results. The article doesn't mention the specification of Nvidia's testing rig, so they got their GTX 285 results close to Nvidia's and "assumed" their rig was similar to Nvidia's. Nvidia's testing rig may be heavily optimized for their own GPUs, especially Fermi. Since the GTX 285 is not the same architecture as Fermi, Fermi may get an even bigger boost in the test results. Since we have to make too many assumptions about Nvidia's testing rig, they're not good results to use for comparison against anything.
 