Nvidia Fermi GF100 Benchmarks (GTX470 & GTX480)

http://www.tomshardware.com/reviews/geforce-gtx-480,2585.html


"Crysis is perhaps the closest thing to a synthetic in our real-world suite. After all, it’s two and a half years old. Nevertheless, it’s still one of the most demanding titles we can bring to bear against a modern graphics subsystem. Optimized for DirectX 10, older cards like ATI’s Radeon HD 4870 X2 are still capable of putting up a fight in Crysis.

It should come as no shock that the Radeon HD 5970 clinches a first-place finish in all three resolutions. All three of the Radeon HD 5000-series boards we’re testing demonstrate modest performance hits with anti-aliasing applied, with the exception of the dual-GPU 5970 at 2560x1600, which falls off rapidly.

Nvidia’s new GeForce GTX 480 starts off strong, roughly matching the performance of the company’s GeForce GTX 295, but is slowly passed by the previous-gen flagship. Throughout testing, the GTX 480 does maintain better anti-aliased performance, though. Meanwhile, Nvidia’s GeForce GTX 470 is generally outperformed by the Radeon HD 5850, winning only at 2560x1600 with AA applied (though it’s an unplayable configuration, anyway)"




"We’ve long considered Call of Duty to be a processor-bound title, since its graphics aren’t terribly demanding (similar to Left 4 Dead in that way). However, with a Core i7-980X under the hood, there’s ample room for these cards to breathe a bit.

Nvidia’s GeForce GTX 480 takes an early lead, but drops a position with each successive resolution increase, eventually landing in third place at 2560x1600 behind ATI’s Radeon HD 5970 and its own GeForce GTX 295. Still, that’s an impressive showing in light of the previous metric that might have suggested otherwise. Right out of the gate, GTX 480 looks like more of a contender for AMD's Radeon HD 5970 than the single-GPU 5870.

Perhaps the most compelling performer is the GeForce GTX 470, though, which goes heads-up against the Radeon HD 5870, losing out only at 2560x1600 with and without anti-aliasing turned on.

And while you can’t buy them anymore, it’s interesting to note that anyone running a Radeon HD 4870 X2 is still in very solid shape; the card holds up incredibly well in Call of Duty, right up to 2560x1600."



It becomes evident that the GTX470 averages maybe 10% (or less) better than the 5850, and the GTX480 averages maybe 10% (or less) better than the 5870. Yet the GTX470 draws more power than a 5870, and the GTX480 consumes as much power as a 5970.
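A rough way to see that gap is to put approximate numbers side by side. A minimal sketch, assuming ballpark relative-performance and board-power figures in line with this thread and the launch reviews (all of them assumptions, not measurements):

```python
# Rough performance-per-watt comparison.
# Relative performance is normalized to the HD 5850 = 1.00; both the
# performance ratios and the board-power figures are ballpark assumptions
# taken loosely from this thread, not measured data.
cards = {
    # name: (relative_performance, approx_board_power_watts)
    "HD 5850": (1.00, 151),
    "HD 5870": (1.20, 188),
    "GTX 470": (1.10, 215),
    "GTX 480": (1.30, 250),
}

for name, (perf, watts) in cards.items():
    print(f"{name}: {perf / watts * 1000:.1f} perf points per kW")
```

On those assumed numbers the 5850 and 5870 land around 6.4-6.6 points per kW while the 470 and 480 land around 5.1-5.2, which is exactly the perf-per-watt gap being complained about here.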

The Fermi generation is an improvement on the GTX200 architecture, but compared to the ATI HD 5x00 series, it seems like a boatload of fail... =/



------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Original Topic:

http://www.hexus.net/content/item.php?item=21996


The benchmark is FarCry2, which is an Nvidia-favoring game.


==Nvidia GTX285==

...What you see here is that the built-in DX10 benchmark was run at 1,920x1,200 at the ultra-high-quality preset and with 4x AA. The GeForce GTX 285 returns an average frame-rate of 50.32fps with a maximum of 73.13fps and minimum of 38.4fps. In short, it provides playable settings with lots of eye candy.

==Nvidia Fermi GF100==
Take a closer look at the picture and you will be able to confirm that the settings are the same as the GTX 285's. Here, though, the Fermi card returns an average frame-rate of 84.05fps with a maximum of 126.20fps and a minimum of 64.6fps. The minimum frame-rate is higher than the GTX 285's average, and the 67 per cent increase in average frame-rate is significant...

==Lightly Overclocked ATI 5870==
The results show that, with the same settings, the card scores an average frame-rate of 65.84fps with a maximum of 136.47fps (we can kind of ignore this as it's the first frame) and a minimum of 40.40fps - rising to 48.40fps on the highest of three runs.

==5970==
Average frame-rate increases to 99.79fps with the dual-GPU card, beating out Fermi handily. Maximum frame-rate is 133.52fps and minimum is 76.42fps. It's hard to beat the sheer grunt of AMD's finest, clearly.

Even after taking into account the Nvidia-favoring nature of FarCry2, the results are not too shabby... in line with what we've been expecting - that the GF100 is faster than the 5870. It will most likely be faster in other games, but to a smaller degree... Now the only question is how much it will cost...
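The percentage claims above are easy to check from the quoted averages; a quick sketch (fps values copied from the Hexus excerpt):

```python
# Verifying the deltas from the quoted Far Cry 2 average frame-rates.
avg_fps = {
    "GTX 285": 50.32,
    "GF100": 84.05,
    "HD 5870 (light OC)": 65.84,
    "HD 5970": 99.79,
}

base = avg_fps["GTX 285"]
for name, fps in avg_fps.items():
    print(f"{name}: {fps:.2f} fps ({fps / base - 1:+.0%} vs GTX 285)")
```

This reproduces the article's 67 per cent figure for GF100 over the GTX 285, and puts GF100 about 28 per cent ahead of the lightly overclocked 5870 and about 16 per cent behind the 5970.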
 

nfail

If ATI is better at 150 watts, 200 watts, 250 watts, and 300 watts, ATI will be better at any wattage you pick.

By most accounts Fermi is hot and power hungry; ignoring the PCI-E standard isn't going to change the fact that it is an inferior chip, performance-per-watt-wise.

So they can make a dual Fermi at 600 watts, and ATI can increase the voltage on the 5870, overvolt and overclock it so it goes all the way up to 300 watts, then add another 1GB of RAM, stick two of them onto a huge PCB, label it the 5999 ULTRA XXX EDITION - and it will be faster than the same dual Fermi.

If nVidia can't beat ATI below 300 watts, they can't beat ATI above 300 watts either. Is this hard to understand?
 

wh3resmycar



For an ATI fanboy, you aren't aware of the official ATI overvolt tool?
 

nfail

I'm not an ATI fanboy; I'm just pissed at nVidia for sucking so bloody badly recently.

ATI can offer tools to overclock their cards if they want, but the cards have to be <300 watts OUT OF THE BOX in order to meet the PCI-E standard. Nobody is saying nVidia can't do that, but their dual-GPU card has to be <300 watts out of the box, just like the 5970 is.
 


Can't do 3D surround with one card, need to spend another $600 for a second one. :lol:
But none of those require Fermi, and two GTX280s might do some of those better.
 
The PCI-SIG 300W spec is not an issue for exotics and home-built rigs; where it becomes an issue is for anything you want as a standard-issue rig, or something Dell-xotic like an Alienware setup where they still adhere to the standards.

This isn't an issue for home-builders or for people who make cards like the ASUS ARES or MARS; they can easily break spec as long as they are stable, with all the power coming from elsewhere.

IMO AIBs shouldn't build 'to spec' but 'at least to spec', meaning they don't limit the card: they make sure it at least meets spec, but they don't cut corners just because the card doesn't have to pull 300W 'under normal working conditions'. This gives people headroom to tweak.

As mentioned before in the other thread, I don't think any true enthusiast will care about a 500W card as long as the performance lives up to that requirement, which is why all the pre-R600 power whining was silly until you saw the performance.

It may not become an OEM darling, but really, who cares? Most OEM cards are castrated versions of the retail ones anyway, and any enthusiast worth their salt will try to buy bare-bones and install their own.
 

RealityRush


For the moment, yes, they are. When Fermi is released, will they still be? Maybe; we'll have to see.



No, by one or two accounts it is, namely Charlie's. Ignoring the PCI-E standard is done all the time; ATI is doing it on their own dual-GPU cards (see the ARES 5970 :p), so who cares if nVidia does it? Why does it piss you off so much that nVidia does the same thing ATI is doing?



....... no, it won't. They can do that with a 5970, and they are, as I already mentioned, with the ARES 5970 cards. Whether that will be faster than the dual-GPU Fermi card remains to be seen, but I doubt it would be if a 480 is faster than a 5870: a 480x2 card should beat a 5870x2 card. I am speculating that a 480 will be faster than a 5870, of course; benchmarks seem to indicate the 480 either matches or beats the 5870.



They will be beating ATI below 300 watts......... a 480 allegedly beats a 5870 and a 470 allegedly beats a 5850, so it stands to reason that they can in fact make a dual-GPU card that will beat the 5970.

And again, if you are an enthusiast, who honestly cares how much power it uses? Do you buy OEM computers or something? I've got a 1050W power supply right now itching for something power-hungry to actually make use of it. I might just buy the MARS Fermi dual-GPU super ultra OMGWTFBBQ card for the hell of it with the extra money from my job lying around, and be future-proofed for the next 3-5 years o_O
 

daedalus685



There are way too many unknown variables for one to assume that. If, watt for watt, the 5000 series does indeed end up faster than the 400s, it will be very difficult to create a dual-Fermi card faster than some of the exotic 5970s we will soon see.

There is a chance the 470 will be priced to go head-to-head with a 5850... but I really doubt it. Currently it looks like we will see the 470 going against the 5870, akin to the 260/4870. Thus the 470 beating the 5850 will likely be as meaningless as the 5850 beating the GT240. What arbitrary numerical position a card has from the top of a lineup doesn't really matter compared to how the cards line up in terms of price (though I admit I could be pleasantly surprised).
 
I thought the HD5870 used 183 watts whilst the HD5850 used closer to 150. If ATI (or some board maker) creates an HD5950 and an HD5930, this'll open up the route for many people who don't have an X58 motherboard.

Also, there's still the HD5890 to be considered, which will hopefully not be imaginary.
 

RealityRush


I'm not talking about price; I'm talking about performance.

If a 480 beats a 5870, regardless of how much it costs or how much power it uses, then a MARS dual-GPU 480 SHOULD beat an ARES dual-GPU 5870 (5970), since both cards will have their max clocks, memory, everything.
 

notty22



Correct, and the 5770 uses ~101W. BUT the 5830 uses 154W, showing why performance-per-watt is not the end-all performance metric.
Factor it in somewhere, just the same as is done with Intel vs. AMD CPUs, or Intel's own latest quads: the i7 920 vs. the i5 750 (130 watts vs. 95).
 


Based on what Fermi-scaling information, though?

The GTX480 may be a heck of a card, but Fermi may have terrible (or awesome) scaling, and it may not scale in the same fashion as Cypress. So it's still too early to even comment on that.
 

daedalus685



I know what you are talking about... But power considerations play a huge role in how fast they can make these cards. Every watt of power is a watt of heat. A MARS card would be on the order of 500W if some of the 480 rumours are true. That would stretch the limits of what modern technology can cool and still be considered a video card.

If a 480 is already consuming as much power as a 5970 (unlikely, but that is the high end of the rumours), one could trivially produce a 1GHz-clocked 5970 for the same power budget as a dual 480 (and I am sure there are enough cherry-picked GPUs that reach 1GHz out there to produce a 1:1 ratio of 1337-edition 5970s to MARS 480s), maybe even higher clocks. This would require no more exotic cooling than the MARS would, and it would likely still be faster. Performance/watt is a huge metric for how fast a board maker can make a crazy card, price totally ignored.

The 480 cannot break the laws of physics. It may well be the fastest GPU on the planet, but with some of the 5970 super editions already claiming 900+MHz stock clocks, a clever engineer is going to produce a 5000-series card for every watt a 400-series card can reach. If the 5000-series cards are indeed that much more efficient, they will likely always perform better (obviously at a huge cost).
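The power-budget argument rests on the usual first-order approximation that dynamic power scales linearly with clock and quadratically with voltage (P ∝ f·V²). A minimal sketch, assuming rough 5870-like stock figures (850MHz at ~1.16V and ~188W board power, all assumptions rather than measured values):

```python
# First-order dynamic-power model: P ~ f * V^2 (leakage ignored).
# Stock figures below are assumptions in the ballpark of a Radeon HD 5870,
# not measured values.
def scaled_power(p_stock, f_stock_mhz, v_stock, f_new_mhz, v_new):
    """Estimate board power after a clock/voltage change."""
    return p_stock * (f_new_mhz / f_stock_mhz) * (v_new / v_stock) ** 2

# Pushing an assumed 5870 core from 850 MHz / 1.16 V to 1000 MHz / 1.25 V:
per_core = scaled_power(188, 850, 1.16, 1000, 1.25)
print(f"~{per_core:.0f} W per core, ~{2 * per_core:.0f} W for a dual-GPU board")
```

On those assumptions a 1GHz dual-Cypress card lands around 500W, i.e. roughly the same power envelope being rumoured for a dual 480, which is the point being made above.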

Also, you claim that if a 480 beats a 5870, a MARS 480 should beat an ARES.. yet it is entirely likely the ARES will use highly overclocked 5870 cores. We have no idea whether a 480 will beat a heavily clocked 5870 yet, only that in some form it likely beats it at stock (by at least 10%).


As to what APE is getting at, he is correct that we have no idea what the SLI scaling will be like, but given the track record I think we can safely assume SLI will scale on par with CrossFire.
 

daedalus685



Nah, nothing is certain, though the architecture is not fundamentally different from the GTX200s'. I would be floored if they managed to pooch the SLI scaling in there somewhere. The 200s scaled exceptionally well, and I would expect the same of the 400s. The extra heat from the suckers would worry me more.
 

RealityRush



I would also doubt scaling will be an issue; if anything, SLI has a better track record than X-fire.

As for the clocks on an Ares 5970:
http://www.pcgameshardware.com/aid,705735/Asus-Ares-HD-5970-done-right-first-benchmarks/News/

They aren't overclocked; they are just stock 5870 clocks, unlike the downclocked 5970. An ARES 5970 is just two stock-clocked 5870s, each with 2GB of dedicated VRAM, plus special cooling; that is pretty much it. And a 480x2 MARS will be the same thing, I assume, as the MARS 285: stock-clocked 480s with more VRAM and better cooling. So if a stock 480 beats a stock 5870, as it seems it will from the currently leaked benchies, a MARS 480 will beat an ARES 5970 (again, assuming scaling is fine). Worst-case scenario they have the same performance (going by the nVidia leaked benchies again), at which point ASUS had better price them the same or they won't sell any bloody MARS 480s.

The only real issue at this point for a MARS 480 is cooling, and cooling a 500-600W card is obviously not a simple thing. On the other hand, Daedalus, it really isn't as hard as you make it sound ("stretch the limits of what modern technology can cool"). I call bulls**t: considering 720,000 BTU/h heat exchangers exist, I really don't think it is beyond our capacity as humans to cool a measly 500-600W. Sure, it might cost a lot, but if it beats an ARES 5970 and there are only 1,000 produced, they obviously won't skimp on the cooling. A MARS 480 doesn't have to break the laws of physics at all; it might just have to use water cooling as standard, which is fine because I think most people that buy those cards use water cooling anyway :p
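For what it's worth, the unit conversion backs that up: one watt is about 3.412 BTU/h, so even a 600W card is a small thermal load in absolute terms. A one-line check:

```python
# Watts to BTU/h: 1 W is roughly 3.412 BTU/h.
watts = 600
btu_per_hour = watts * 3.412
print(f"{watts} W = {btu_per_hour:.0f} BTU/h")
# ~2047 BTU/h, about 0.3% of the 720,000 BTU/h exchanger mentioned above.
```

The hard part is moving that heat off a two-slot card quietly, not the raw quantity of heat.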
 
As far as the dual-GPU vs. single-GPU comparisons go (the 5870 vs. the 295, and Fermi vs. the 5970), I believe they're fair.

What I do not believe is that you can say "the 5870 is a failure because it didn't beat a dual-GPU card" or "Fermi is a failure because it couldn't beat a dual-GPU card".

The next generation of single GPUs is "supposed" to beat the previous generation of dual-GPU cards.

However, as far as market value goes, you will still want to compare price to performance, and if the rumors of price are true, you can and should compare them to like-priced products.
 


Yes, this is "illogical fan drivel", unless you are thinking of using it on very outdated video games or on very low-res monitors. No one uses 24x AA on the ATI cards on average monitors now. No one is doing 16x on newer cards either.

I imagine it might make some sense if you are using a monitor that is 800x600, or maybe even 1024x768. It wouldn't be something people with HD monitors are going to use.
 

True, but that can be said of all new designs.

We certainly hope so.

I think they have their fingers in too many other pies for this to be the case, but who knows?
 


HD in my book means 1600x1000 (not sure of the exact res on this), 1920x1080, and 1920x1200. HDTV-like resolutions.

Why do you believe Nvidia wouldn't release a CSAA level that isn't practical for normal use?

Every card I've seen with AA capability has shipped AA levels out of reach at the normal resolutions of the time. No one with a 5000-series card is running at 24x on a normal monitor. They aren't even using 8x supersample AA. I don't believe anyone is running at 16x on today's hardware either; most games don't even support it, even though it is offered by the video cards.

It's not even a big deal. They add things like this to help move toward it in the future. It might be useful for graphics art too, but not for real-time 3D video games.
 

nfail



lol, dream on. All the benchmarks showing "wins" for Fermi are caused by the 5870 running out of memory, and that will be solved by the six-screen Eyefinity edition card sporting 2GB.

It will never get close to the 5970 except in games that aren't CrossFire-friendly. It will lose to the 5870, and maybe even the 5850, in games like DiRT 2 and AvP. It is a disaster, and it's a disaster of nVidia's own making. I wonder how some of you are going to cope when you see the benchmarks in two weeks.
 

randomizer

I thought the 300W thing was for PCI-SIG certification, not ATX. In either case, consumers will not be looking for these certifications (be honest, have you ever looked for them?). Businesses may look for them, but those setting up massive CUDA farms will not be looking for a "300W certified A+++" badge; they will be looking at actual power consumption. They're going to be running hundreds of these things, remember. There's obviously a sensible limit to how much power the card should draw to maintain compatibility with power supplies under 1kW, but 300W for a dual-GPU card of this size is going to be pushing it without scaling down performance a lot. It may be better to simply buy more single cards.
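That per-card difference multiplies quickly at farm scale. A toy example with assumed card counts and wattages (illustrative only, not real product figures):

```python
# How a per-card power gap multiplies across a hypothetical CUDA farm.
n_cards = 200  # assumed farm size, for illustration
for label, watts in {"250 W card": 250, "300 W card": 300}.items():
    print(f"{n_cards} x {label}: {n_cards * watts / 1000:.0f} kW total")
# A 50 W per-card gap becomes a 10 kW difference at this scale.
```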

EDIT: Oh look, there's another whole page...
 

RealityRush


It is a PCI-SIG certification :p

ATX generally deals with physical sizing more than anything else, doesn't it?
 