Nvidia Fermi GF100 Benchmarks (GTX470 & GTX480)

http://www.tomshardware.com/reviews/geforce-gtx-480,2585.html


"Crysis is perhaps the closest thing to a synthetic in our real-world suite. After all, it’s two and a half years old. Nevertheless, it’s still one of the most demanding titles we can bring to bear against a modern graphics subsystem. Optimized for DirectX 10, older cards like ATI’s Radeon HD 4870 X2 are still capable of putting up a fight in Crysis.

It should come as no shock that the Radeon HD 5970 clinches a first-place finish in all three resolutions. All three of the Radeon HD 5000-series boards we’re testing demonstrate modest performance hits with anti-aliasing applied, with the exception of the dual-GPU 5970 at 2560x1600, which falls off rapidly.

Nvidia’s new GeForce GTX 480 starts off strong, roughly matching the performance of the company’s GeForce GTX 295, but is slowly passed by the previous-gen flagship. Throughout testing, the GTX 480 does maintain better anti-aliased performance, though. Meanwhile, Nvidia’s GeForce GTX 470 is generally outperformed by the Radeon HD 5850, winning only at 2560x1600 with AA applied (though it’s an unplayable configuration, anyway)"




"We’ve long considered Call of Duty to be a processor-bound title, since its graphics aren’t terribly demanding (similar to Left 4 Dead in that way). However, with a Core i7-980X under the hood, there’s ample room for these cards to breathe a bit.

Nvidia’s GeForce GTX 480 takes an early lead, but drops a position with each successive resolution increase, eventually landing in third place at 2560x1600 behind ATI’s Radeon HD 5970 and its own GeForce GTX 295. Still, that’s an impressive showing in light of the previous metric that might have suggested otherwise. Right out of the gate, GTX 480 looks like more of a contender for AMD's Radeon HD 5970 than the single-GPU 5870.

Perhaps the most compelling performer is the GeForce GTX 470, though, which goes heads-up against the Radeon HD 5870, losing out only at 2560x1600 with and without anti-aliasing turned on.

And while you can’t buy them anymore, it’s interesting to note that anyone running a Radeon HD 4870 X2 is still in very solid shape; the card holds up incredibly well in Call of Duty, right up to 2560x1600."



From these results, the GTX 470 performs maybe 10% (often less) better than the 5850 on average, and the GTX 480 maybe 10% (often less) better than the 5870 on average. Yet the GTX 470 draws more power than a 5870, and the GTX 480 consumes about as much power as a 5970.
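To make that tradeoff concrete, here's a minimal performance-per-watt sketch. The numbers are purely hypothetical placeholders (not measured figures for any of these cards); swap in whatever review averages and load-power readings you trust.

```python
# Rough performance-per-watt comparison. The figures below are placeholders
# standing in for whatever numbers you trust (e.g. average fps across a
# suite and measured load power); they are NOT real GTX/Radeon measurements.
def perf_per_watt(perf, watts):
    return perf / watts

# Hypothetical example: card A is 10% faster than card B but draws 25% more power.
perf_a, power_a = 110.0, 250.0   # assumed relative performance / load watts
perf_b, power_b = 100.0, 200.0

ratio = perf_per_watt(perf_a, power_a) / perf_per_watt(perf_b, power_b)
print(f"Card A delivers {ratio:.2f}x the perf/watt of card B")  # ~0.88x here
```

In other words, a card can win on raw frame rate and still lose on efficiency, which is exactly the complaint being made here.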

The Fermi generation is an improvement on the GTX 200 architecture, but compared to ATI's HD 5x00 series it seems like a boatload of fail... =/



------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Original Topic:

http://www.hexus.net/content/item.php?item=21996


The benchmark is Far Cry 2's built-in test, a game that tends to favor Nvidia hardware.


==Nvidia GTX285==

...What you see here is that the built-in DX10 benchmark was run at 1,920x1,200 at the ultra-high-quality preset and with 4x AA. The GeForce GTX 285 returns an average frame-rate of 50.32fps with a maximum of 73.13fps and minimum of 38.4fps. In short, it provides playable settings with lots of eye candy.

==Nvidia Fermi GF100==
Take a closer look at the picture and you will be able to confirm that the settings are the same as the GTX 285's. Here, though, the Fermi card returns an average frame-rate of 84.05fps with a maximum of 126.20fps and a minimum of 64.6fps. The minimum frame-rate is higher than the GTX 285's average, and the 67 per cent increase in average frame-rate is significant...

==Lightly Overclocked ATI 5870==
The results show that, with the same settings, the card scores an average frame-rate of 65.84fps with a maximum of 136.47fps (we can kind of ignore this as it's the first frame) and a minimum of 40.40fps - rising to 48.40fps on the highest of three runs.

==5970==
Average frame-rate increases to 99.79fps with the dual-GPU card, beating out Fermi handily. Maximum frame-rate is 133.52fps and minimum is 76.42fps. It's hard to beat the sheer grunt of AMD's finest, clearly.
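For anyone who wants to sanity-check those percentages, here is a minimal Python sketch using only the Hexus averages quoted above; the ~67% gain over the GTX 285 and the roughly 28% gain over the overclocked 5870 fall straight out of it.

```python
# Relative deltas from the Far Cry 2 numbers quoted above
# (1920x1200, ultra-high preset, 4x AA; avg/min/max fps).
results = {
    "GTX 285":        {"avg": 50.32, "min": 38.40, "max": 73.13},
    "GF100 (Fermi)":  {"avg": 84.05, "min": 64.60, "max": 126.20},
    "HD 5870 (OC'd)": {"avg": 65.84, "min": 40.40, "max": 136.47},
    "HD 5970":        {"avg": 99.79, "min": 76.42, "max": 133.52},
}

baseline = results["GTX 285"]["avg"]
for card, fps in results.items():
    delta_vs_285 = (fps["avg"] / baseline - 1) * 100
    print(f"{card:15s} avg {fps['avg']:6.2f} fps  ({delta_vs_285:+5.1f}% vs GTX 285)")

# GF100 vs the overclocked 5870, the comparison this thread cares about:
gain = (results["GF100 (Fermi)"]["avg"] / results["HD 5870 (OC'd)"]["avg"] - 1) * 100
print(f"GF100 over HD 5870: {gain:+.1f}%")   # roughly +28% in this one benchmark
```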

Even after taking Far Cry 2's Nvidia bias into account, the results are not too shabby... in line with what we've been expecting: that the GF100 is faster than the 5870. It will most likely be faster in other games too, but by a smaller margin... Now the only question is how much it will cost...
 
People seem to be making up a lot of random crap in this thread. I can almost guarantee a dual-GPU Fermi wouldn't require any more special cooling than a 5970 would, assuming it was kept within spec. And even pushed to its maximum power usage, it wouldn't be that big of a deal; any gaming case worth its salt could handle it with air cooling alone. It's not as if multi-GPU cards or SLI/CrossFire didn't already need special consideration...

The 5970 can draw up to 400W, but everyone seems to forget that... A dual-GPU Fermi could probably use just over 500W fully maxed out if nothing in either GPU were cut back, but it's bloody likely that if they made a dual-GPU card they WOULD cut parts of the chip, disable others, and throttle back clocks to stay within the 300W limit, much like ATI did with the 5970. Then they'd follow suit as ATI did and simply make the card easily overclockable, with the ability to draw its maximum wattage.

That is how you get around the 300W limit: design the card for more, throttle back the consumer version, and make it EASY for consumers to throttle it back up. It's like spread spectrum clocking on computer motherboards, which reduces EMI but causes stability issues, especially for overclockers. The standard says manufacturers have to have it, so they put it on all motherboards, but then they just give the user the option to disable it in the BIOS (usually).

Oh, and to the person who said a Fermi card would be 16 inches long: the ones shown at CES were pretty much the exact same length as a 5870...




/end thread
 


Well, these early 'leaks'/'previews' accomplish exactly what nVidia wants: anyone who has been waiting will keep waiting, because a shaky-cam number promises the theoretical possibility that something might arrive that could compete with ATI's high end. That's all the loyal want: a glimmer of hope to keep them from going out and buying ATI right now.

And even if 'mass availability' slips to April/May, they can say in March, "here it is, we'll get some to you in 4-8 weeks, have faith, it's worth the wait," and likely string along 90% of the people waiting right now all the way until summer (nine months) on nothing but the promise of these benchmarks, without ever doing a real head-to-head for months.
 
Retard hotline!

[Sabot00]Here to help all your autistic friends!
PCI-E (kept to 1.x specs) = 75W
6-Pin = 75W
8-Pin = 150W
75 + 75 + 150 = 300!

OK, fine, since you need it spelled out, I'll clarify. If ATI had straight up used two 5870 GPUs/boards (I'm not sure exactly how the 5970 was built, whether it was a dual-PCB design or not, though I thought it was) without gimping them in any way, the card would require something like 376W of power, nearly 400W.

As I explained, ATI just cut back the GPUs to keep it under the 300W limit, but then gave you overclocking headroom and the ability to draw up to 294W... congratulations on the reading comprehension fail.

Assuming the Fermi architecture is more advanced and efficient than ATI's, as nVidia says it is, they should be able to do the same: push it right up to 300W and handily beat the 5970.
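A rough sketch of what that throttling would actually look like, using the connector budget (75W slot + 75W 6-pin + 150W 8-pin) and the 376W/294W figures from this post. The f·V² relation is only the usual first-order model for dynamic power, so treat the output as a ballpark estimate, not a spec.

```python
# Back-of-the-envelope: fitting a naive dual-GPU design into the PCIe budget.
# 376 W and 294 W are the figures quoted in the post above; f*V^2 scaling is
# a first-order approximation for dynamic power.
SLOT, SIX_PIN, EIGHT_PIN = 75, 75, 150
budget = SLOT + SIX_PIN + EIGHT_PIN          # 300 W ceiling per the spec

naive_dual = 376.0                           # two ungimped 5870-class GPUs
target = 294.0                               # what the 5970 actually allows

# If power scaled linearly with clock alone:
clock_scale_linear = target / naive_dual
print(f"linear-in-clock: run at about {clock_scale_linear:.0%} of full clocks")

# If voltage is also dropped roughly in proportion to clock (P ~ f * V^2):
clock_scale_cubed = (target / naive_dual) ** (1 / 3)
print(f"with voltage scaling: roughly {clock_scale_cubed:.0%} of full clocks")
```

Either way, the point stands: a modest clock/voltage cut is enough to pull a ~376W design under the limit, and overclocking simply undoes that cut.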

Clear enough now?
 


It's not uncommon for either company: the FX 5800 was late, NV47... err, G70 was late, R520 was late, R600 was late.

It's not just about DX11; it's also the fastest DX10.1, DX10, DX9 (SM3.0), DX9 (SM2.0), DX8.1... and OpenGL card.
DX11 has no more bearing on Fermi being late than DX10 had on R600 being late, and neither negates the competitor's success at getting its products to market.

And who says it's late? nVidia:

http://www.anandtech.com/video/showdoc.aspx?i=3651

I asked two people at NVIDIA why Fermi is late; NVIDIA's VP of Product Marketing, Ujesh Desai and NVIDIA's VP of GPU Engineering, Jonah Alben. Ujesh responded: because designing GPUs this big is "faqing hard".

It's late, no doubt about that; everyone but the fanbois knows it. :pfff:


 


But it's not more efficient; we've already read that it's a power hog, makes lots of heat, and is big.
 


I guess I have the balls to tell you in pers.... I mean, on the forums. I think it's just your face that does it. Or is that not you in the picture? 😛



.... all of those mean almost nothing solid or quantifiable at this point about efficiency.

It can be as large as the Grand Canyon, require 500 nuclear power plants to run, and make tons of heat, yet for all we know be 99.99% efficient. And as far as we know it COULD be 99.99% efficient; maybe if it weren't, it would melt the inside of your case and all of your components... You have no idea, I have no idea, and neither does anyone else on the web at this point 😛

And when I said "advanced and efficient", I meant efficient at handling data thanks to its new architecture, i.e. faster... not efficient in terms of heat output.
 

Well, he was "told" the spec. I guess I'll take back what I said about not using that chart for comparison, though looking through the thread you posted, I'm seeing different results and numbers. So we'll have to wait until closer to release.
 


He was told the spec, and when he built the system he got almost the same results. I don't see why the tinfoil is needed here. Sure, if he'd come in 10% higher or lower you could ask questions, but he didn't; he was within 2%.


I couldn't agree more. We currently have results for two games, and not even a full range of resolutions. We can't assume Fermi will perform the same percentage above the 5870 in every game, so unless we're talking specifically about FC2 or whatever the other game was, we're just speculating.
 


That doesn't mean it's more efficient. It can be new and still not as efficient. Considering how inefficient the G200 chip was in comparison to the RV770, Fermi has a lot of catching up to do.

Is it 50% bigger than Cypress? Yes it is. Does it have 50% more transistors? Yes it does.

Is it 50% faster than the 5870? No it isn't.

While it has closed the gap from g200 to rv770, it still isn't 'more efficient' according to the Far Cry 2 benchmarks.
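Putting that argument in numbers, as a rough sketch: the 1.5x size/transistor factor is the one claimed just above, and the performance ratio is taken from the single Far Cry 2 run quoted earlier in the thread, so this is one data point rather than a general efficiency verdict.

```python
# The "not 50% faster" argument, quantified from this thread's own figures.
transistor_ratio = 1.5                 # GF100 vs Cypress, per the post above
perf_ratio = 84.05 / 65.84             # GF100 vs lightly OC'd 5870 in Far Cry 2

perf_per_transistor = perf_ratio / transistor_ratio
print(f"Performance ratio:         {perf_ratio:.2f}x")
print(f"Perf-per-transistor ratio: {perf_per_transistor:.2f}x")
# < 1.0 means GF100 extracts less performance per transistor than Cypress
# by this (single-benchmark) metric.
```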
 

Well I'm just cynical when it comes to corporations touting their top line products, especially companies like Nvidia. Can you blame me? :kaola:
 

Fermi was built for GPGPU, so it may be more efficient at that. One result does not indicate how well it will do in other workloads. G200 does (at best) ok against a 5850 in games considering how old it is, but give it some ray tracing to do and Cypress walks all over it.


It's entirely understandable to be cynical considering NVIDIA's past antics, but in this case the evidence just doesn't lend to NVIDIA messing with the results IMO.
 
Well, since this is a PR thing, I'm reminded of those PR "special cases" for SLI'd Fermi we read about a while back.
Any ol' case, it seems, won't work for these cards, and it won't be that easy, unless the companies building such cases side by side with nVidia are just spouting PR where nVidia's PR isn't?

I'm confused now... heheh
 
OK, I just read through the article again and this has been added:

As an update, an NVIDIA spokesperson contacted us and said that 'Well, all I can say is that it was not final and not running final clocks. Final perf will be higher'.

Not final and not running final clocks? That's pretty strange considering the part is supposed to be in production, right?

So the options on that?

1) This spokesperson is lying.
2) Fermi is going for A4, in which case it certainly isn't 'ramping hard' and won't be available in March.
3) They deliberately downclocked the part for some reason (think heat). Even if this is true, I don't see how they can be in production unless they intend to add some extreme cooling solution to all the cards.

Maybe I missed something?
 

4) The cards they were using were early ES units, tarted up for the show?
 

Yes, they can easily BIOS-flash the decided-upon "final" clocks when the individual card makers slap their stickers on.
 

Not if other board components change. A BIOS flash requires that almost nothing else has changed; otherwise it will likely bork the card.
 


That still doesn't make sense. Why would they downclock final-silicon GPUs? I mean, it's possible, I just don't see why Nvidia would show worse results than they had to.
 
IMO the most likely possibility is that it was running the 512-SP, max-clock version.

And it probably will gain some fps between now and release, but that will come down to drivers. Add 10% absolute best case to what you see now, and that might be what we see in March.
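Taking that 10% figure at face value and the Far Cry 2 averages quoted earlier in the thread, a quick sketch of where that would land:

```python
# What "add 10% best case" means against the numbers already in this thread.
gf100_now = 84.05            # Hexus Far Cry 2 average for the GF100 card
hd5970 = 99.79               # Hexus Far Cry 2 average for the HD 5970

best_case = gf100_now * 1.10
print(f"GF100 + 10% from drivers: {best_case:.1f} fps vs HD 5970 at {hd5970:.1f} fps")
# ~92.5 fps -- still short of the dual-GPU card in this particular benchmark.
```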
 

Fire up GPU-Z: under the main model, your BIOS will list a sub-vendor (XFX or whoever), and that's flashed by the card maker. It's also why identical non-overclocked models can differ from one another by 5 or 10 MHz. They are not master-flashed, or at least not in all cases.
 


I think you sort of answered your own question there: they don't have to show the best they have ( :??: ), just something that's good enough to keep people interested.
 