Nvidia Fermi GF100 Benchmarks (GTX470 & GTX480)

http://www.tomshardware.com/reviews/geforce-gtx-480,2585.html


"Crysis is perhaps the closest thing to a synthetic in our real-world suite. After all, it’s two and a half years old. Nevertheless, it’s still one of the most demanding titles we can bring to bear against a modern graphics subsystem. Optimized for DirectX 10, older cards like ATI’s Radeon HD 4870 X2 are still capable of putting up a fight in Crysis.

It should come as no shock that the Radeon HD 5970 clinches a first-place finish in all three resolutions. All three of the Radeon HD 5000-series boards we’re testing demonstrate modest performance hits with anti-aliasing applied, with the exception of the dual-GPU 5970 at 2560x1600, which falls off rapidly.

Nvidia’s new GeForce GTX 480 starts off strong, roughly matching the performance of the company’s GeForce GTX 295, but is slowly passed by the previous-gen flagship. Throughout testing, the GTX 480 does maintain better anti-aliased performance, though. Meanwhile, Nvidia’s GeForce GTX 470 is generally outperformed by the Radeon HD 5850, winning only at 2560x1600 with AA applied (though it’s an unplayable configuration, anyway)"




"We’ve long considered Call of Duty to be a processor-bound title, since its graphics aren’t terribly demanding (similar to Left 4 Dead in that way). However, with a Core i7-980X under the hood, there’s ample room for these cards to breathe a bit.

Nvidia’s GeForce GTX 480 takes an early lead, but drops a position with each successive resolution increase, eventually landing in third place at 2560x1600 behind ATI’s Radeon HD 5970 and its own GeForce GTX 295. Still, that’s an impressive showing in light of the previous metric that might have suggested otherwise. Right out of the gate, GTX 480 looks like more of a contender for AMD's Radeon HD 5970 than the single-GPU 5870.

Perhaps the most compelling performer is the GeForce GTX 470, though, which goes heads-up against the Radeon HD 5870, losing out only at 2560x1600 with and without anti-aliasing turned on.

And while you can’t buy them anymore, it’s interesting to note that anyone running a Radeon HD 4870 X2 is still in very solid shape; the card holds up incredibly well in Call of Duty, right up to 2560x1600."



It becomes evident that the GTX 470 performs maybe 10% better than the 5850 on average, or less, and the GTX 480 performs maybe 10% better than the 5870 on average, or less. Yet the power consumption of a GTX 470 is higher than a 5870's, and the GTX 480 consumes as much power as a 5970.
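To put rough numbers on the efficiency gap, here's a quick perf-per-watt sketch in Python; the relative-performance figures just follow the ~10% estimates above, and the board-power numbers are approximate TDPs assumed for illustration, not measurements from the review:

# Rough perf-per-watt sketch for the point above. The relative-performance
# numbers follow the ~10% estimates in this post; the board-power figures are
# approximate TDPs (assumptions for illustration, not measured results).
cards = {
    # name: (performance relative to HD 5850, approximate board power in watts)
    "HD 5850": (1.00, 151),
    "GTX 470": (1.10, 215),
    "HD 5870": (1.20, 188),   # 5870 placed ~20% over the 5850 for illustration
    "GTX 480": (1.32, 250),   # ~10% over the assumed 5870 figure
}

base_perf, base_watts = cards["HD 5850"]
base_ppw = base_perf / base_watts

for name, (perf, watts) in cards.items():
    ppw = perf / watts
    print(f"{name}: perf/watt = {ppw / base_ppw:.2f}x the HD 5850")

Under those assumptions both Fermi cards land around 0.8x the 5850's perf/watt, while the 5870 stays close to 1.0x.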

The Fermi generation is an improvement on the GTX200 architecture, but compared to the ATI HD 5x00 series, it seems like a boat load of fail... =/



------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Original Topic:

http://www.hexus.net/content/item.php?item=21996


The benchmark is in Far Cry 2, which is an Nvidia-favoring game.


==Nvidia GTX285==

...What you see here is that the built-in DX10 benchmark was run at 1,920x1,200 at the ultra-high-quality preset and with 4x AA. The GeForce GTX 285 returns an average frame-rate of 50.32fps with a maximum of 73.13fps and minimum of 38.4fps. In short, it provides playable settings with lots of eye candy.

==Nvidia Fermi GF100==
Take a closer look at the picture and you will be able to confirm that the settings are the same as the GTX 285's. Here, though, the Fermi card returns an average frame-rate of 84.05fps with a maximum of 126.20fps and a minimum of 64.6fps. The minimum frame-rate is higher than the GTX 285's average, and the 67 per cent increase in average frame-rate is significant...

==Lightly Overclocked ATI 5870==
The results show that, with the same settings, the card scores an average frame-rate of 65.84fps with a maximum of 136.47fps (we can kind of ignore this as it's the first frame) and a minimum of 40.40fps - rising to 48.40fps on the highest of three runs.

==5970==
Average frame-rate increases to 99.79fps with the dual-GPU card, beating out Fermi handily. Maximum frame-rate is 133.52fps and minimum is 76.42fps. It's hard to beat the sheer grunt of AMD's finest, clearly.

Even after taking Far Cry 2's Nvidia bias into account, the results are not too shabby... in line with what we've been expecting - that the GF100 is faster than the 5870. It will most likely be faster in other games too, but by a smaller margin... Now the only question is how much it will cost...
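For reference, here's a quick calculation of the relative gaps implied by the average frame-rates HEXUS quotes above (nothing assumed beyond those numbers):

# Relative gaps computed from the average frame-rates quoted above (HEXUS,
# Far Cry 2 built-in DX10 benchmark, 1,920x1,200, ultra-high preset, 4x AA).
avg_fps = {
    "GTX 285":       50.32,
    "GF100 (Fermi)": 84.05,
    "HD 5870 (OC)":  65.84,
    "HD 5970":       99.79,
}

def pct_faster(a, b):
    """How much faster card a is than card b, in percent."""
    return (avg_fps[a] / avg_fps[b] - 1) * 100

print(f"GF100 vs GTX 285:      +{pct_faster('GF100 (Fermi)', 'GTX 285'):.0f}%")      # ~+67%
print(f"GF100 vs HD 5870 (OC): +{pct_faster('GF100 (Fermi)', 'HD 5870 (OC)'):.0f}%") # ~+28%
print(f"HD 5970 vs GF100:      +{pct_faster('HD 5970', 'GF100 (Fermi)'):.0f}%")      # ~+19%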
 
Another thing I notice is that they have the physics set to very high, which would further tilt the results toward the Nvidia-based card via the Ageia PhysX engine - plus, who knows whether Nvidia had a separate PhysX card installed as well to skew the result. I'll wait for some real benches to see what the real performance winds up being!!
 
Looks about right.

One test is hardly conclusive, but I think it's probably going to be about right: better performance than a 5870, though probably not enough to justify the extra cost (fair enough, we don't know the cost yet, but I'm sure it will be much higher).

Given that Fermi is also unlikely to OC well (heat output), it doesn't look so great right now.

We'll see, though.
 
I don't think it can be, though. Given the delays, die size, and timing, they won't be able to price it competitively with ATI.

Especially since ATI will almost certainly lower the price of the 58xx cards when Fermi releases.

That's exactly why I don't see Fermi being successful (relative to the 5870, at least).
 


Please show us how you know these benchmarks were done with the GTX 360. From what I know, nVidia doesn't have a naming scheme yet.
 
I never doubted Fermi would be faster than the 5870. However, the question is not whether, but BY HOW MUCH.

Given the size and the price it will launch at (probably around $600, if not more), it should be at least 50% faster, or near the 5970, to be able to sell at that price. If it's only 20% faster, then it's a loss, as AMD will come out with an updated Cypress in the May-June timeframe (could be even earlier) and that will be the 285/4890 situation all over again. Nvidia will have about 1-2 months to sell the card, and then it will be back in non-profit territory again.

I really hope Fermi is 50% faster than the 5870 (on average, I mean). That would probably drive the price of the 5870 to $350 and the 5970 to probably $500 - that is, if nVidia can price Fermi at $500, which is highly doubtful.
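Just to sketch the value argument in rough numbers - all the prices and performance deltas below are the hypothetical figures from this post (plus a dual-GPU scaling guess for the 5970), not real listings:

# Rough dollars-per-performance comparison using the hypothetical prices above.
# Performance is expressed relative to an HD 5870 = 1.0 baseline; the 5970's
# scaling factor is a rough guess, everything else follows the post.
scenarios = [
    # (name, assumed price in USD, assumed performance vs HD 5870)
    ("HD 5870 after a price cut",  350, 1.00),
    ("HD 5970 after a price cut",  500, 1.60),  # rough dual-GPU scaling guess
    ("Fermi, optimistic (+50%)",   500, 1.50),
    ("Fermi, pessimistic (+20%)",  600, 1.20),
]

for name, price, perf in scenarios:
    print(f"{name}: ${price / perf:.0f} per unit of 5870-level performance")

In the pessimistic case Fermi would cost about $500 per unit of 5870-level performance versus roughly $310-$350 for the ATI cards, which is the "loss" scenario described above.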
 
Nvidia aimed for the 5970 but fell quite a bit short. If 40% faster in Far Cry 2 is the best it can do (and surely Nvidia would show us the best?), then that is a long way short of the 5970.

It might catch it in some TWIMTBP games and games with heavy tessellation (assuming the tessellation is as good as claimed), but I think it will fall halfway between a 5870 and a 5970 in most games. If that ends up true, it's going to be a hard sell vs. the 5970 at the same price.

Nvidia might just decide that being fastest in the 'ultra enthusiast' top end sli market is good enough this series. 3x GF100's should beat 2x5970's...maybe, and if you have a nearby nuclear power plant.
 
Look, we know nothing from this. The best new stuff is the tessellation hardware, and its scalability is still being debated. We simply don't know enough.
It's a totally new arch, a new approach, doing double the triangle setup (actually 4x, but halved because the setup hardware runs at only half the hot clock). We have no idea how it'll perform in bungholiomarks (synthetic benchmarks) compared to the real world. I point to the past, and to PhysX as well.
Can't compare it to last gen, can't compare it to ATI; we know very little more than we did before, other than that maybe it'll perform like most have guessed it would anyway.
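For what it's worth, the setup-rate arithmetic that post leans on works out like this (taking its own figures at face value; none of this was confirmed spec at the time):

# Effective triangle-setup rate implied by the post above: four setup units,
# but running at half the hot clock, versus one triangle per clock last gen.
# These figures are the post's own assumptions, not confirmed specs.
setup_units = 4
clock_factor = 0.5      # setup hardware at half the hot clock
last_gen_rate = 1.0     # one triangle per clock, as the post implies

effective_rate = setup_units * clock_factor
print(f"Effective setup rate: {effective_rate:.0f} triangles/clock "
      f"({effective_rate / last_gen_rate:.0f}x last gen)")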
 
I actually expect a $350-$400 price point. Based on how they've redesigned the chip (per the latest AnandTech article), it should also be far cheaper to make lower-end parts by simply cutting the excess hardware, so I'm just as interested in the 360GT/GTX vs the 5850/5770...
 
Well, I'm thinking the 360 will be the full chip, just neutered down. I expect there to be plenty of them as well, because the top cards will be hard to come by, if the rumors are true. Unlike last gen, where the yields of their non-GPU (GPGPU) cards didn't eat into the highest-end part, this gen the highest-end chips have to serve both the first and second iterations of their GPGPU cards: the GPGPU high end is 448, not 512, which points to how hard a full chip is going to be, and everything from 448 on up goes to both the high-end and the second GPGPU card, leaving even fewer chips for GPU use only.
In other words, the 512 cards, for GPU use, will be in short supply.
 