Nvidia GTX 480 3-Way SLI Tested Against Radeons

Status
Not open for further replies.

dunuck

Distinguished
Apr 15, 2010
8
0
18,510
[citation][nom]spoofedpacket[/nom]Why would he even be worried about the $50/ea price difference in cards or a motherboard if looking at such a solution to begin with? Most of these comments about price hit me like someone saying "I'd buy that Ferrari but I can't afford to change the oil, so I don't know what to do!"[/citation]
Exactly!
 

jfem

Distinguished
Dec 20, 2009
288
0
18,790
Nvidia still has a chance to win this generation if they do a good job on their lower-end cards, which have a much larger market.
 
I got a completely different impression reading the THG summary than I did reading the original article. I think the original article summed it up nicely: triple SLI is only good for bragging rights... except, of course, for those who will claim they can see a difference between 200 and 220 fps. I didn't see anything to change what we already know... the 4xx rocks on DX11 and high settings, and the 5870 is surely the way to go if you're going to be replaying your DX10 games.
 

rubberjohnson

Distinguished
Aug 28, 2006
68
0
18,630
@dunuck

If you are one of the 0.0001% of people who use their GPU for non-gaming purposes, then go ahead and buy a Fermi. The problem is, it's these non-gaming functions that have got nvidia into the predicament it's currently in; they aren't focused on the area that MOST of Tom's readers care about - gaming performance!

I'm sorry, but when people look at building a new rig they want as many FPS as possible for the lowest $$$. Currently, where I live, a HD5970 is roughly the same price as a GTX480, and 2 x HD5850 aren't much more than a GTX470! They may be price gouged at the moment, but that's still very poor bang for your buck, without even looking at the total cost of powering and cooling the things.

Stop banging on about PhysX, it's dead - BFBC2 does the most amazing physics just fine on the CPU without nvidia's proprietary BS. Sure, it might slow down when intense stuff is going on, but PhysX does that too! LOL !!!

That's why the ATI fanboys are bragging - it's justified!
 

rubberjohnson

Distinguished
Aug 28, 2006
68
0
18,630
P.S. Did you guys read the conclusion about those high-end SLI setups? They needed an i7 980X six-core to get the best out of them, whereas the crossfire setups could make do with an OC'd quad-core i5 750.

From the article (never mind the dodgy Google translation): "On average, three GeForce GTX 480 cards give you the fastest gaming system. Please note: you need a fast processor. The 6-core Core i7 980X was not a bottleneck in our benchmarks, whereas a quad-core i7 965 was. If you're willing to spend more than 1500 euros on cards, you won't mind spending 1000 euros on a processor..."
 

ryandtw

Distinguished
Apr 20, 2010
2
0
18,510
Okay, that's even worse! I think NVIDIA really has to rework this - maybe Fermi could move to 28nm, but that's not coming just yet.

Eventually NVIDIA will resolve these issues by implementing technologies that lower temperatures, reduce power consumption, run better than AMD's ATI Radeon HD 5000 series, and maybe come in at a lower cost...

But this is particularly targeted at the really extreme gamers, and I'm not really like that. I've got an Intel Core i7-powered custom-built computer with an NVIDIA GeForce GTS 250, and I'm playing "America's Army 3" (a free game developed by the US Army that can be obtained on Steam) on a 1280x1024 monitor on Windows 7 at a blazing-fast 60fps with little or no problem, even online. Turning "Resident Evil 5" or the like up to the highest detail lowers the framerates, but AA3 seems to be running optimally.
 
G

Guest

Guest
How can you say NVidia fails? Both ATI and Nvidia have been innovating for years so you can make your lame crisis jokes and enjoy games well beyond the ol' Atari 2600. The hardware they produce is amazing in its design and advances.

The comments people make over a graph are just plain sad. The fact that ATI and Nvidia strive to be the best is good news for us consumers. Sometimes NVidia wears the crown, sometimes ATI. It's all good even if you think it's bad.
 

dunuck

Distinguished
Apr 15, 2010
8
0
18,510
@ RubberJohnson
Actually CUDA has a much larger user base than you imagine, but I understand your point - gaming is what Tom's readers care about, and we'll get to that. But even if you are not into GPU computing, it's nice to know that your $400/500 GPU (which can be the most expensive part of your rig) is also capable of more than gaming, like accelerating Photoshop work, editing/encoding videos, or whatever application takes advantage of parallel computing.


OK, let's talk about gaming, which is what you care about.

The GTX 480 costs only $100 more than the 5870, yet it performs on average 20% faster and in some games/cases even 40% faster. The GTX 480 handles anti-aliasing far better than any ATI solution; in some games with 8/16xAA it performs on par with or slightly better than the 5970 (which is a dual-GPU card and costs $200 more, btw).

- The GTX 480 sits somewhere between the 5870 and 5970; like I said before, it performs 20% faster overall and in some games/cases even 40% faster. It doesn't fall far behind the 5970 (dual GPU, $200 more) - on average 5-15% behind, and in some cases it even matches it.


"2 x HD5850 aren't much more than a GTX470!"

WOW, just wow, you really are a fanboy, and what's worse, you don't do your homework:

The GTX 470 is selling for around $350, and the 5850 is selling for $300-320.

So, a GTX 470 clearly outperforms the 5850 in most tests, and it performs near the 5870 - in some cases it gets very close to or matches it. For argument's sake, the GTX 470 costs only $50 more than the 5850, yet it outperforms it and gets very close to the 5870. Don't forget the GTX 470 scales better in dual SLI and has better performance with DX11 tessellation and 8/16xAA.

CPU bottleneck

That only happened when the THIRD GTX 480 was added, but again, we don't think anyone who is willing to spend $1500 on GPUs is going to have a problem with that ;)

PhysX dying (lol)
So I am supposed to take your word for it because one game runs its own physics on the CPU? And ignore the ever-growing list of games that utilize PhysX? Do you need me to link you to the nvidia PhysX page?

Don't get me wrong, it is great that devs utilize the power of quad-core CPUs - it was about time! BUT for massively parallel calculations such as physics, the GPU will always do better. BTW, did you know nvidia offered ATI PhysX support, but ATI refused - over royalties, I guess...


Conclusion

If you are in it for the BEST performance for the money in a multi-GPU combo, the GTX 480 is the way to go.

It scales far better than the 5870 and deeply outperforms it, it performs very close to quad 5970 (which scales terribly and costs $400 more), and in some cases it even performs slightly better with 8/16xAA added.

Don't forget that, as we all know, Fermi has superb DX11 tessellation support, and its performance will only get better as games utilize it more heavily and as the drivers mature - they are still in a beta stage.
 

rubberjohnson

Distinguished
Aug 28, 2006
68
0
18,630
...seems as though there is a function to add a URL to your post, but it doesn't work... put a www at the beginning of this: hexus.net/content/item.php?item=24061&page=1
 

knowom

Distinguished
Jan 28, 2006
782
0
18,990
[citation][nom]nottheking[/nom]Similarly, extra features are invented by them, such as EyeFinity.[/citation] Yeah, a very inventive Matrox/SoftTH rip-off. Triple-monitor setups are niche for a reason: they're expensive and unattractive due to bezels.

Really, it's nothing but a temporary solution until monitor makers come up with a way to make bigger, higher-resolution monitors more affordable.
 

ryandtw

Distinguished
Apr 20, 2010
2
0
18,510
[citation][nom]dunuck[/nom]CPU bottleneck
That only happened when the THIRD GTX 480 was added, but again we dont think that anyone that is willing to spend 1500$ on GPUs is going to have a problem with that[/citation]

Maybe it's all because it's X58, and the thing is that whenever three GPUs are in operation, it runs in x16/x8/x8 mode - yes, we've got NForce 200 chips that multiply the PCI Express lanes...
 

shinmalothar

Distinguished
Apr 20, 2010
30
0
18,530
In the UK it's ~£310-£350 for a 5870 and ~£450 for a GTX480, which is apparently a $215 difference at the exchange rate. I'd consider that to be a lot of money - none of this $50 nonsense some have been spouting.
 

nottheking

Distinguished
Jan 5, 2006
1,456
0
19,310

"Big trouble?" You act like AMD can't make a refresh itself. There's also a bit of a problem with relying on a die shrink... 40nm is still pretty new, and that's what Fermi's based on. As I'd mentioned in prior GF100 articles, we're looking at a MINIMUM of 6 months before a die shrink can happen. So with the card just freshly out, we're stuck waiting until November, if not later. Given that the main flagship 5870 came out in September, that's already a 6-7 month gap; by the time a refreshed Fermi comes out, it'll have to face at LEAST a refreshed RV800, if not RV900, which is still implied to be due out by the end of this year.

In that case, I have my doubts that AMD would have much to fear from a die-shrunk GF100; who'd care about a 512-core GF100 running at a higher clock rate, when AMD will have their own 5890/5990 in return, or a 3,200-SP 6870 and 6,400-SP 6970? And it's not like nVidia can release a refresh of real consequence BEFORE the die shrink, since yields are so abysmally low, and they're ALREADY having high clock rate misses.

To be honest, nVidia's problem is that they should've scrapped Fermi months ago and come up with a whole new design. Instead, they wound up painting themselves into a corner here.


That's actually a flawed set of analogies. For a lot of the comments on prices here, it's more like saying "I know I can afford it, but why would I spend $1,000,000 US on an Enzo Ferrari when I can spend $650k on an SSC Ultimate Aero, which is faster?" Of course, Ferrari has more fans, in spite of their car not being the fastest.


First off, it's not a true rip-off; EyeFinity is not just "more than 2 monitors per card," as that's something nVidia has done with their Quadro NVS cards for years. Rather, it's the prospect of arbitrarily mapping monitors to different parts of the same display surface, specifically for gaming; while Matrox was able to do this for simple desktop tasks for years, they never really touched gaming with it. This is truly something great.

To be honest, it's INCREDIBLY doubtful that huge, many-megapixel displays will ever be "affordable," or even really practical. Screen prices climb far faster than linearly as panels scale up in size; this is because the panels have to be monolithic, and they suffer from size-related yield problems in a fashion not entirely unlike GPU dies. This is further compounded by the issue of market size, which affects costs through economy of scale: even 2560x1600 monitors are bought relatively often, while 1920x1200/1080 ones are very common, making them far cheaper, even in sets of three, than what even a modest 3840x2160 monitor could ever hope to be.

The other part is that it'd still use a bunch of monitor cables. Even DisplayPort 1.2, the bleeding edge of monitor connection technology, is limited to around 17.2 gigabits per second. If you're working at a 60Hz refresh rate and 24-bit color, that means a cap of ~12 megapixels per cable. So while three-monitor EyeFinity has been shown to hit well over 12 megapixels, doing the same with a single monitor would STILL require a complex multi-cable solution, keeping said monitor expensive.
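The ~12 megapixel figure is just the raw link bandwidth divided by the bits each frame needs per pixel; a quick back-of-the-envelope sketch in Python (assuming DP 1.2's ~17.2 Gbps effective data rate, 60Hz, 24-bit color):

```python
# Back-of-the-envelope: max pixels one DisplayPort 1.2 cable can drive.
# Assumed figures: ~17.2 Gbps effective bandwidth, 60 Hz refresh, 24-bit color.
link_bps = 17.2e9        # effective DisplayPort 1.2 data rate, bits per second
refresh_hz = 60          # frames per second
bits_per_pixel = 24      # 8 bits per RGB channel

# Each pixel costs (bits_per_pixel * refresh_hz) bits every second,
# so divide the link rate by that to get the pixel budget.
max_pixels = link_bps / (refresh_hz * bits_per_pixel)
print(f"{max_pixels / 1e6:.1f} megapixels per cable")  # ~11.9
```

Three 2560x1600 panels in Eyefinity come to about 12.3 megapixels, which is exactly why that setup already spills past what a single cable could carry.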
 
First, I don't know what's fair about comparing essentially four GPUs (dual 5970s) to three cards (SLI). Second, for 99% of people out there, two-way SLI and two single cards in Crossfire offer the best bang for the buck and value for all but the most insane applications that 99% of us can't afford (like three 30" 2560x1600 or six 24" 1920x1080 monitor setups).

Third and most important, the GTX 480 in SLI beats the 5870 in Crossfire in every single one of these benchmarks at both resolution settings, by a combined average (including 3D Mark) of 12% at 1920x1080 and 19% at 2560x1600.

Now, considering the GTX 480 will cost $50-$100 more each than the 5870 (with some exceptions like the ASUS 5870 EYEFINITY 6/6S at $550), soon-to-be consumers like me will have to look at each game benchmark individually to choose between the two cards. That is, whenever nVidia ramps up production enough for us to buy one.
 