GTX480 / GTX470 Reviews and Discussion



Oh yes there were. You should've seen all the "only 10% more?" comments with big question marks, followed by "pass."

Thanks for your concern, I appreciate it, but I will just take it back if it doesn't suit my fancy. My friend will just put it in a refurb and sell it.

The only thing that upsets me is that I don't want people's hate for a company getting involved. The 285 refreshes were laughed at, yet they still pulled through. We saw people with 280s say there was no point, yet later they had 285s in their sig.

So this is why I'm saying to wait and see what happens. Users don't have these cards yet, and seeing some from BFG, EVGA, XFX, etc. should help us make a better judgement.

I'll have a review of one and post my opinions. I will not compare it to my 5870 in the first review; I'll write it as though I have no idea what other cards do. I'll do the same with the 5870 and post the results individually.

 


1. You answered your own question right there: it's not a software optimization, it's hardware. It was more or less accepted well before the launch that the Fermi architecture would have an advantage with tessellation. Why has that suddenly changed? Unlike the R8xx architecture, Fermi is a new ground-up architecture made for DX11 and for extreme processing speed in a GPGPU environment, so it is not surprising that it does better with tessellation than the modified architecture the 5xxx series is running with. It is to ATI's credit that they did not have to design a ground-up architecture to produce the performance that the 5xxx does.

2. You also answered yourself there: anyone who says that nVidia's driver team is better than ATI's, rather than just focused differently, let alone wholly superior, is talking BS. Both ATI and nVidia are equally matched in this department; give me one example of a bad ATI driver since 2008 and I'll give you at least one I've personally experienced from nVidia.

3. Yes, all cards need at least a month to really mature. This is absolutely true and I've been saying it forever; you are right that some people here did criticize ATI for this previously, and they were wrong.
 



I wouldn't troll like that 😉 Don't stoop down to that. This has been a very healthy conversation and I'm liking it. People are actually backing up what they say. Let's not change that.

All cards are dealt.
 


I don't doubt it will improve. Understand, I kind of expect this to be an architecture that relies heavily on optimized drivers to make the most of the PolyMorph engine, but that reminds me very much of the FX5900 vs R9800 battle, where they would have to wait until they could test properly with the latest drivers. That improvement is not an excuse for current performance, though, so the criticism of the GTX480's performance in BFBC2 is equally as valid as the praise for the GTX480's overall 5-15% performance superiority. However, I'm not about to pretend that they aren't the yin and yang of each other and must be respected from both ends.

And even you made a mistake there: the longest they could have had a working Fermi-architecture card is about 3 months, since the A2 spin, from what I've read. They surely couldn't have had the specifications finalized by then, so it would have done the driver team little good.

You don't need final clocks to optimize the drivers, you need finalized hardware (and an A# -> A# spin indicates no dramatic changes). You might get shader or register stalls and such, but the design remains the same, and clocks would only influence performance, not the optimization of the drivers or the utility of the design.

They were able to start optimizing drivers in emulation before they had silicon. Then, when they got silicon in September, they could start applying that theory to hardware and do the first stage of bug hunting; then came A2 and a second phase of bug hunting with working hardware which would be slow, but exactly the same as the final design; and then A3 in January would give them the next GPU to work with. They've had these chips long enough to do the basics.

But even so, it doesn't negate the fact that this is a performance problem: whatever the reason, it's the first exception showing that the GTX480 isn't globally better than its competition and that you will see variety in performance. So instead of simply ignoring and sloughing off the situations where it doesn't 'destroy the competition', figure out why, and whether this is something where each game will need optimized drivers for something as complex as Fermi. And with so many HD5K-series cards out there, will the driver-optimization requirement versus 'built for nVidia' games shift the balance seen in the previous DX10-generation titles?

I'm not saying there won't be driver improvements coming, but I am saying that those BFBC2 numbers are just as valid as any others out there, regardless of the reason behind their shortcomings or the hope (but not proof) that they will improve.

Anywhoo, for now, despite what my negativity toward the theorizing may sound like, I'm not making any final judgements until I can see a much wider spread of tests in more 'unusual' situations, and more than just the PR-approved launch testing from this past weekend.
 


Wow, wow... wow...

Where were you when the GTX 295 came out? I wasn't here, granted, but I know that the GTX 295 was looked down upon by most non-biased rational people.

[image: perfrel.gif]


The GTX 295 was one of the biggest jokes ever: a ~5% performance increase for $100+ more than the 4870 X2 at first, a premium that went down to about $60-80 depending on the deal and back up to $100+ right before the 5xxx-series launch. Only fanboys and those who didn't know better bought the GTX 295 over the 4870 X2; 5% isn't even worth $20, let alone 3-7 times that!
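As a rough back-of-the-envelope sketch of that price-per-performance point (using the approximate premiums and the ~5% gap quoted above, not measured data):

```python
# Rough price-per-performance arithmetic for the GTX 295 vs. 4870 X2 claim above.
# The ~5% gap and the dollar premiums are the approximate figures quoted in this
# thread, not exact prices.
perf_gain_percent = 5  # claimed average performance advantage of the GTX 295

# $20 is the "not even worth it" threshold; $60-$100 is the observed premium range.
for premium in (20, 60, 80, 100):
    print(f"${premium} premium -> ${premium / perf_gain_percent:.0f} per 1% of extra performance")
```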
 


Very much agreed, I'm happy to finally be having a good discussion.
 
Am I the only one who saw the GTX 480 running at the same temps as the 4870 X2, a dual-GPU card on the older 55nm TSMC process with a terrible reference cooler, and doing so while quieter and at a lower fan speed?

That took my breath away for a second, I didn't think it was that bad.
 
A smaller process node doesn't always mean cooler running, you know. Nvidia's 55nm dies were nearly double the size of ATI's.

The 480 is also running a reference cooler, btw. They are about the same strength, though the 480 seems to have better minimum frames. I don't see this as a problem.

The XFX 5870 I have runs at 87°C under load. I don't think 6 degrees will make someone scream. Running the 480s in SLI didn't change the temps, so I think it's fine.
 


Did you think we missed you spamming the same thing 4 posts and 6 minutes earlier?
 


I think there was a bit of a miscommunication here. I was never making excuses for the performance in BFBC2. My point is that there is a range of performance gains you need to expect when you see a release-day review. This is most important when the competition has aged, mature drivers, and a GPU that is 6 months old is at least having its mid-life crisis.

My entire point is that expecting the GTX 4xx series to stay the same against the 5xxx series is ludicrous. It is perfectly reasonable, logical, and extremely likely that we will see a 5-10% performance increase almost across the board against the 5xxx series.

You could very well be right about the drivers with Fermi, but nVidia has the extra money to put people on it, so I'm not worried about that. Besides, they have to; they are basically married to the Fermi architecture, or at least a variation of it, for at least another series (the GTX 5xx, for example).
 
Other reviews show that GTX 480 SLI is roughly at 5970 performance; this will be fixed with a driver update. Anand obviously didn't fake the bench. Another fact: not all websites use the same hardware either; some use 4GHz, some 3.6, some 3.8.

For the cr4zy peeps, 3-way SLI GTX480 --> http://www.maingearforums.com/entry.php?23-So-You-Want-To-Buy-A-GeForce

Sadly bob is telling the truth lol (heat & TDP), but that doesn't mean it doesn't perform great.
 

Negative. You have sites that put Fermi ahead, you have sites that put Fermi equally as fast, and then you have Anand 😉

AFAIK everyone who tested Fermi did it with the same driver, correct? Then the outcomes should have been much closer across all the tests. Yes, some tests will differ from the others because of hardware/clock differences, but putting Fermi on par with the 5830 is just nonsense.

I am waiting on the 470; I will do my OWN bench (BC2) @ 3.8GHz (1680x1050) with the exact same settings that Anand posted. Then I will laugh at the fact that they were way off and show you why I think they are biased. Fair?






 


The 4870 X2 has a die size of 2x 256mm², or 512mm² total; the 5870 has a die size of 334mm²; and the GTX 480 and 470 have a die size of 529mm². The 4870 X2 has about the same combined die area as the GTX 480/470, but its two 256mm² dies are physically separate rather than one piece of silicon. Heat output scales with the size of a single die multiplicatively rather than additively, because of the properties of heat and surface area. This means that the GTX 480 having a die 1.58 times larger than the 5870's does not mean it produces 1.58 times the heat, but actually a bit more.
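As a quick sanity check on the area figures and the 1.58x ratio cited above (the heat-scaling argument itself is the poster's claim, not something this arithmetic proves), a minimal sketch:

```python
# Die-area comparison using the figures quoted above (areas in mm^2).
dies = {
    "4870 X2 (2x RV770)": 2 * 256,
    "HD 5870 (Cypress)": 334,
    "GTX 480/470 (GF100)": 529,
}

baseline = dies["HD 5870 (Cypress)"]
for name, area in dies.items():
    # e.g. GF100: 529 mm^2 -> 1.58x the 5870's die area
    print(f"{name}: {area} mm^2 -> {area / baseline:.2f}x the 5870's die area")
```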
 


http://www.hexus.net/content/item.php?item=24024&page=6
http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/30321-nvidia-geforce-gtx-470-review-14.html
http://www.guru3d.com/article/geforce-gtx-470-480-review/16

All these reviews show basically the same pattern, though I love how even when the 5850 beats the GTX 470 with average FPS, the GTX 470 beats it just as heavily with minimum FPS in a few of those tables, which was shown in the Anandtech article too.
 


Actually it's not the hardware alone. Look at the improvements in the drivers for both the GTX480 and the HD5K: both saw dramatic gains with optimized drivers (which even nVidia mentioned at their demo of the original benchmark). The PolyMorph engine needs direction, and that's where the software comes in; it's like ATI optimizing their setup/dispatcher for their design. There is a lot of pre-staging going on to achieve efficiency.

2. You also answered yourself there: anyone who says that nVidia's driver team is better than ATI's, rather than just focused differently, let alone wholly superior, is talking BS. Both ATI and nVidia are equally matched in this department; give me one example of a bad ATI driver since 2008 and I'll give you at least one I've personally experienced from nVidia.

I'm specifically saying they are equal in that respect, and it is for that reason that, in a title they've had equal time to prepare and optimize for (either in beta or in final), using the drivers as an excuse for one and not the other is ridiculous. The game just came out at the beginning of the month: no release experience for either company, and the same motivation and time to work on performance. Remember, the 10.3a drivers are older than the BFBC2 launch, so pointing driver fingers at that one, versus something where one company has had the title to themselves ( *cough* Batman AA *cough* Lost Planet *cough* ), is a little different.

3. Yes, all cards need at least a month to really mature. This is absolutely true and I've been saying it forever; you are right that some people here did criticize ATI for this previously, and they were wrong.

This is my point about this for both, though: they both get that month-plus, but if the card has a performance hole like that in the PR launch titles, what does it say about games that will never get that driver-team attention? It was an issue for the FX5900 and the HD2900, where if you're not an AAA title you may have sucky performance for a while, while you wait for a driver update.
 
Anand's using an Intel Core i7-920 @ 3.33GHz compared to others using 3.6-4GHz. That could probably limit the GTX 480 SLI setup.

Yep, but it still doesn't beat the 5870's minimums, the margin is ~5fps @ 1920, and it gets identical at higher resolutions.
 


I have a hard time thinking nVidia will neglect the lesser-known games; both ATI and nVidia are much larger companies than nVidia was back then, even though nVidia was dominating in market share at the time. Also, drivers from both companies have been getting much better as time goes by.

And let's be honest, do you really think the GTX 470/480 performing poorly in old games or lesser-known titles would remain a secret for long? That simply isn't an option for nVidia or ATI. nVidia has to make this architecture work for a good bit longer...