GTX480 / GTX470 Reviews and Discussion

Page 82 - Tom's Hardware community


I'm such an ATI fanboy I own a GTX 260-216.

Geez, whatever that nerve of yours is made of, it's surely lacking in humor.

Maybe what I was implying with my post is that Nvidia has another inefficient card up their sleeve, which IMO is not something to cheer about.
 


Well, I can give a few guesses.

1. The GTX 470 and GTX 480 have been out for a while.
2. We're now constantly talking about the GTX 465 and the future GTX 485 instead of the GTX 470 and 480.
3. We don't need another record-beating long thread on the forums. :lol:

There's probably a different reason why random took it off, but truth is, this topic can't stay stickied forever.
 

True, but we do want it. 😗


Great then: fire 'em up.

 


True, some shader clusters were disabled either for QC or marketing reasons (like the R9500), but it's not like they had to increase the voltage/power to get it to work; it's not 'broken' in the way that would explain increased power to 'fix' it.
Also, as it relates to power consumption, your example doesn't matter, because they disable the shaders differently, so the power characteristics are not the same as the GTX 480/485's; it doesn't relate in the way you are proposing it should, as if the GTX 480 were a cut-down GTX 485 still drawing power from the unused parts. The two methods are different, and really the HD card is about the same power consumption as the HD 5850, only slightly higher (a few watts), versus the much higher draw of the HD 5870, which has much higher clocks for both core and memory.

We don't know how dramatically the GTX 485 will react to higher clocks and more shaders, but without a process change it would consume more power, which is one of the main reasons it likely took this long: one or two more spins, maybe even a metal layer change, which would explain the delay and the ability to finally produce enough for a SKU. That's also the latest rumour out there; people are saying it "could be a new chip", but really it's likely just a new spin on the same old chip, or 'Fermi done right'.

Also interesting to note from Fuad is the division of the term Fermi again, leaving it to just the GF100 series: he says mainstream Fermi is canceled and instead we get the GF104, which supports the 'Fermi is ONLY the GF100' position that caused so much trouble previously in the thread when talking about the dual-GPU cards. Regardless of your personal position on it, it supports RealityRush's view, and even if you don't personally put any value in what Fuad says, it still supports that split in the naming convention, if people want to be so anal about it. And considering the lack of an actual dual-GPU product or even a finalized GF104, it's surprising anyone thinks they have solid ground to stand on when attacking him; the large reaction shows he struck a nerve that wasn't even supportable by his detractors.

I don't guarantee that the GF104 or revamped GF100/GTX485 will be one thing or another, but it seems pretty logical to say that any unbalanced number involving a base-7 component and an unbalanced back-end indicates crippling, and that a GTX 485 on the same process will consume more power than an equivalent-die GTX 480. That doesn't mean it will have to be close to 300W: if they have a 'spin fix' it would help lower overall power consumption, not just make the GTX 485 consume less power but make all the derivatives of the new chip consume less power.

As for a new sticky I'll leave that to someone who thinks the rumour threads are productive here. If I thought they would be conducted like the stickies elsewhere then I'd make one, but there is little chance of that. :pfff:

A simple thread will be enough, just like the Evergreen and Fermi rumour threads which reached many hundreds of posts without being stickied.

A sticky like that here simply attracts too many n00bz who aren't competent enough to search for, let alone discuss, rumour threads. :mmmfff:
 
Here's Fudz's take on the Fermis... there won't be a mainstream Fermi, all right... the GTS 250 and the 9800 GT will live to fight another day (my guess)... again...
http://www.fudzilla.com/content/view/19233/1/
 


My GF104 News thread, which had a few paragraphs of speculation and news, didn't get a single post. Mind you, I doubt anyone who looked at it didn't know what I was talking about.
 

Well, post some news! lol

There is a new review at Hardware Canucks (the site is down right now); they retested the GTX 470 against the 5850.
GTX 470, walking tall!
 
Yep, they already stated it earlier this year.

TSMC just F'ed up way too much with this last 40nm generation.

AMD can still fall back to TSMC, but it's not attractive if they can get the HKMG fab working, so IBM or Samsung would be a better fall-back since they share process similarities, unless it's that process that has issues, which doesn't seem to be the case from the test wafers that came out last week.
 
I don't understand. What do you mean?

Which other company? Samsung? Process-wise they should be similar, but one may get a better situation from other considerations that could make one better than the other.
Like when ATi split their production between TSMC and Chartered: you didn't notice a performance difference based on the fab, but based on the other card factors more than GPU factors. Did anyone notice the difference between the TSMC- and IBM-made NV FX cards?

Also, batches of chips differ even at the same fab, let alone at different fabs or from different OEMs, whose voltage regulation can vary now (usually for the better).

However, as I mentioned, Samsung and IBM are using a similar set of standards and platform, so as a general process it should be close to similar, barring QC issues.

TSMC is a bit of a question mark considering their original statements in the fall of 2008 versus their statements last summer, versus the 'all quiet' recently, and that was after rumbles about their 28nm HP process and HKMG, so who knows. They have their 28nm LP as well, but that's not really geared for high-end GPUs, usually discrete entry/low-end parts. nV has publicly said they aren't going to GloFo, but they could still go with someone like Samsung and essentially have the same process as GloFo; if they stay with TSMC it could give different performance, better or worse, depending on who does a better job.

It looks good for getting new chips out quicker (although 32nm production would've provided something this fall/winter rather than next summer), but it still needs to be clean, unlike TSMC's 40nm and 80nm HP, which had a ton of promise and borked first ATi and then nV.

These new chips may or may not 'overclock well', but they should allow a similar design to run faster: an Evergreen or Fermi running at 1.2 GHz is quite possible, or a much bigger chip (transistor-wise, not area-wise) running at 1 GHz. Depending on what they push, they may also consume less power, especially when idle and when regions are 'off', thanks to the HKMG... that is, if all goes somewhat well and not like TSMC's 80/40nm.
 
I meant: would the performance or behavior of the chip change due to a switch in companies (TSMC -> GloFo)? But I guess this answers it: "Also batches of chips at the same fab are different, let alone at different fabs or by different OEMs whose voltage regulation can vary now (usually for the better)."

 
Yeah, the variability from GloFo to Samsung would be less, but from TSMC to the others it would be more, since the IBM/AMD method is gate-first and TSMC's is gate-last, and even right now it's unclear whether TSMC can get their HP HKMG production out.
I suspect they will be close, but one will be shown to have less leakage, and that will mean a better chance at stable OCs... usually, as long as there aren't other issues (like the bonding ones in previous TSMC processes).
Who that'll be we can't tell until the dies are cut and popped onto cards for testing.

EDIT: Also remember nV will be on TSMC and ATi on GloFo, so it may not be the process that's at issue for OCing; it might be the design. Look at the NV30 on IBM: great speeds from the die, but still other issues. Or TSMC's 110nm X800 XL: good die/chip potential ruined by the power constraints placed on it to keep it from needing extra power connectors, etc.

But should be fun watching either way. :sol:
 
Yes... I think this'll be one of the more interesting releases... will nVidia build on Fermi, cutting it down and optimizing, or will they go with another arch? How will ATI's switch to GloFo and the double jump in process size work? All in 2011.
 
I think nV will simply do a shrink of Fermi. It's already very big and somewhat of a 'balanced' allotment; you may see some filling in of die space to maximize the area available, since there will likely be gaps created, so they may add some more features (like Eyefinity etc.), but it's not likely to be a major redesign, likely something more subtle. Fermi is nV's architecture for the next few years; I don't think they have an alternative even on the drawing board. And it's a good architecture: there's nothing wrong with it from a design perspective if you believe their vision of where the future goes. A shrink will help some of their problems, and if TSMC delivers I wouldn't want to change anything other than to simply amplify Fermi while bringing down power and heat, which is a possible benefit of the smaller process (although they may trade it for speed, as ATi and nV seem to do a lot).

NI, however, should be a major change: gate density doubles, so they can fit the same architecture in about 60% of the space, or get about 50% more transistors in the same package (a lot still depends on things that are hard to change, like the memory interface, etc.).
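That shrink arithmetic can be sketched with a toy model. The 20% 'non-scaling' fraction below is my own illustrative assumption (memory interface, I/O and analog blocks barely shrink), not a figure from anyone's roadmap:

```python
# Toy die-shrink arithmetic: logic density doubles at the new node,
# but a fixed share of the die (memory interface, I/O, analog) doesn't shrink.

def shrunk_area(old_area, non_scaling_frac=0.2, density_gain=2.0):
    """Area of the same design after the shrink, in the old units."""
    fixed = old_area * non_scaling_frac            # stays the same size
    logic = old_area * (1 - non_scaling_frac)      # shrinks with density
    return fixed + logic / density_gain

def extra_transistors(non_scaling_frac=0.2, density_gain=2.0):
    """Fractional transistor gain if die area is held constant:
    fixed blocks stay put, the logic area refills at the new density."""
    return non_scaling_frac + (1 - non_scaling_frac) * density_gain - 1

print(shrunk_area(100.0))      # a 100 mm^2 design lands near 60 mm^2
print(extra_transistors())     # up to ~80% more transistors, ideally
```

The more conservative "about 50% more transistors" figure presumably prices in routing and design overheads this toy model ignores.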

I would expect a near doubling of performance in the traditional DX10 apps, and then benefits in other areas where they make cumbersome activities more efficient. However, the uncore is still very much an 'unknown' as it relates to performance vs. implementation. It sounds good and holds a lot of promise, but then again so did the R600. Hopefully SI gives us a lot of insight into what to expect from NI.

I also expect one thing from BOTH ATi & nV.... MORE CACHE/BUFFERS.

There's a design method using the lower layers as massive cache/memory, which would be fan-frickin'-awesome-tastic but extremely unlikely. It would be like providing MBs of eDRAM similar to that found on the R500/Xenos, which would be insanely helpful for HPC, GPGPU and tessellation, would give 4xSSAA+ for free at HD resolutions, and would make VRAM bit-width matter less, so you could keep the cost of wider bit-widths/traces down.
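For a feel of how much eDRAM 'free' SSAA at HD resolutions would actually take, here is a rough sizing sketch; the 32-bit colour plus 32-bit depth per sample is my assumption, not anything from a spec:

```python
# Rough on-die framebuffer sizing for the eDRAM idea above.
# Assumes 4 bytes of colour and 4 bytes of depth per sample (8 total).

def framebuffer_mb(width, height, samples, bytes_per_sample=8):
    """MB of eDRAM needed to hold colour + depth for every sample."""
    return width * height * samples * bytes_per_sample / (1024 ** 2)

print(framebuffer_mb(1920, 1080, 4))   # 1080p with 4xSSAA: ~63 MB
print(framebuffer_mb(1280, 720, 4))    # 720p with 4xAA: ~28 MB
```

That ~28 MB figure for 720p is why Xenos' 10 MB of eDRAM needed tiling, and why 'MBs of eDRAM' for full-resolution 4xSSAA at 1080p is such a tall order.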
 
Well, I'm not sure where would be a good source; I've just been involved in design for a long time, with a ton of friends in the industry (one a former girlfriend who worked for EA), and I come from a family of engineers (and lawyers and pro athletes)...
...but start with the basics on Wiki (although be prepared to forget most of that quickly, considering their level of info), and then check places like EETimes, the IEEE journal SPECTRUM, and other P.Eng journals and such.

PS: HKMG stands for High-K dielectric Metal Gate, which reduces leakage at the gate. The big development around 2003 was going to low-k dielectrics; now it's low-k dielectrics with high-k gates.

A good bit of info from Intel (essentially the boundary-pushing big boy in the fab world); be sure to check the papers at the bottom:
http://www.intel.com/technology/silicon/high-k.htm

Interesting format, but less info from GloFo, although a good visualization of how the layers work:
http://www.globalfoundries.com/eBooks/hkmg/index.html

A two-page PDF if the above doesn't open correctly:
http://www.globalfoundries.com/pdf/GF_HKMG_TrifoldBrochure.pdf

I'll see if I can find a good EM view of a gate with the two materials; I saw a nice one many months ago, but the ones from years ago were for memory, not procs.