I didn't say what it would compete with, but it's my firm belief that the 65nm chip will be next. It may well not turn up for months.
Like I said, how is this different from the always-planned 65nm R600 refresh? That's my point; it's sort of a non-issue.
Fab process is one factor, but it is not the only factor. R600 uses a larger die,
Larger die size would actually be better for heat issues (more surface area to dissipate heat), and the lower temperature would mean lower resistance and give you better power numbers. So its die size actually runs counter to your argument, if anything, just for its own sake. What a larger die usually means is the next item, which is the most important part...
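To put toy numbers on the surface-area point (the die sizes and wattage here are made up for illustration, not real R600/G80 figures):

# Power density is what the cooler actually fights, in W per mm^2.
def power_density(watts, die_mm2):
    return watts / die_mm2

print(power_density(130, 420))  # ~0.31 W/mm^2 on a big die
print(power_density(130, 300))  # ~0.43 W/mm^2 on a smaller die at the same wattage
# Same wattage, cooler silicon on the big die; and since leakage and
# resistance both climb with temperature, that feeds back into power.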
That is the most important factor.
Just as clock speed is only a good indication of performance between chips of the same mArch, process technology only reduces power consumption when compared within the same mArch.
And only if it doesn't cause its own problems. Going from 130nm with low-k dielectric to 110nm in the X800XT -> X800XL did not help as much as expected. The lack of low-k insulation didn't help, and there seem to have been some issues with the increased leakage/noise of the shrink. The X800XL came nowhere near the benefit they expected, so it doesn't even always work within the same mArchitecture.
Yup, it would. G70 and G71 are the same core. Cores these days are not designed at the transistor level; computer programs put together the low-level pieces, and there are often inefficiencies in the design. They managed to shave 30M (approx. 10%) of the transistors off G70, and there is no reason they should not achieve something similar with G80 during a die shrink or core revision.
Actually, the shrink didn't cause fewer transistors (then it wouldn't be a shrink, it would be an error, if it were randomly dropping transistors); it was their redesign that had the huge impact on transistor count. The switch to full 90nm from the 110nm half-node would not, on its own, remove transistors. And while shrinks usually bring benefits, they also increase things like cross-talk, leakage, etc. So it's hard to say which factor alone had the greatest impact, although it's certain that the transistor reduction at least had some impact, because all of the per-transistor properties of the process end up multiplied by the transistor count. So even if they didn't excise some badly leaking portion of the design, and just removed something nearly useless (like FX12 support), the impact would be felt as a factor of everything else.
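Since the count is a multiplier, even a dumb model shows the effect (the per-transistor leakage figure below is completely invented, just to illustrate):

# Static leakage scales with how many transistors are sitting there leaking.
leak_uW = 0.05                  # hypothetical microwatts leaked per transistor
before  = 300e6                 # roughly the G70-class count discussed above
after   = before - 30e6         # minus the ~30M shaved off in the redesign

print(before * leak_uW / 1e6)   # 15.0 W of leakage (toy numbers)
print(after  * leak_uW / 1e6)   # 13.5 W -- the saving tracks the count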
A smaller process also reduces power consumption, but sometimes this reduction is offset by too large a clock speed increase. Still, G70 to G71 went from 430MHz to 650MHz with a 110nm to 90nm shrink (a 51% clock increase) and still significantly reduced its power consumption.
It didn't reduce its power consumption WHILE increasing clocks; you're overstating the effect. It's not significantly less, although you could at least have argued that the diff is due to the 256MB of memory. Most tests show the same as Xbit's detailed numbers.
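To put rough numbers on why a shrink mostly just absorbs a clock jump like that, here's the usual first-order dynamic-power model, P ~ C * V^2 * f (every input below is a guess for illustration, not a measured G70/G71 value):

# First-order CMOS dynamic power: capacitance * voltage^2 * frequency.
def dyn_power(cap, volts, freq_mhz):
    return cap * volts**2 * freq_mhz

g70 = dyn_power(1.00, 1.4, 430)   # normalized C, guessed core voltage
g71 = dyn_power(0.75, 1.3, 650)   # shrink cuts C and allows slightly lower V

print(g71 / g70)   # ~0.98 with these guesses: the 51% clock increase is
                   # roughly cancelled out, not turned into a big saving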
Now, the GTs increased by 50MHz and used slightly less power; that would be a better example for the illustration, but once again there's the functional pipeline reduction plus the additional 30M transistor difference, which is a bigger chunk with fewer active pipes.
Right now the GF8800 uses quite a lot of power at idle & 2D, and the interesting thing will be to see what the R600 does at idle & peak 2D. I doubt there will be much difference, but once we start talking about cards using more power at idle than a GF7900GT uses at load, it becomes a concern for using that PC for surfing/music playing, and for something like SETI (F@H of course benefits from the VPUs, so it's not the same).
I don't think the 65nm shrink will be here any time soon, and it may well be competing with a 65nm R600, but when it does arrive I do believe the power consumption will be less than the 90nm G80's.
Maybe/hopefully, but a few things to consider. First, they'll likely crank the 65nm part a lot, maybe leaving some headroom, but it depends on who comes out first IMO (look at the GF7900 once again as an example: they could've saved much more power at 500MHz, but felt they needed something at 600+ in order to compete with the part out at the time). Second, the memory changes will work against it, with the 512-bit bus and 1GB size (though there's a slight power saving from GDDR4 vs GDDR3).
I'm hoping that, barring a possible 8850GX2, we have seen the most power-hungry GPUs ever in R600 and G80, and that everything else will improve from here.
Well, technically the GX2 or Gemini would still just be two single GPUs, so likely the G80 refresh and the fastest R600 on 80nm will be the two most power-hungry VPUs; a GX2 would just mean a more powerful card, of course. So the way things look, the G80 will be the most power-hungry 90nm chip, and the R600 will be the most power-hungry 80nm chip.
It is said that R700 will feature multiple smaller GPUs; hopefully nVidia will do the same, and power won't be as silly as it is now.
I think you're forgetting why they are moving to multiple smaller GPUs; it's not for power savings. What we'll end up with is, say, 4 chips each at 1/3 of the power of an R600/G80 giving us 150% of the total performance, so the power issue will remain. But they may get more efficient per chip, because instead of trying to push past the point of diminishing returns, they'll find where the core is most efficient, and what we need to do then is put 6 of them on a card instead of trying to OC the 4 cores by 50% while increasing power consumption by 70%.
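A quick back-of-envelope on that scenario (the wattage is a made-up placeholder; only the ratios from above matter):

# Perf-per-watt of one big chip vs. four small ones, per the example above.
big_power = 200.0                    # W, hypothetical R600/G80-class chip
small_power = big_power / 3          # each small chip at 1/3 the power

four_power = 4 * small_power         # ~267 W total: MORE board power, not less
four_perf  = 1.5                     # 150% of the big chip's performance

print(four_power)                    # 266.7 -> the power issue remains
print(1.0 / big_power)               # 0.0050 perf/W for the big chip
print(four_perf / four_power)        # 0.0056 perf/W -> better efficiency per watt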
We'll see if that happens or not; it would be the wise thing to do in some respects, but it introduces more issues like package size, interconnect I/O issues, etc. What such a thing would do is give overclockers insane options. Imagine them shipping 2 GF7900 cores on a single board (not the GX2) at the ~400MHz of the GF7800: then you have access to insane OC potential, but also likely large power issues. The question will be whether the amperage supplied to the cards would be sufficient to handle this; that may end up being a restriction that hard-caps the max without having to do BIOS tricks and such.
We'll see, but I don't think it's something that will concern this generation, and the figures being bandied about for the R600 are getting blown out of proportion, with people still talking about 300W while the power connector layout doesn't match those numbers without the leap of faith that PCIe 2.0 power is REQUIRED for the R600.
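The connector math itself makes the point (I'm assuming the usual spec ceilings here: ~75W from the slot, ~75W per 6-pin plug, ~150W for the 8-pin/PCIe2.0 route):

# Rough board-power ceiling implied by each connector layout.
SLOT, SIX_PIN, EIGHT_PIN = 75, 75, 150   # watts, assumed spec limits

two_six_pin = SLOT + 2 * SIX_PIN         # a slot-plus-two-6-pin layout
pcie2_route = SLOT + SIX_PIN + EIGHT_PIN # the only way to the rumoured figure

print(two_six_pin)   # 225 W -- well short of the 300W being thrown around
print(pcie2_route)   # 300 W -- requires the PCIe2.0/8-pin leap of faith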
I expect the R600 to consume more than the G80, but some of this stuff is getting ridiculous, and most of it IMO is being fostered by the PSU industry.