AMD Pushes R600 Back To May

The retail R600 will likely be within about 50W of the 8800GTX, and likely closer to the 8900GTX. So a similarly competent PC P&C will be more than fine for either solution.

The 8900GTX will be 65nm, and likely use far less power than the 8800GTX, just as was the case with the 7800GTX->7900GTX
 
The retail R600 will likely be within about 50W of the 8800GTX, and likely closer to the 8900GTX. So a similarly competent PC P&C will be more than fine for either solution.

Neither will need more than a solid 600W PSU unless Crossfire is used, or a honkin' CPU setup with a multi-drive RAID array, etc. starts factoring in and drawing too much on the 12V. The other thing to consider is ensuring the rails can handle enough on their own, which is another reason why quality current PSUs are good; older PSUs spread the burden too evenly and didn't supply enough power on a single 12V rail.

However, remember that the hotter a system runs inside the case, the less efficient the PSU becomes as it warms up. Good airflow helps a lot.
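
If you want to put rough numbers behind that 600W figure rather than guess, here's the kind of back-of-the-envelope budget it boils down to (a quick Python sketch; every wattage below is an illustrative assumption, not a measured figure for any particular part):

# Rough power-budget sketch for PSU sizing; all wattages are illustrative
# assumptions, not measured figures for any specific part.
components = {
    "CPU (hot dual/quad core)": 120,
    "GF8800GTX-class graphics card": 185,
    "Motherboard + RAM": 60,
    "Two hard drives": 20,
    "Optical drive, fans, USB": 25,
}

peak_draw = sum(components.values())   # rough worst-case system draw in watts
margin = 1.3                           # ~30% headroom so the PSU never runs flat out
print(f"Estimated peak draw : {peak_draw} W")
print(f"Suggested PSU rating: {peak_draw * margin:.0f} W")
# ~410 W draw -> ~530 W suggested, so a quality ~600 W unit covers a single-GPU
# rig; Crossfire/SLI, a big RAID array or heavy overclocking pushes that up.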

According to what I've read, the R600 will require between 240 and 300 watts. The 8800 GTX requires, what, 175 watts? I would be safe and buy a higher wattage PSU than you actually need, because you've got to think about the future unless you want to upgrade your PSU every time you upgrade your computer. I remember when a PSU upgrade would last for 3 or 4 computer upgrades. Now, the way things are going, it's more like 2 upgrades per PSU upgrade. So if you can afford it, I'd buy a 1kW or 850 watt PSU. Remember, if you buy a 1kW PSU, it doesn't mean that you'll be using 1kW; it just gives you the leeway to expand. You don't want to push your system too hard, because if you don't have enough juice, you could actually permanently damage your components. It's better to be safe than sorry!
 
The retail R600 will likely be within about 50W of the 8800GTX, and likely closer to the 8900GTX. So a similarly competent PC P&C will be more than fine for either solution.

The 8900GTX will be 65nm, and likely use far less power than the 8800GTX, just as was the case with the 7800GTX->7900GTX

Do you have a source for this info? I'd REALLY like to know, because if it's true, that would make me very happy! I don't think that will be the case though, because it usually takes a while to shrink the die down to the next stage, and it was difficult for ATi to go down to 80nm. It makes sense Nvidia would have a hard time with it too, but then to drop down to 65nm? I'd be very impressed if they did that. I hope it's true but I doubt it, unfortunately. I'd believe 65nm with their Fall card, but not the Spring refresh.

Nvidia (at least as of late) has been playing it a little safe....but that's a good thing, because AMD/ATi hasn't, which is why they are usually late with their products. The R600 should have been released in the Fall (and as it stands right now, that might just happen). Basically ATi is trying to release a card a whole generation or half a gen ahead of Nvidia. If the gamble had worked, it would have been huge, but this is twice now that I think ATi has tried and failed to do this. When I say failed, I mean failed to release the card on time. They tried this with the R520 and failed, and now with the R600.

I believe it's a failure because I think the 8900 GTX will be just as fast as or faster than the R600, and that means ATi lost a product cycle while Nvidia picked up many of ATi's high-end customers who were waiting for the R600. I'm one of the people who wanted to get an R600, and now I'm going to get an 8900 GTX because of ATi's extreme tardiness and the ridiculous power draw the R600 needs. If you went by power usage, the R600 should be around twice as fast as the 8800, and I just don't think it will be.
 
The 8900GTX will be 65nm, and likely use far less power than the 8800GTX,

The GF8900GTX will only be 65nm if it's the G90, which means it won't be here until the Xmas shopping season, not anywhere near the R600 launch, and then we'll be considering the R600 65nm refresh too.
So it depends on which rumour you wanna go with.

And if it was just about fab process, then why isn't the 80nm R600 less power hungry than the 90nm GF80, by your argument?

just as was the case with the 7800GTX->7900GTX

That wouldn't also have anything to do with the ~30 million fewer transistors in the GF7900 that weren't being switched on and off, would it?
 
According to what I've read, the R600 will require between 240 and 300 watts.

Yeah, I've been reading that 300W figure a lot recently as well, and from pretty much the same sources that said the G80 would be 300W too. :roll:

The OEM R600 is said to be in the 240W range and the retail R600 in the 220W range; of course, like the G80, we won't know for sure until it launches and gets real-world benchmarked.

The 8800 GTX requires, what, 175 watts?

185W is the official number from nV when approaching maximum usage.

The recent Stream demo of two R600s running in Xfire at 1 TeraFlop (which equates to 800MHz, the supposed stock target) had them running at around 200W each. So what figure do you use? I'll use the 220W and 185W and give you a plus or minus 5W for my statement.
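
For anyone who wants to sanity-check that arithmetic with the figures just quoted:

# Quick check of the "within about 50W" claim using the figures cited above.
r600_retail_estimate = 220    # W, the rumoured retail R600 figure
gf8800gtx_official = 185      # W, nV's official maximum-usage number
tolerance = 5                 # W, the +/- allowed above

print(f"Difference: {r600_retail_estimate - gf8800gtx_official} W (+/- {tolerance} W)")
# 35 W, comfortably inside the "within about 50W" claim.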

I would be safe and buy a higher wattage PSU than you actually need, because you've got to think about the future unless you want to upgrade your PSU every time you upgrade your computer.

Which doesn't make any sense. BUY the PSU when you NEED to get a new one, and then put headroom in for the next build; don't buy one now hoping to need or not need one 2-3 builds from now. If you have to buy now, there's no different consideration for the G80 or R600; if you're buying for the future, why would you stay within a +/- 50W range and not get the 1kW or 800W PC P&C PSU? If you have to build for the future, then neither the G80 nor the R600 is your target; it's the three-years-from-now Octo-Core Quad-SLi/Xfire asymmetric physics rig that you're going to be looking at. On second thought, you might wanna get the one with the built-in flux capacitor. :twisted:

It's better to be safe than sorry!

Sure, but that has nothing to do with the G80 or R600, unless you're building for just this rig, and that's ridiculous considering past trends. A minimum 600W quality PSU would be necessary for any single-GPU, multi-CPU rig going forward. And then for anything SLi/Xfire, the 810W PC P&C PSU starts looking really attractive.

BTW, it was easy for PSUs to handle the load in the past; they had wattage to spare on crap technology/quality. Now we're in the realm where quality matters a lot, and buying generic PSUs is like intentionally killing your PC.
 
The 8900GTX will be 65nm, and likely use far less power than the 8800GTX,

The GF8900GTX will only be 65nm if it's the G90, which means it won't be here until the Xmas shopping season, not anywhere near the R600 launch, and then we'll be considering the R600 65nm refresh too.
So it depends on which rumour you wanna go with.

And if it was just about fab process, then why isn't the 80nm R600 less power hungry than the 90nm GF80, by your argument?

just as was the case with the 7800GTX->7900GTX

That wouldn't also have anything to do with the ~30 million fewer transistors in the GF7900 that weren't being switched on and off, would it?
This is all good to know. The only hard part is holding on until late this year. Your info is always right, even about the fake R600.
 
LOL! Ah yes, the photochopped previous-gen chip (R420 IIRC) made to look like an R600 die, yeah that was a funny one. :mrgreen:

The main thing to remember is that if you feel the need to wait, wait for the games more than the hardware; we can always wait for hardware. The best strategy most of us go by is to get the best card to play the games you want to play. At some point you've gotta pull the trigger and get hardware, so either do it at the start of a new generation, like the GF8800 launch as RobX2 did, or when the games you want to play come along, and then you can decide whether the GF8800GTS, X2900, GF8600 or X2600 is the best fit for your needs and budget. Those decisions have always done well by me.

Think about what people who bought an X800 to play the latest SplinterCell or R6Vegas are thinking, or those who bought an FX5900 to play HL2, etc. Best to wait for the games if that's your focus.

Unfortunately laptops don't have that flexibility until Lasso or something similar arrives. :?
 
A wise man once told me, "Don't skimp on your power supply. It's the heart of your computer." Unfortunately, I didn't heed him and bought a rather generic 550W power supply. It is possible my power supply will do fine with a slightly higher load (due to video card upgrade), but I'll have to cross my fingers. I can always suck it up and get a better one if it doesn't make the grade.

For now, though, I see value in waiting for Nvidia to switch to an 80nm process, which I've heard will reduce its power consumption (and size?). Why suck up additional power when you don't have to? Another thing I've noticed is a large difference between the performance of the 8800GTS vs. the 8800GTX in reviews. I wonder if the main difference is the 96 shaders vs. the 128 shaders, in which case the 96 shaders might become a bottleneck in the near future, and perhaps the 8900GTS will have 128? I don't hesitate to point out that my knowledge is very limited in the realm of video cards.

At this point I'm not sure how long I'll have to wait, though, with the R600 delays and rumors that there might not even be an 8900 series of cards. Gotta be strong! (or rich)
 
Well, supposedly nV will skip 80nm and go straight to 65nm, not bothering with the half-node step. That should mean a bigger increase in efficiency than going from 90nm to 80nm, since the difference between 80nm and 65nm is larger than between 90nm and 80nm, and it offers better options because it's not just an optical shrink. This will also affect AMD's next transition, which is from 80nm to 65nm, and both should find major benefits IMO. For the GF8 series it should be a huge impact, but it also depends on what the new G90 is: is it really just a shrink of the intended refresh, or a truly new design like the G90 was originally supposed to be? Both options are possible based on previous nV activities, and IMO a lot of that depends on what the R600 is when it comes out. Both options could simply be waiting for the green light: if the R600 underwhelms, go with the shrink of the refresh; if the R600 impresses, forget the refresh and simply skip to the G90.
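
To see why the 80nm-to-65nm step is the bigger one: to a first order, die area scales with the square of the feature size. A quick sketch (ideal scaling only, which real shrinks never quite hit):

def ideal_area_scale(old_nm, new_nm):
    # First-order optical scaling: area shrinks with the square of feature size.
    # Real processes never hit this ideal, so treat the results as best cases.
    return (new_nm / old_nm) ** 2

for old, new in [(90, 80), (80, 65), (90, 65)]:
    print(f"{old}nm -> {new}nm: ~{ideal_area_scale(old, new):.0%} of the original die area")
# 90nm -> 80nm: ~79% (the half-node step, largely an optical shrink)
# 80nm -> 65nm: ~66%
# 90nm -> 65nm: ~52% (the full-node jump nV is rumoured to be taking)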

As for the GTX vs GTS, remember that there's also memory bitwidth and speed involved.

IMO, if you're concerned for any reason about buying a graphics card, then the GF8800GTS-320 is the way to go; it gives you the middle ground with less chance of losing value in the short term should you decide to resell and upgrade. Also, it's a competent card with all the nice features you'd expect in future games that the GTX will have; you just may have to scale back the resolution or AA a bit.

The best time to buy the GTX, IMO, was when it first came out; now the GTS-320 is my personal fav recommendation. But there are compelling reasons to do a whole bunch of things, including wait, buy a lesser card for physics potential, or look into the eVGA Step-Up program.

The problem is right now there's lotsa options, but most people including nV are playing the waiting game until the R600 comes out. If you need a card now then there's really not a good reason to wait, however if you're fine with your current card, or don't game enough until new games come out, then waiting makes sense until the time when you do feel you need a new solution.

Just my 2 frames' worth as always.
 
I personally would not be able to settle for the 320MB version of the 8800GTS, especially considering the 640MB version can be found for $50-60 more. I see the value in more texture memory from an artist's standpoint, so I think many games might start tailoring their higher graphics settings to people with 512MB+ cards via more variety in textures and higher-resolution textures. It would be a shame to be bottlenecked this way when the rest of the card will probably manage high quality settings for some time to come.
 
I personally would not be able to settle for the 320MB version of the 8800GTS, especially considering the 640MB version can be found for $50-60 more. I see the value in more texture memory from an artist's standpoint, so I think many games might start tailoring their higher graphics settings to people with 512MB+ cards via more variety in textures and higher-resolution textures. It would be a shame to be bottlenecked this way when the rest of the card will probably manage high quality settings for some time to come.
In the "old" days, a 256MB board was more than enough. Now I see 640MB as barely enough for 1920x1200+.
For 2560x1600, 1GB might not sound so radical as it did not long ago.
 
The 8900GTX will be 65nm, and likely use far less power than the 8800GTX,

The GF8900GTX will only be 65nm if it's the G90, which means it won't be here until the Xmas shopping season, not anywhere near the R600 launch, and then we'll be considering the R600 65nm refresh too.
So it depends on which rumour you wanna go with.

I didn't say what it would compete with, but it's my firm belief that the 65nm chip will be next. It may well not turn up for months.

And if it was just about fab process, then why isn't the 80nm R600 less power hungry than the 90nm GF80, by your argument?

Fab process is one factor; it is not the only factor. R600 uses a larger die, with more transistors. Just as clock speed is only a good indication of performance between the same mArch, process technology only reduces power consumption when compared to the same mArch.

just as was the case with the 7800GTX->7900GTX

That wouldn't also have anything to do with the ~30 million fewer transistors in the GF7900 that weren't being switched on and off, would it?

Yup, it would. G70 and G71 are the same core. Cores these days are not designed at the transistor level; computer programs put together the low-level pieces, and there are often inefficiencies in the design. They managed to shave 30M (approx 10%) of the transistors off G70, and there is no reason they should not achieve something similar with G80 during a die shrink or core revision.

A smaller process also causes less power consumption, but sometimes this reduction is offset by too much of a clock speed increase. Still, G70 to G71 went from 430MHz to 650MHz with a 110nm to 90nm shrink (a 51% clock increase) and still significantly reduced its power consumption.
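
For what it's worth, the first-order relation behind that trade-off is CMOS dynamic power P ≈ C·V²·f. Here's a small sketch where the capacitance and voltage factors are purely illustrative assumptions, not measured G70/G71 numbers:

def relative_dynamic_power(cap_scale, volt_scale, freq_scale):
    # First-order CMOS dynamic power: P ~ C * V^2 * f (ignores leakage).
    return cap_scale * volt_scale ** 2 * freq_scale

# Hypothetical shrink: switched capacitance down to 70%, core voltage down to
# 90%, clock up 51% (430MHz -> 650MHz). These scaling factors are assumptions
# for illustration only.
print(f"~{relative_dynamic_power(0.70, 0.90, 1.51):.2f}x the original dynamic power")
# ~0.86x here; with a smaller voltage drop, or once leakage is counted, the
# clock bump can easily eat the whole saving, which is why the two can nearly cancel out.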


I don't think the 65nm shrink will be here any time soon, and it may well be competing with the 65nm R600, but when it does arrive I do believe the power consumption will be less than the 90nm G80.

I'm hoping that barring a possible 8850GX2, we have seen the most power hungry GPUs ever in R600 and G80, and everything else will improve from here.

It is said that R700 will feature multiple smaller GPUs, and hopefully nVidia will do the same, and power won't be as silly as it is now.
 
I didn't say what it would compete with, but it's my firm belief that the 65nm chip will be next. It may well not turn up for months.

Like I said, how is this different from the always-planned 65nm R600 refresh? That's my point; it's sort of a non-issue.

Fab process is one factor, it is not the only factor. R600 uses a larger die,

A larger die size would actually be better for heat issues (more surface area to dissipate heat) and thus would lower resistance and give you better power numbers. So its die size actually runs counter to your argument, if anything, for its own sake. What a larger die usually means is the next item, which is the most important part...

with more transistors.

That is the most important factor.

Just as clock speed is only a good indication of performance between the same mArch, process technology only reduces power consumption when compared to the same mArch.

And only if it doesn't cause its own problems. Going from 130nm low-k to 110nm in the X800XT->X800XL did not help as much as expected. The lack of low-k insulation didn't help, and there seem to have been some issues with the increased leakage/noise of the shrink. The X800XL was nowhere near the benefit they expected, so it doesn't even always work within the same mArchitecture.

Yup, it would. G70 and G71 are the same core. Cores these days are not designed at the transistor level; computer programs put together the low-level pieces, and there are often inefficiencies in the design. They managed to shave 30M (approx 10%) of the transistors off G70, and there is no reason they should not achieve something similar with G80 during a die shrink or core revision.

Actually, the shrink didn't cause fewer transistors (then it wouldn't be a shrink, it would be an error if it were randomly dropping transistors); it was their redesign that had a huge impact on transistor count. The switch to full 90nm from the 110nm half-node would not alone remove transistors. And while shrinks usually result in benefits, they also increase things like cross-talk and leakage, etc. So it's hard to say which alone had the greatest impact, although it's certain that the transistors alone had at least some impact, because all of the factors of the process end up multiplied by the transistor count. So even if they didn't excise a bad portion of the design that leaked like crazy or something, and just removed something nearly useless (like FX12 support), the impact would be felt as a factor of everything else.

A smaller process also causes less power consumption, but sometimes this reduction is offset by too much of a clock speed increase. Still, G70 to G71 went from 430MHz to 650MHz with a 110nm to 90nm shrink (a 51% clock increase) and still significantly reduced its power consumption.

It didn't reduce its power consumption WHILE increasing clocks; you're overstating the effect. It's not significantly less, although at least you could've argued that the difference is due to the 256MB of memory. Most tests show the same as Xbit's detailed numbers:

[Xbit Labs power consumption chart: power.gif]


Now, the GTs increased by 50MHz and used slightly less power; that would be a better example for the illustration, but once again there's the functional pipeline reduction plus the additional 30M transistor difference, which is a bigger chunk with fewer active pipes.

Right now the GF8800 uses quite a lot of power at idle & 2D, and the interesting thing will be to see what the R600 does at idle & peak 2D. I doubt there will be much difference, but once we start talking about cards using more at idle than the GF7900GT uses at load, then it becomes a concern for using this PC for surfing/music playing, and something like SETI (F@H of course benefits from the VPUs, so not the same).

I don't think the 65nm shrink will be here any time soon, and it may well be competing with the 65nm R600, but when it does arrive I do believe the power consumption will be less than the 90nm G80.

Maybe / hopefully, but a few things to consider. First, they'll likely crank the 65nm part a lot, maybe leave some headroom, but it depends on who comes out first IMO (look at the GF7900 once again as an example: they could've saved much more power at 500MHz, but felt they needed something at 600+ in order to compete with the part at the time). Second, there will be memory changes that work against it, the 512-bit bus and 1GB size (but a slight power saving from GDDR4 vs GDDR3).

I'm hoping that barring a possible 8850GX2, we have seen the most power hungry GPUs ever in R600 and G80, and everything else will improve from here.

Well, technically the GX2 or Gemini would still be just two single GPUs, so likely the G80 refresh and the fastest R600 on 80nm will be the two most power-hungry VPUs; a GX2 would just mean a more powerful card, of course. So the way things look, the G80 will be the most power-hungry 90nm part, and the R600 will be the most power-hungry 80nm part.

It is said that R700 will feature multiple smaller GPUs, and hopefully nVidia will do the same, and power won't be as silly as it is now.

I think you're forgetting why they are moving to multiple smaller GPUs; it's not for power savings. What we'll end up with is 4 chips at 1/3 of the power of an R600/G80 giving us, let's say, 150% of the total performance, so the power issues will remain. But they may get more efficient per chip, because instead of trying to push past the point of diminishing returns, they'll find where the core is most efficient, and what we need to do is have 6 of them on a card instead of trying to OC the 4 cores by 50% while increasing power consumption by 70%.

We'll see if that happens or not; it would be the wise thing to do in some respects, but it introduces more issues like package size, interconnect I/O issues, etc. What such a thing would do is give overclockers insane options. Imagine them shipping 2 GF7900 cores on a single board (not the GX2) at the ~400MHz of the GF7800; then you have access to insane OC potential, but also likely large power issues. The question will be whether the amperage supplied to the cards would be sufficient to handle this; that may end up being a restriction that hard-caps the max without having to do BIOS tricks and such.

We'll see, but I don't think it's something that will concern this generation, and the figures being bandied about for the R600 are getting blown out of proportion, with people still talking about 300W while the power connector layout doesn't match those numbers without the leap of faith that PCIe 2.0 is REQUIRED for the R600.

I expect the R600 to consume more than the G80, but some of this stuff is getting ridiculous, and most of it IMO is being fostered by the PSU industry.
 
I'm not saying the G80 is in any way better than the R600 (except in as much as I can actually get my paws on it), and while I'd like a 100watt PC, if R600 is out anytime soon and more than about 10% more powerful, I'll buy one even if it uses 1,000 Watts.

I was expecting to see a power decrease on the die shrink, both from G80 65nm and R600 65nm, but you are right, this could be offset by pushing the clocks too high on the refresh.

I stand corrected on the 7900GTX/7800GTX power requirements. I must have had the 7800GTX 512 power reqs in mind when I was thinking of this, which reduces the clockspeed delta between the two significantly :/ silly me.

I know the die shrink itself didn't cause the reduced transistor count, but a die shrink invariably means new masks, and therefore, if they have been working on a core redesign/optimisation, a die shrink is the logical and cost-effective time to introduce it. Perhaps I should have phrased myself more clearly :)

You are correct that a larger die is easier to cool, as the W/mm² is usually lower, but both ATI and nVidia use TSMC's process and fabs, and given that ATI is using 80nm for R600 and nVidia is using 90nm for G80, the larger die of the R600 is indicative of a significantly higher transistor count. You are correct that the larger die size itself isn't leading to more heat, but the die is larger because it holds more transistors. Still, even though a larger die is easier to cool, we were talking about power consumption, not ease of cooling 😛

Finally, on the subject of the GX2, I said GPU where I meant to say gfx card; you are of course correct that the individual cores of an "8850GX2" would not use more power. Indeed, they would likely be clocked lower, possibly have a narrower memory interface, and use less power per GPU. The total power is getting silly, however, especially when people try to run Quad SLi with those things :/

My thoughts on R700 are that when you have multiple small dies, you have a higher yield (as a defect requires you to discard less mm² of silicon, and you have less wastage on the edges of the wafer), and therefore the dies themselves have a lower $/mm² of silicon. You can therefore afford to reduce the clock of each die by say 20%, which will reduce the power consumption by say 50%, and just have more dies. At the moment if you reduce the clock of the card to control power, you detract from the overall performance (of course), but when the architecture is scalable between small cores, you can simply add another :) This also further improves yields, as you are lowering the clockspeed needed for a die to pass testing.
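
To put rough numbers on that yield argument, here's a quick sketch using the simple Poisson yield model (the defect density and die sizes are made up purely for illustration):

import math

defects_per_mm2 = 0.002     # made-up defect density, purely illustrative

def die_yield(area_mm2):
    # Simple Poisson yield model: yield = exp(-defect_density * area)
    return math.exp(-defects_per_mm2 * area_mm2)

big_die = 420     # mm^2, one large monolithic GPU (illustrative)
small_die = 140   # mm^2, one of several smaller dies (illustrative)

print(f"Large die : {die_yield(big_die):.0%} of candidates are good")
print(f"Small die : {die_yield(small_die):.0%} of candidates are good")
# ~43% vs ~76% per die: three small dies use roughly the same silicon, but each
# is far more likely to work, and a defect only costs you one small die.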

Therefore, I'm hoping the GPU companies will listen and give us less power consuming cards. If they follow the above they might even give us GPUs with lots of overclocking headroom to play with and increase power consumption through the roof again 😀
 
while I'd like a 100watt PC, if R600 is out anytime soon and more than about 10% more powerful, I'll buy one even if it uses 1,000 Watts.

I think that's the sentiment of most people in that market. Power consumption concerns IMO are for the mid-range and low-end set.

You are correct that the larger die size itself isn't leading to more heat, but the die is larger because it holds more transistors. Still, even though a larger die is easier to cool, we were talking about power consumption, not ease of cooling 😛

Yeah, but I'm considering it from the view that increased heat increases resistance and leakage, requires more power, and becomes somewhat of a vicious cycle after a point. That was the only reason for including the heat issue.

My thoughts on R700 are that when you have multiple small dies, you have a higher yield (as a defect requires you to discard less mm² of silicon, and you have less wastage on the edges of the wafer), and therefore the dies themselves have a lower $/mm² of silicon.

Yep, that's always been my opinion on the future of multi-vpus versus single large ones.

You can therefore afford to reduce the clock of each die by say 20%, which will reduce the power consumption by say 50%, and just have more dies. At the moment if you reduce the clock of the card to control power, you detract from the overall performance (of course), but when the architecture is scalable between small cores, you can simply add another :) This also further improves yields, as you are lowering the clockspeed needed for a die to pass testing.

I agree, but I still wonder about the I/O issues; there's no way that 4 (32-shader) cores will communicate with each other as well as a single core whose 128 shaders communicate within the die.

Therefore, I'm hoping the GPU companies will listen and give us less power consuming cards. If they follow the above they might even give us GPUs with lots of overclocking headroom to play with and increase power consumption through the roof again 😀

Yeah, like I said in the last post, I think that solves the production and power consumption issues, but imagine the PCB design to support these multi-cores. Think of trying to equal a GTX with multiple GF7900s on a single card; I think the I/O between discrete cores becomes an issue and may introduce some interesting communication overheads.

We'll have to see; Intel went from 2 cores as separate dies on a single package, then to 2 cores in 1 die on the package, then to 4 cores as 2 dies on a single package, while AMD does 4 cores in a single die. So that's kind of the opposite of what we're talking about here. Intel still isn't going to a single die yet, but that may cause issues with shared cache as well.

Anyways, 2 dies together is not much of a problem; however, try getting 4 dies to achieve the same efficiency without a shared region like dual dies have, and I think you run into problems. Remember VPUs don't have the same core concerns as CPUs, since they are already multi-unit and massively parallel.

I can't wait to see the R700 and the nV equivalent, because anything above 2 cores IMO will require new multi-core techniques similar to IBM's POWER series, like the POWER5, which I mentioned a while back may be the future of such things: essentially multi-dies on a huge package.
 
Ultimately multi-GPUs will be just as efficient as single-core GPUs, but first-gen ones won't be, I think. In fact we really won't see the benefit of multi-core cards until the 2nd gen comes along and the devs optimize software for them. I hope that doesn't take too long. I wonder, though, if eventually all software will be optimized only for multi-core... for GPUs and CPUs. That is, I wonder if single-core CPUs and GPUs will be a thing of the past and even the cheapest CPUs will be at least dual core? I think I could see that one day in the not-too-distant future.
 
Those are good points, and they are also the reason that I disagree with grapeape when it comes to the 320MB 8800GTS.

No offense Grape, but you always seem to hype up the 320MB 8800GTS, and the truth is 320MB of VRAM is not going to be enough to handle the textures very well in some of the near-future releases.

No worries mang, different views of the unknown.
I don't see the future being one of huge textures so much as efficient code that does a better job with less. DX10 is all about being able to code less for more results, where the resolution of the base mesh can be minimal and then, when calculated, look better than a DX9 texture map 4-8 times the size. The question is, will this happen before or after the 'MEGA TEXTURES' that Carmack et al are looking for instead?

Also, it depends on how the G80 handles DX10 code: are procedural and dependent ops going to hog memory, and what about context switching? Some things could pose a problem, and it's unknown what will have the greatest impact; there are also the implications of physics, etc.

My position is not a stalwart one that the 320MB is better than the 640; it's that the difference in price is not as minimal as Cigaboo thinks. It's large enough ($90 for the same-quality eVGA, or $70 for an MSI now on NewEgg):

http://www.newegg.com/Product/Product.asp?Item=N82E16814130082
http://www.newegg.com/Product/Product.asp?Item=N82E16814127225
http://www.newegg.com/Product/Product.asp?Item=N82E16814130071

That's not an insignificant amount, and IMO I wouldn't buy a non-eVGA card in this time frame (especially since the eVGA is also the cheapest 320MB model), since you get the benefit of an upgrade should you desire the change once this new batch of DX10 games hits (FSX will be testing within the next 90 days according to their blog), and that makes it more compelling.

The other consideration is resolution. Under current DX9 games the GTS-320 keeps pace with the 640 until you either crank the AA @ 19x12 or go well above that without AA (even AA is more efficient now, and the hit at lower res is much smaller); the differences are minor until much heavier usage, and to me that is an area that will scale back with future games, where it's the core holding the game back, not memory. And no game uses that much larger textures right now; BF and D3 push just past a 256MB card's comfort zone, which is why you see the 256 vs 512 option differences, but they don't use anywhere near the 512. IIRC even D3 in Ultra mode was under 300MB, and BF2 was just over 256 (like 280) in the worst-case scenario.

I don't disagree that there are benefits to the 640MB; I just disagree with paying 30% more for only 5-10% more performance in the best-case scenario, when most people are going to be playing at 16x12 or 16x10 and won't see much difference.
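
To put some rough numbers behind the resolution/AA point, here's a back-of-the-envelope buffer calculation; it assumes 32-bit colour and depth and ignores compression, driver overhead and everything the game keeps in VRAM (textures, geometry), so treat it as a floor, not a measurement:

def buffer_mb(width, height, msaa_samples):
    pixels = width * height
    msaa_colour = pixels * 4 * msaa_samples   # multisampled 32-bit colour buffer
    msaa_depth = pixels * 4 * msaa_samples    # multisampled 32-bit depth/stencil
    resolved = pixels * 4 * 2                 # resolved front + back buffers
    return (msaa_colour + msaa_depth + resolved) / (1024 * 1024)

for (w, h), aa in [((1280, 1024), 4), ((1600, 1200), 4), ((1920, 1200), 8)]:
    print(f"{w}x{h} with {aa}xAA: ~{buffer_mb(w, h, aa):.0f} MB just for render buffers")
# ~50 MB, ~73 MB and ~158 MB respectively: crank the AA at 19x12 and the buffers
# alone take roughly half a 320MB card before a single texture loads, while at
# 16x12 or below there's plenty left over.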

Now don't get me wrong, any game right now played at 1280x1024 is just about perfect for the 320MB 8800GTS, but that is all about to change.

I don't doubt it'll change, but I have a completely different view of how it will change. We'll see; like I said, I don't know, but by the same token I also don't think the best idea is to spend more on a 640MB GTS when the better option, when it matters, might be to upgrade anyway. My strategy is for people to buy the GTS-320 and then upgrade when needed; if they need more power, they likely need a lot more power. I doubt that memory size will be the limiting factor, and the impact will be reduced when we aren't talking about 8xAA and 26x16 resolutions, but I could be wrong. It really depends on what features people decide to exploit in the games.
 
while I'd like a 100watt PC, if R600 is out anytime soon and more than about 10% more powerful, I'll buy one even if it uses 1,000 Watts.

I think that's the sentiment of most people in that market. Power consumption concerns IMO are for the mid-range and low-end set.

I think it's a bit like buying a car. When you see seductive features and performance, it's easy to jump to the next price point and get the dream machine, even if the cost and gas mileage are outside the specs you had previously set in stone. Performofeature Lust sets its teeth and you're hooked.
 
R600 with 1024 MB has lower priority

Comes in mid May
ATI decided to focus its May launch on the 512 MB GDDR3 cards rather than the 1024 MB GDDR4. This is the key message from the newly revamped R600 launch schedule.
We don't know what went wrong with the GDDR4 1024 MB card, but it caused an additional delay. We are sure that 1024 MB of GDDR4 memory at 2200 MHz is definitely not cheap stuff.
The R600 512 MB GDDR3 version is now the launch version, and its availability is expected at launch time, in early May. As ATI has a history with shipping schedules, nothing is fixed until the fat lady sings.
The R600 GDDR4 1024 MB cards are now scheduled for the middle of May, with samples available in early May. End users will be able to grab some by the middle of May.
We are talking about the 102-B007 card codenamed Dragonshead2, as this GDDR4 1024 MB card is the delayed one. Some partners will have it, but for DAAMIT it's not the primary goal to have this one first anymore.
http://www.fudzilla.com/index.php?option=com_content&task=view&id=118&Itemid=1

The R600 512MB GDDR3 version :?
I don't think it's better than the 8800GTX 768MB :lol:
 
There are a lot of reasons why I'm upgrading now; first and foremost is that I need something to take my mind off some personal problems I have. The second reason is that my 7600GS has pi$$ed me off. I really am an avid gamer, and playing @ 1024x768 on a 19" monitor really hurts my eyes.

To cut a long story short, I managed to get a BFG 8800 GTS 640 OC2 at a really great price and couldn't care less now about what comes out; this card really blew my socks off. And to make things even better, I don't know what all the fuss about the power and heat was about; it's not getting above 70C at full load. :) Good job NVIDIA!!
 
To cut a long story short, I managed to get a BFG 8800 GTS 640 OC2 at a really great price and couldn't care less now about what comes out; this card really blew my socks off. And to make things even better, I don't know what all the fuss about the power and heat was about; it's not getting above 70C at full load. :) Good job NVIDIA!!

Yep, I'd say that regardless of what comes out later, if you can afford to get a GF8800 and enjoy it, that's all that matters. For some time to come they will be very competent cards. The GF7600GS was unfortunately a too-crippled version of one of the best cards of the previous generation, the GF7600GT, which likely would've still been usable at your resolution. But the GF8800 should take you to late 2008 / early 2009 IMO, of course as long as you aren't looking for cutting edge or ultra res or AA so much as solid gameplay.
 
To cut a long story short, I managed to get a BFG 8800 GTS 640 OC2 at a really great price and couldn't care less now about what comes out; this card really blew my socks off. And to make things even better, I don't know what all the fuss about the power and heat was about; it's not getting above 70C at full load. :) Good job NVIDIA!!

Yep, I'd say that regardless of what comes out later, if you can afford to get a GF8800 and enjoy it, that's all that matters. For some time to come they will be very competent cards. The GF7600GS was unfortunately a too-crippled version of one of the best cards of the previous generation, the GF7600GT, which likely would've still been usable at your resolution. But the GF8800 should take you to late 2008 / early 2009 IMO, of course as long as you aren't looking for cutting edge or ultra res or AA so much as solid gameplay.

I don't know; with a bit of vCore, the 7600GS is a good overclocker and has no cut pipes.

The RAM will always hold it back; it's similar to the NV6600 vs NV6600GT.

But yes, by the time the R600 comes out, I'll have had about 7 months with my 8800GTX. By my upgrade cycle, that's about time for me to upgrade anyway, so it will be a choice between a 2nd 8800GTX, the R600, and the 8900GTX.

I have no regrets about buying my G80 :)
 
I'm afraid my 7600GS was crippled in more than one way. It was one of those silent types with DDR2 memory, so it really sucked. I bought it cheap 'cause I was under the impression, like many of you, that the R600 was coming out soon, but I was duped like the rest. Come May, I would've already enjoyed two months of high-level gaming, and nobody can be sure that the R600 won't be delayed again, and even if it comes out, whether all its problems have been fixed. Like somebody said before me, the only good reason that comes to mind that swings in AMD's favour is that they will bring out a chipset to go with it, but even that doesn't make much sense, because it implies that the R600 will be somewhat incompatible with other chipsets, doesn't it?
 
I'm afraid my 7600GS was crippled in more than one way. It was one of those silent types with DDR2 memory, so it really sucked. I bought it cheap 'cause I was under the impression, like many of you, that the R600 was coming out soon, but I was duped like the rest. Come May, I would've already enjoyed two months of high-level gaming, and nobody can be sure that the R600 won't be delayed again, and even if it comes out, whether all its problems have been fixed. Like somebody said before me, the only good reason that comes to mind that swings in AMD's favour is that they will bring out a chipset to go with it, but even that doesn't make much sense, because it implies that the R600 will be somewhat incompatible with other chipsets, doesn't it?

I would suggest you do what many people are doing, including myself... buy an eVGA GF 8800 GTX, then use eVGA's Step-Up program and upgrade to a GF 8900 GTX when it's released. You only have to pay for shipping and the difference between the 8800 and the 8900, and that's it! The only catch is that the 8900 GTX has to come out within 90 days of your purchase of the 8800 GTX.

The beauty of all this is that the 8900 GTX is likely to be faster than the R600, plus it will use much less wattage than the R600: around 175 watts compared to 300 watts if the rumors are true, or at the very least 240 watts. The worst-case scenario is that the R600 is faster than the 8900 GTX, but then so what; even then it will likely only be around 5 to 10% faster. You've got to look at the way ATi is looking at this. They delayed the R600 because they were disappointed in its performance against the G80. They said in some games the G80 actually outperformed the R600. I think that's very amusing, because the R600 is supposed to be this super card. Then add to that the fact that the first R600 will have GDDR3 memory with 512MB of RAM. The 8900 will have 768MB of GDDR4 RAM (correct me if I'm wrong... I'm not certain on this, but I thought that's what it will have). Then the 1024MB variant with GDDR4 won't come out until mid-May. What do ya wanna bet they delay it at least once more? I'm betting the end of July for the final date for the 1GB version instead of mid to late May. That will be ATi's final insult to everyone: delaying the card (trying to be a 3DR here), and then it's not quite as powerful as they would have you believe. Say what you will about Nvidia... they learned from their mistakes and they've been rock solid ever since the NV40!