G200b? Higher shader clock?

maximiza

Distinguished
Mar 12, 2007
If they do go to GDDR5 or GDDR6 memory it would be just a little slower than, or even on par with, an HD 4870 X2, but with a lot less heat. I think we will find out very soon. Nvidia really got their panties in a knot over the ATI 4000 series; they have been burning the midnight oil. Of course it still won't be DX10.1.
 
Well, GDDR6 isn't even ratified yet, so no way on that. As for GDDR5, as mentioned in the article, no, not based on the bandwidth number provided (it was unlikely anyway).

However, a roughly 200 MHz (15%) boost for the shaders will certainly help, but how much depends on the app. The truly shader-limited stuff still leaves a slight theoretical gulf between the two designs: the GTX 280 would still sit below a single HD 4870, although now just ahead of the HD 4850. Overall I think this will help in games like Crysis and COD, but compared to the HD 4870 X2 it still has a very long way to go to match it on paper. And because the memory, ROP and texture situation hasn't improved, the GTX 280 will keep outperforming where it already significantly outperforms and will still underperform equally in the inverse situation; the area it will impact most is where the two solutions are close, and there it will give the GTX a small bump.
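To put rough numbers on that shader gap, here is a quick back-of-the-envelope sketch in Python; the SP counts and clocks are the commonly quoted specs for these parts, the FLOPs-per-clock counting is the usual MAD+MUL vs. MAD convention, and the 15% bump is the figure from the article, so treat it as ballpark only:

    # Rough peak shader throughput (GFLOPS); ignores efficiency differences.
    # GT200 SPs counted as MAD+MUL (3 FLOPs/clock), RV770 as MAD (2 FLOPs/clock).
    def gflops(sps, clock_mhz, flops_per_clock):
        return sps * clock_mhz * flops_per_clock / 1000.0

    gtx280      = gflops(240, 1296, 3)         # ~933 GFLOPS stock
    gtx280_55nm = gflops(240, 1296 * 1.15, 3)  # ~1073 GFLOPS with a ~15% shader bump
    hd4870      = gflops(800, 750, 2)          # ~1200 GFLOPS
    hd4850      = gflops(800, 625, 2)          # ~1000 GFLOPS

    print(gtx280, gtx280_55nm, hd4870, hd4850)

Even with the bump it still trails a single HD 4870 on paper, but it does slip past the HD 4850, which is the gap described above.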
 
So, overall, you don't see this as a great bump in performance outside of heavily shader-bound games. If the core clock were increased by, say, 15%, would that be enough, combined with going back to the old 2.5x shader ratio, to significantly lift the G280+, or whatever it'll be called, over the X2 in most games? That 15% core bump would mean 15% over current shader speeds and then going to 2.5x on top of that.
 
I think that increasing the core speed as well (say 700/1750) would help, but it wouldn't be an enormous leap. It would probably equate to about a 10% increase in frame rates on average, with the lower-stress apps/games still being system bound, while the heavier apps that may have been held back might get a 30-40% boost; but that would be something like going from 12 fps to 16 fps, or 20 to 27 fps. It's definitely noteworthy, but it's not going to be the same as some of the dramatic bumps you see with SLI, etc. At the same time, a jump from 14 fps to 20 fps takes a game from slightly slow and choppy to relatively smooth.
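Just so the percentages line up with those example numbers (trivial arithmetic, Python only for convenience):

    # Percentage gains implied by the example frame-rate jumps above.
    for before, after in [(12, 16), (20, 27), (14, 20)]:
        print(f"{before} -> {after} fps: +{(after / before - 1) * 100:.0f}%")
    # 12 -> 16: +33%, 20 -> 27: +35%, 14 -> 20: +43%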

IMO the GTX would definitely get a better overall boost from a combined core and shader boost as we outlined. If they kept the shader ratio the same and just OC'ed the core to 700 MHz (thus giving us ~1500 MHz shaders, the same as the current 602 MHz x 2.5), then of course the overall boost would be bigger, but I don't think the faster texture and ROP portions would contribute much, since they are not a major limitation compared to the HD4K design.
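The clock arithmetic behind those two routes, assuming the stock GTX 280 numbers (602 MHz core, 1296 MHz shaders, so a ~2.15x ratio), works out like this in a quick sketch:

    # Same-ratio core overclock vs. returning to the old 2.5x shader ratio.
    stock_core, stock_shader = 602, 1296
    ratio = stock_shader / stock_core      # ~2.15

    same_ratio_at_700 = 700 * ratio        # ~1507 MHz shaders
    old_ratio_at_602  = stock_core * 2.5   # ~1505 MHz shaders

    print(round(same_ratio_at_700), round(old_ratio_at_602))

Both routes land around the same ~1500 MHz shader clock; the difference is whether the texture/ROP side comes along for the ride.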

If you had a choice between a 650 MHz core with a 2.15x shader clock, where the shaders run a lower ratio but the whole chip is faster, versus just changing the ratio, I too would pick the shaders as the thing to focus on, as they're the weak spot in the G200 design.

And of course it would be great to do both, but we're talking in a thermally limited world where you're forced to focus your increase on whatever gets the targeted 55nm benefits.

Also, I think it's likely that the GTX 280 will still see the major clock increase while the GTX 260 sees a focus on yield improvement.
 
Oh, definitely it would overclock better, because the GTX 280 IMO would be pushed closer to its limit looking for maximum publishable performance, whereas the GTX 260 would give good performance but you would set the 'stock' clocks at the level that maximizes your yield/sales ratio.

I'm sure the GTX 260 will OC well, but I would think their focus for the base GTX 260+ will be yields; then overclockers and companies like BFG, eVGA, etc. will boost the heck out of them from there.
 

terror112

Distinguished
Dec 25, 2006
I would expect Nvidia to put more emphasis on their more midrange GTX 260 cards than on the 280, as that's where ATI is hurting them most: the wallet. They may say they have the fastest single-GPU card, but what matters to most "educated" buyers is the price/perf ratio. So IMO what Nvidia should do is load up the clocks just enough to make it on par with the 4870, and keep it running cool with low power consumption to make it look more attractive than ATI's offerings. The question is whether they can achieve this with a simple jump to 55nm. I am doubtful, at best.
 
Yeah, all that GDDR3 vs. 512MB of GDDR5 may be a wash in costs, but the board layout and the 512-bit bus (I know it's lower in places, but that's just fused off anyway) are very costly, even at the same node. Plus the chip is still much larger; even if they get good yields, that die size eats up a lot of wafer room.
 
Yeah, right now ATI is getting 2.25 times more dies per wafer, and supposedly on top of that a higher percentage of those are viable than with the G200, which means it's likely costing 2.5+ times as much per die to produce. The shrink to 55nm will improve the dies-per-wafer number, but they need to improve the yield of viable chips too in order to catch up.
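For anyone wanting to sanity-check that 2.25x figure, here's a crude Python estimate; the die sizes (GT200 ~576 mm^2, RV770 ~256 mm^2) are the commonly cited numbers, and this ignores edge loss and defect density entirely, so it's only a rough ratio:

    import math

    # Naive dies-per-wafer from die area alone (no edge loss, no defects).
    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
        return wafer_area / die_area_mm2

    gt200 = dies_per_wafer(576)     # ~123 candidates per 300 mm wafer
    rv770 = dies_per_wafer(256)     # ~276 candidates per 300 mm wafer

    print(round(rv770 / gt200, 2))  # 2.25

Fold in a worse viable-chip percentage on the big die and the cost per good die plausibly stretches past 2.5x, as argued above.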