[citation][nom]zingam[/nom]I don't get it! If you have 4870 then you should probably have powerful CPU too. Why do you need an acceleration then?[/citation]So you're saying that a 40nm wafer of 1.5-billion-transistor chips will yield the same number of chips (working or otherwise) as a 40nm wafer of 3-billion-transistor chips, on the same wafer size at the same manufacturer? The answer is no. Yes, they will be binning the GT300 - all chips are binned, my friend. But binning only mitigates a poor yield %; it doesn't change the actual cost to make the damned things. It lets them sell partially-broken or underperforming parts, and that only scales down so far.
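To put rough numbers on the die-size argument: fewer big dies fit on a wafer, AND each big die is more likely to catch a defect. Here's a back-of-the-envelope sketch using the standard dies-per-wafer approximation and a simple Poisson yield model - the die areas and defect density are made-up illustrative values, not actual GT300/RV770 figures:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    # Rough candidate-die count: usable wafer area / die area,
    # minus a standard correction for partial dies lost at the edge.
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def yield_fraction(die_area_mm2, defects_per_mm2):
    # Poisson yield model: probability a die lands with zero defects.
    return math.exp(-die_area_mm2 * defects_per_mm2)

# Hypothetical numbers: 300 mm wafer, 0.004 defects/mm^2,
# one die at half the area of the other (illustrative only).
for area in (250.0, 500.0):
    good = dies_per_wafer(300, area) * yield_fraction(area, 0.004)
    print(f"{area:.0f} mm^2 -> ~{good:.0f} good dies per wafer")
```

With these toy numbers the 250 mm^2 die gets ~88 good dies per wafer and the 500 mm^2 die only ~15 - doubling die area costs you far more than half your good dies, which is why the big chip is so much more expensive per working part even before binning enters the picture.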
Look no further than the GT200 for an example. The chips can scale UP as yields improve, but scaling DOWN is impossible unless you like losing money. Did they ever sell a GT200 below the 260? No. They are too expensive to sell in the cheap sector, so they used a rebadged 9800 GTX+ (G92b) and named it GTS 250 rather than selling GT200 chips that were even more gimped than the GTX 260. G92 is smaller, and therefore a hell of a lot cheaper to produce on the same process. Everything GTS 250 and below is G92 currently.
Large chips also make heat and power consumption hard to keep under control. Again, take the GT200 as an example, then look at the laptop "GTX 260M" and "GTX 280M". Guess what? They're not GT200 either! That's right, the mobile "GTX260/280" chips are G92, again.
Conclusion: the huge GT200 die simply wasn't suitable for the mobile and budget markets at 55nm, and at 40nm the huge GT300 will face the same scaling problems in those markets. So for Nvidia's sake, I hope they release a GT2xx derivative (an upgraded GT200, maybe with DX 10.1 and CS 4.1) to cover these important segments.