It's a shame... I'll give nVidia credit for making what looks like one of the most wicked cooler designs I've EVER seen on a video card; possibly the best. (I've been a bit partial to the 7800 GTX 512, I'll admit, as impractical as it was.)
However, it looks like, as predicted, it's gonna be a bit of a bust: with the 480 you're paying almost the price of a 5970, and living with its TDP, for... barely more than a 5870. Given that 40nm JUST became available for those outside of Intel, we're looking at the 480 being "that's all, folks" from the green team until probably early 2011 (possibly as early as this November).
All told, it seems like an incredibly bad miss, and it's made worse by the fact that AMD won't have to cut prices at all; even at their already above-MSRP prices on Newegg, they're still besting Nvidia on price/performance by a good margin, AND they're making a killing, while Nvidia will undoubtedly have barely any profit margin with these numbers.
So everyone say hello to half a year of market stagnation!
[citation][nom]liquidsnake718[/nom]I wonder why it was never followed up for the 9xxx series.......[/citation]
That's because in order to have a wider interface, you need more pins, and to connect more pins to a chip, you need more space along the chip's edges to attach them to. In other words, there's a minimum GPU die size for each memory width: about 200mm² is needed for a 256-bit memory interface, and EVERY GPU with a 384-bit interface or wider has been at least 400mm². Hence, the original G80 was a massive 484mm² and could easily fit a 384-bit interface, but with the G92 die shrink, the die came down to only 324mm², too small to fit anything wider than a 256-bit interface.
This is the same reason RV670, RV770, and RV870 all have 256-bit interfaces; if even tiny chips could have wide ones, heck, every card would be 512-bit, limited only by how many RAM chips they could afford to cram onto the board. But in truth, memory interface width is only one factor in memory bandwidth; you can compensate for a narrower interface with higher-clocked memory, and likewise compensate for slower memory with a wider interface. That's why nVidia never used GDDR4 and was late to the GDDR5 party: they had wider interfaces to fall back on.
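To put some rough numbers on that trade-off (ballpark figures from memory, so treat them as illustrative rather than gospel): bandwidth works out to (bus width in bits / 8) x effective data rate.

256-bit x 3.6 GT/s GDDR5 / 8 = ~115 GB/s (roughly an HD 4870)
512-bit x 2.2 GT/s GDDR3 / 8 = ~141 GB/s (roughly a GTX 280)

So the big 512-bit GDDR3 chip still wins on raw bandwidth, but only by brute-forcing it with a huge, expensive bus and the huge die needed to hang it off of; the small 256-bit chip gets most of the way there just by buying faster memory.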