Unlocking the defective or deactivated cores on the Fermi cards could cause problems:
1 - You'd need to hack the card or flash a hacked BIOS.
2 - If the disabled cores are defective, the card will fail or be unstable until you undo the unlock.
3 - Even if they do work, the card will generate more heat and may have more power issues.
The issues with the GF 480 have been known for a long time: too hot, not enough performance. Disabling 32 cores (one of the chip's 16 shader blocks) is what allows Nvidia to have actual products to sell.
Someone asked why Nvidia is having problems when ATI's chips are made in the same factories.
TSMC manufactures the chips based on the designs that ATI, Nvidia, and other customers hand over to them. Remember the failure of Nvidia's G92/G94 chips in notebooks and graphics cards (discrete video cards have better cooling and thermal control), which has had lawsuits flying for the past two years?
GeForce 480 / 470 ("GTX" is pointless. Are there any GTX 220s or GT 285s?) differs from ATI in a few ways:
1 - Experience: ATI's 4770 showed ATI how to better handle 40nm design and manufacturing.
2 - The GF 480 dies are HUGE. Nvidia didn't learn from the GT200 chips used in the GeForce 260~295s... but then again, GPU design takes about 3 years! The ATI 4800s kicked Nvidia in the nuts... they weren't quite as fast in some games, but they were a WHOLE lot cheaper.
If GPU A is twice the size of GPU B in each dimension, then 4 GPU Bs fit in the area of a single GPU A. Both designs are made on the same-size manufacturing process, so a wafer costs the same to make whether it's filled with design A or design B; the difference is that design B gets 4 chips out of the wafer for every 1 chip from design A.
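A quick, illustrative bit of arithmetic (the edge length here is made up; only the ratio matters):

```python
# Rough sketch of the area argument: doubling a die's edge length
# quadruples its area, so one "A" die takes the silicon of four "B" dies.
side_b = 18.0                  # mm, hypothetical edge length of GPU B
side_a = 2 * side_b            # GPU A is twice as long on each side

area_b = side_b ** 2           # 324 mm^2
area_a = side_a ** 2           # 1296 mm^2

print(area_a / area_b)         # 4.0 -> one A die uses as much wafer area as four B dies
```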
3 - Size again: wafers are typically 300mm across (going by memory). Any chips that land on the edge are useless, and the smaller the squares, the more usable chips you can fit near the edges.
Simple example: if a typical wafer has 15 defects, the bigger chips such as the one in the GF 480 are more likely to be hit and get rendered useless or put in a lower bin. Let's say chip A (GF 480) is 529mm^2 versus chip B, the 5870, at 334mm^2... almost double the size!
The Fermi/GF100 can fit up to 94 dies per wafer... but 100% yield isn't possible (edges and defects). Nvidia may be getting 23~25 workable GPUs per wafer (this doesn't mean 23~25 GF 480s, just working chips, so some of them could be binned for a cut-down part like the 470). The defects don't hurt AMD as much, so it gets roughly 4x as many usable dies per wafer, perhaps close to 100 working 5870 GPUs (and around 3x as many of the smaller 5770/5750 GPUs).
Keep in mind that each wafer costs about $5000 to make.
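If you want to play with these numbers yourself, here's a minimal sketch using the usual back-of-the-envelope formulas: a standard gross-dies-per-wafer approximation plus a simple Poisson yield model. The defect density is a made-up input (real 40nm figures weren't public), so the output is illustrative rather than an exact reproduction of the counts above.

```python
import math

WAFER_DIAMETER_MM = 300.0          # standard 300 mm wafer
WAFER_COST_USD = 5000.0            # rough cost per wafer, as above

def gross_dies_per_wafer(die_area_mm2: float) -> float:
    """Classic approximation: usable wafer area divided by die area, minus an edge-loss term."""
    radius = WAFER_DIAMETER_MM / 2.0
    wafer_area = math.pi * radius ** 2
    edge_loss = math.pi * WAFER_DIAMETER_MM / math.sqrt(2.0 * die_area_mm2)
    return wafer_area / die_area_mm2 - edge_loss

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Simple Poisson model: probability that a die of this size catches zero defects."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

# Hypothetical defect density, chosen only for illustration.
defect_density = 0.003             # defects per mm^2

for name, area in (("GF100 (GeForce 480)", 529.0), ("Cypress (5870)", 334.0)):
    gross = gross_dies_per_wafer(area)
    good = gross * poisson_yield(area, defect_density)
    print(f"{name}: ~{gross:.0f} die candidates, ~{good:.0f} good dies, "
          f"~${WAFER_COST_USD / good:.0f} per good die")
```

The exact counts depend entirely on the defect density you assume, but the pattern is the same either way: the bigger die loses a much larger fraction of its candidates to defects, so its cost per good chip climbs far faster than the raw size difference alone suggests.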
Here is a good drawing of what GF100 dies would look like on a wafer:
http://www.brightsideofnews.com/print/2010/1/21/nvidia-gf100-fermi-silicon-cost-analysis.aspx
4 - A bigger die (chip) = more heat... and more power required. A single GF 480 will use about the same power as a dual-GPU ATI 5970... ouch. With all the heat and power issues, overclocking this baby is going to be minimal... and problematic.
5 - Scaling down the GF100/Fermi design isn't looking so good either, so it will be a long time before Nvidia has $50 DX11 parts. Yeah, I know - those are weak cards anyway.
The G92/G94/GT215 will be in production for a long time. Yes, Nvidia did an excellent job with the G92 (8800 GT ~ 9800 GTX ~ GTS 250)... the card that keeps on going.