[citation][nom]rosen380[/nom]Likewise though, if it detects a failure, it could just move the data to an unused cell [or a cell marked for deletion]...[/citation]
The problem here is that the drive (and the OS attached to it) thinks it has 128GB of usable storage space. If, say, 8GB has gone bad, it really only has 120GB left. The drive would need some way of telling the OS, "hey, I have 8GB of bad sectors, don't try to write more than 120GB of data." No such mechanism exists, so when the OS writes data and only bad sectors are left, the drive can't do anything: that data is lost and the drive is corrupted. Remember, flash memory rewrites whole sectors at a time, so you might lose the middle sector of a file, and you'd better hope it's not your actual file system that's lost.
This is the case REGARDLESS of how much memory is provisioned on the drive. Say you have a 128GB drive and provision 64GB as spare area; that leaves 64GB for data. If 70GB of the sectors go bad, the drive has only 58GB of actual capacity left, but still reports 64GB to the OS. The OS writes 58.1GB of data and boom, the drive is essentially dead. The toy model below walks through the same accounting.
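Here's a quick sketch of that failure mode in Python. The numbers are the ones from my example above, and the class is obviously not how real firmware works; it's just to show how the bookkeeping goes wrong:

[code]
# Toy model of an SSD that advertises a fixed capacity to the OS
# while its real usable space silently shrinks as cells wear out.
class SimulatedSSD:
    def __init__(self, raw_gb=128, spare_gb=64):
        self.advertised_gb = raw_gb - spare_gb   # what the OS sees (64GB)
        self.usable_gb = raw_gb                  # raw flash that still works
        self.written_gb = 0.0

    def wear_out(self, dead_gb):
        # Dead cells eat into the spare area, then into live capacity,
        # but the advertised size never changes.
        self.usable_gb -= dead_gb

    def write(self, gb):
        if self.written_gb + gb > self.advertised_gb:
            raise IOError("drive full")                      # normal, safe case
        if self.written_gb + gb > self.usable_gb:
            raise IOError("no good cells left -- data lost") # silent death
        self.written_gb += gb

ssd = SimulatedSSD(raw_gb=128, spare_gb=64)
ssd.wear_out(70)          # 70GB of cells die: only 58GB really usable
try:
    ssd.write(58.1)       # OS sees 58.1GB < 64GB free, so it happily writes...
except IOError as e:
    print(e)              # ...and boom: "no good cells left -- data lost"
[/code]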
Intel's patent doesn't fix that problem. What it does is let the customer decide how important data preservation is to their particular application. Average customers might only need 10% over-provisioning: flash cells can sustain thousands of write/erase cycles, and what home user is going to write 128GB thousands of times to a single drive? Corporate scenarios (especially databases), however, might write to a drive millions of times a day. In that case, provisioning a drive at 20%, 30%, or 50% might be more cost effective than replacing the whole drive after only 10% of it has failed. Rough numbers below.
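Some back-of-the-envelope endurance math to put that in perspective. The 3,000 P/E cycles and the daily write volumes are my own assumed numbers (not anything from the patent), and this assumes perfect wear leveling and no write amplification:

[code]
# Rough lifetime estimate: total data the raw flash can absorb before
# every cell hits its program/erase limit, divided by daily write volume.
# All figures are illustrative assumptions, not from Intel's patent.
PE_CYCLES = 3000  # assumed endurance per cell

def years_of_life(raw_gb, daily_writes_gb):
    write_budget_gb = raw_gb * PE_CYCLES   # total writes the flash can take
    return write_budget_gb / daily_writes_gb / 365

home = years_of_life(128, 20)      # casual user, ~20GB written per day
db   = years_of_life(128, 10000)   # heavy database churn, ~10TB per day

print(f"home user: ~{home:.0f} years")   # ~53 years: never wears it out
print(f"database:  ~{db:.2f} years")     # ~0.11 years, i.e. about 5 weeks
[/code]

That's the whole point: the same drive lasts decades at home and weeks under a database, so letting the buyer dial the spare area up or down actually makes sense.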