This is taken from a post on the thread I referenced, by an Australian poster named canthearu. I think he has hit the nail on the head. (By the way, OCZ acknowledged all of this "controversy" two weeks ago, so Tom's is being a bit disingenuous in claiming to have just found this out. Does it really take two weeks to get an article into print?)
**********************************************
I think I now understand why the Vertex 4 128 GB on firmware 1.4 performs the way it does.
The way OCZ have programmed the Vertex 4 128 GB firmware is actually quite ingenious.
For MLC NAND, as discussed in this paper (http://cseweb.ucsd.edu/users/swanson...11PowerCut.pdf), each page is actually programmed twice to store 2 bits per cell. The first program pass on an erased page is very fast, because the cell goes from a known state to a roughly central voltage (or stays unprogrammed). The second pass is much slower, because the existing bit must be preserved while the cell's charge is adjusted into one of four finer levels.
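To make the two-pass idea concrete, here is a toy sketch (my own illustration, not from the paper or OCZ; the bit encoding and function names are assumptions). The fast first pass sets only the lower bit of an erased cell; the slow second pass must keep that bit intact while setting the upper bit:

```python
# Toy model of two-pass MLC programming (illustrative only; the
# "11"/"10"/"00" level encoding is an assumed Gray-style coding).

ERASED = "11"  # an erased MLC cell reads as both bits set

def first_pass(lower_bit: str) -> str:
    """Fast pass: erased cell -> coarse state holding just the lower bit."""
    return "1" + lower_bit  # upper bit still unprogrammed

def second_pass(state: str, upper_bit: str) -> str:
    """Slow pass: preserve the existing lower bit while setting the upper bit."""
    return upper_bit + state[1]

cell = first_pass("0")         # fast write  -> "10"
cell = second_pass(cell, "0")  # slow write  -> "00"
print(cell)                    # prints "00"
```

The key point the sketch captures is that the second pass depends on data already in the cell, which is why it cannot be done as quickly as the first.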
So while more than half the drive is free, the 128 GB Vertex 4 programs only the first layer of each page. This is "performance mode": the drive behaves more like SLC NAND than a normal MLC drive would, resulting in remarkably high write speeds for a 128 GB drive (350 MB/s or more). However, once the drive reaches 50% full, it can no longer perform first-layer-only writes and must write the much slower second layer, which seems to take up to four times as long. That is the switch to "storage mode".
In storage mode, while the drive is more than half full, it maintains 2 bits per cell, but during garbage collection it attempts to free up NAND blocks so that future writes can again land only on the first layer, which keeps burst write performance quite good.
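The fill-level behavior described above can be sketched as a rough simulation (my own guess at the mechanism; the timing constants are made up for illustration, not measured from the drive):

```python
# Rough model of the described first-layer-first strategy: fast
# first-pass writes until 50% of capacity is consumed, slow
# second-pass writes after that. All numbers are assumptions.

FAST_US = 250    # assumed first-layer (lower page) program time, in µs
SLOW_US = 1000   # assumed second-layer (upper page) program time, ~4x slower

def write_time_us(pages_written: int, total_pages: int) -> int:
    """Below 50% full, every write hits a fresh first layer; above
    50%, every write must use the slow second layer."""
    if pages_written < total_pages // 2:
        return FAST_US
    return SLOW_US

total = 1000  # pages of capacity in this toy drive
times = [write_time_us(i, total) for i in range(total)]
first_half = sum(times[:500]) / 500    # average cost, performance mode
second_half = sum(times[500:]) / 500   # average cost, storage mode
print(first_half, second_half)         # prints 250.0 1000.0
```

Garbage collection in storage mode would, in this picture, repack data to recreate blocks where the cheap first-pass writes are possible again.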
Other drives, along with the Vertex 4 128 GB under firmware 1.3, must distribute first- and second-layer writes evenly, resulting in more consistent, but considerably slower, performance across the full surface of the drive.
The reason the Vertex 4 256 GB and 512 GB models don't really show this behavior is that it isn't needed for high performance: the controller can interleave enough operations that, at least on the 512 GB drive, it can distribute writes to the first and second layers evenly without a performance impact.
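A back-of-envelope calculation shows why interleaving across more NAND dies can hide the slow second pass (die counts, page size, and program time here are all assumed for illustration, not the drives' actual figures):

```python
# If programs are interleaved across dies, aggregate write throughput
# scales with die count until the host interface saturates. Larger
# capacities generally mean more dies to interleave across.

PAGE_KB = 8      # assumed NAND page size
SLOW_MS = 1.0    # assumed second-pass program time

def throughput_mb_s(dies: int) -> float:
    """Sustained throughput with one slow program in flight per die."""
    return dies * PAGE_KB / 1024 / (SLOW_MS / 1000)

for dies in (4, 8, 16):  # hypothetical small/medium/large configurations
    print(dies, throughput_mb_s(dies))  # 31.25, 62.5, 125.0 MB/s
```

With enough dies, even a stream of slow second-layer programs keeps the interface busy, so the bigger drives never need the first-layer-only trick.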
Edit: however, this is just a guess from someone who doesn't have any inside knowledge, so I could be completely wrong.
************************************************
I don't know about you guys, but to me this looked mighty ingenious on OCZ's part.
Todd Sauve