SSD 102: The Ins And Outs Of Solid State Storage

Lewis57

Distinguished
Nov 27, 2009
198
0
18,680
A very good article. I love these articles that explain everything. I'm planning on buying two OCZ Vertex 2E 60GB drives for RAID-0 when I get enough money. Can't wait; it should be one hell of an upgrade from a single 5400 RPM WD Green drive.
 

JoeSchmuck

Distinguished
Feb 7, 2007
12
0
18,510
From what I understand, TRIM is supported under IDE mode in Win7 as well, so you do not need AHCI. I have a Samsung drive with the VBM19C1Q firmware and am running it in IDE mode.
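
For anyone who wants to check this on their own machine, Windows 7 reports its TRIM setting through fsutil regardless of controller mode. A minimal sketch in Python (assuming a Windows box with fsutil on the PATH; DisableDeleteNotify = 0 means Windows issues TRIM commands, though whether the drive actually receives them in IDE mode still depends on the driver):

[code]
# Ask Windows whether TRIM ("delete notifications") is enabled.
# "DisableDeleteNotify = 0" in the output means TRIM commands are being issued.
import subprocess

result = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True,
)
print(result.stdout.strip())
[/code]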
 
G

Guest

Guest
Earlier this year we deployed a 5-node failover cluster with an iSCSI backend. Each of the VM host servers uses a pair of solid state drives for booting and operating, with VMs running off iSCSI shared cluster volumes. The servers are unbelievably fast and stable - 6 months of 100% uptime on Windows 2008 R2. We only use magnetic HDDs now for transporting backups off-site.
 
G

Guest

Guest
One thing I'm very curious about: if we follow Tom's Hardware's advice to turn off disk defragmentation, the files on an SSD will become fragmented over time.

If the SSD loses data, can we recover the files when they are fragmented, especially on an SSD that has never been defragmented, as Tom's Hardware recommends?
 

randomizer

Champion
Moderator
Defragmentation of an SSD is not entirely unnecessary. It's important to distinguish between file fragmentation and free space fragmentation. The former is not an issue with SSDs because all parts of an SSD can be read at the same rate (the same is true for writing if the blocks are clean). But fragmentation of free space, whereby free space is largely spread across partially-filled blocks, can severely reduce the performance of an SSD. Any time a file smaller than 512kB is written to an SSD, it takes up only part of a block. Eventually the SSD will run out of clean blocks and will need to re-arrange the data, erasing partially-filled blocks and consolidating their contents to free up more blocks for further writing. Running a free space defragmentation on the drive aggressively consolidates the data on demand, so the problem doesn't catch you at a time you didn't plan for.

Most SSDs will perform this process themselves when idle for extended periods, but it happens at a slow rate. This is what most manufacturers refer to when they talk about Garbage Collection.
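
To picture what that consolidation actually does, here is a toy model in Python (purely illustrative; the block and page sizes are made up and real controllers are far more sophisticated):

[code]
# Toy model of SSD free-space consolidation / garbage collection.
# Sizes and structure are illustrative only, not a real flash translation layer.

PAGES_PER_BLOCK = 128  # e.g. 128 x 4KB pages per 512KB erase block

# A block is a list of pages; None means the page is clean (erased).
blocks = [[None] * PAGES_PER_BLOCK for _ in range(8)]

# Scatter a small write into every block so each one is only partially filled.
for i, blk in enumerate(blocks):
    blk[0] = f"file-fragment-{i}"

def consolidate(blocks):
    """Copy all valid pages into as few blocks as possible and treat the
    remaining blocks as erased. This is the free-space defrag / garbage
    collection step: it costs copy work now to regain clean blocks later."""
    valid = [p for blk in blocks for p in blk if p is not None]
    packed = []
    for i in range(0, len(valid), PAGES_PER_BLOCK):
        chunk = valid[i:i + PAGES_PER_BLOCK]
        packed.append(chunk + [None] * (PAGES_PER_BLOCK - len(chunk)))
    while len(packed) < len(blocks):
        packed.append([None] * PAGES_PER_BLOCK)  # freshly erased block
    return packed

def count_clean(bs):
    return sum(all(p is None for p in blk) for blk in bs)

print("clean blocks before:", count_clean(blocks))  # 0 - every block partly used
blocks = consolidate(blocks)
print("clean blocks after: ", count_clean(blocks))  # 7 - fragments packed into 1
[/code]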
 

Alvin Smith

Distinguished
Please send me the four fastest 256GB SSDs on the market, so that I might perform my own comparison ... I'll just sit by the door and wait for UPS to arrive.

Thanks, in advance !!

= Alvin =
 

gordonaus

Distinguished
Aug 20, 2010
38
0
18,530
I put an SSD in my new computer and it was good, but after I got the firmware update and changed to AHCI it was AMAZING (OCZ Vertex 2 60GB). I would say, though, that 60GB is not enough; I installed Windows, Photoshop, and a few other design programs and I only have 20GB left.
 

compton

Distinguished
Aug 30, 2010
197
0
18,680
Great article. I'm happy to see it in the mix. I'm sticking with my Intel X25-V and OCZ Agility 60 for a little while, but who knows what the future will bring.
 
G

Guest

Guest
It is much cheaper to buy two Hitachi 7200K disks, which are quite reliable, plus a compact RAID enclosure like the Jou Jye. You can get the same performance as an SSD, with a 1TB drive, for the price of one 256GB SSD. I should mention that I can reach up to 2.5Gbps maximum transfer rate, which is not far from the SATA II limit.
I am using the same configuration on a desktop. What I have noticed is that the performance is actually much better than I expected. That is probably because of cache memory. If you have drives with a big cache, then in a RAID stripe configuration those caches logically combine. With a good desktop drive you can easily have a 64MB cache. BTW, I looked at SSD drives' caches - wow, now I know where the performance comes from. :) Actually, not from the SSD technology as such.

I think SSDs are overrated right now. They have to be 4x cheaper; otherwise they make no sense. Next year they will be 2x cheaper, and after one more year they will be 2x cheaper again. So the technology actually still needs two years to be usable.

My recommendation: stick to SATA and RAID and save the money. If you need only a little storage and maximum convenience, then use an SSD.
 

Keeper

Distinguished
Sep 24, 2008
10
0
18,510
dvdeo,

You save a lot of money with SSDs, simply because their power consumption is really low. So, in the long term (say one year) you will probably save enough money to buy those Hitachi 7200K drives for free.

Energy efficiency is the key factor with SSDs.
 

JoeSchmuck

Distinguished
Feb 7, 2007
12
0
18,510
I understand that for data reliability, SSD cells that may have reached their maximum write count can still be read, making the data at least available. That's much better than a mechanical hard drive, since when those fail, it's usually not good.
 

bitterman0

Distinguished
Sep 28, 2009
10
0
18,510
[citation][nom]Keeper[/nom]... So, in long term (say 1y) you will be saving enough money to probably buy those Hitachi 7200K for free...[/citation]
The power consumption difference of a single drive is negligible for the purposes of generating any tangible savings on the electric bill. Let's assume the average power consumption difference between HDD and SSD is 5W, and the system that employs the drive is up 24/7/365. Also, let's assume that your electricity cost is 14 cents per kWh (that's what I'm paying on average, your mileage may vary). Thus 0.005kW * 24h * 365d * $0.14 = $6.132 - that's your annual savings (to be clear, that's six dollars and some change, not six thousand). Surely, if you employ hundreds upon hundreds of drives, the savings will add up, but in the end the up-front investment in an SSD's higher cost is not likely to pay off within the SSD's lifetime, let alone generate any savings.
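
Spelled out as code (same assumed numbers as above: a 5W difference, 24/7 uptime, $0.14 per kWh):

[code]
# Annual electricity savings from a drive that draws ~5W less,
# using the assumptions above (always on, $0.14 per kWh).
power_delta_kw = 0.005          # 5 W expressed in kW
hours_per_year = 24 * 365
price_per_kwh = 0.14            # USD

annual_savings = power_delta_kw * hours_per_year * price_per_kwh
print(f"${annual_savings:.2f} per drive per year")  # roughly $6.13
[/code]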

On a separate note, I do believe that longevity of drives is one of the major factors that affects the purchase decision. For enterprise use, if the drive is constantly hammered by writes (say, a database file is stored on it), the rate of wearing out re-writable flash is likely to be higher than the rate of failure of magnetic drives (certain 10K RPM IDE drives notwithstanding).

... if only SSD were more affordable! But, perhaps, the rumored adoption of 2Xnm technology for NAND by Intel by the end of this year will finally put enough pressure on the market to bring down prices to the realm of affordability. One can only hope.
 

doorspawn

Distinguished
Feb 10, 2010
173
0
18,680
Can someone shed some light on a query I'm sure many of us have here:

Why is the block size so large?
What makes a 4KB or even 256B block a bad idea?
Is it that there's a large per-block component that can't be shrunk?
Is it that blocks need to be insulated from each other so that high-voltage instructions (perhaps clear) don't leak?
Those are purely guesses.
 
G

Guest

Guest
Good overview article; one error on the last graph:
5.5 watts to 1.7 watts is not "1/3 Reduced" as the label says - it is reduced by about 2/3, or reduced to about 1/3.
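
The numbers check out (a quick calculation with the two values from the graph):

[code]
# Going from 5.5 W down to 1.7 W: how much is cut, and how much remains?
before, after = 5.5, 1.7
cut = (before - after) / before    # ~0.69 -> reduced BY about 2/3
left = after / before              # ~0.31 -> reduced TO about 1/3
print(f"reduced by {cut:.0%}, down to {left:.0%} of the original draw")
[/code]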
 
G

Guest

Guest
What is the point of a RAM-drive for a swap file? ... Just use all of the RAM as RAM, and turn swap off.
 
G

Guest

Guest
No mention of the SSD's life expectancy? Max writes per cell and such.
 

waxdart

Distinguished
May 11, 2007
199
0
18,690
[citation][nom]doorspawn[/nom]Can someone shed some light on a query I'm sure many of us have here: Why is the block size so large? What makes a 4KB or even 256B block a bad idea? Is it there's a large per-block component that can't be shrunk? Is it that blocks need to be insulated from each other so that high-voltage instructions (perhaps clear) don't leak? Those are purely guesses.[/citation]

Have a read of this.
http://www.tomshardware.co.uk/4k-sector_size-advanced_format,review-32012.html
 

doorspawn

Distinguished
Feb 10, 2010
173
0
18,680
[citation][nom]waxdart[/nom]Have a read of this.http://www.tomshardware.co.uk/4k-s [...] 32012.html[/citation]

Thanks.
Hmm - perhaps I should be specifically asking about erase block sizes. Block sizes are ~4k yet erase-blocks seem to be 128k to 512k. Only the former is explained by ECC optimization, as erase-blocks don't have additional ECC AFAIK.
 