Flash-Based Hard Drives Cometh

mr_fnord

Distinguished
Dec 20, 2005
Due to the nature of SSDs, all of their problems should be easy to overcome, thanks to two factors:

1. Moore's Law - SSDs currently run $10-15/GB, but Moore's Law will solve this one. Raw flash costs about $8/GB, and those chips probably aren't of the quality needed for SSDs, so as flash continues to drop at ~30%/year, so will the cost of SSDs.

2. Parallelism - SSDs should exhibit nearly linear increases in performance with increases in chip speed and in the number of flash chips. The 2.5" form factor and pricing constraints require that a small number of the densest chips be used, but if the device were scaled to 64GB with twice the chips in a 3.5" form factor, or 4-8x the flash chips in a PCIe or 5.25" form factor, read and write performance would scale as well.

An aside: in laptops the HD form factor makes sense, but in desktops I wonder if a PCIe card wouldn't work better. PCIe x1 is capable of 2.5Gb/sec, so x4 PCIe would be much less of a bottleneck than SATA. The 3.5" form factor allows for an inch of height, totally unnecessary for a 1/8" thick PCB, but it limits the surface area of the device. Even a low-profile PCIe card would have a larger surface area for a greater chip count. A controller that presented itself to the OS as a storage device or RAID controller could be used, and conventional RAID designs would be unnecessary since each chip on the card operates independently of the others, so a 128-chip device should be as fast as 8x 16-chip devices. Assuming the controllers can keep up.
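To put rough numbers on that scaling argument, here's a quick back-of-the-envelope Python sketch. All of the per-chip and bus figures below are my own guesses for illustration, not measured values:

```python
# Back-of-the-envelope: aggregate read speed grows linearly with chip
# count until the host interface becomes the bottleneck.
# All figures are assumptions for illustration, not measurements.

CHIP_READ_MB_S = 25  # assumed sustained read per flash chip

INTERFACE_MB_S = {
    "SATA 1.5Gb/s": 150,   # ~150 MB/s effective
    "SATA 3Gb/s":   300,
    "PCIe x1":      250,   # PCIe 1.x: 2.5Gb/s raw per lane
    "PCIe x4":      1000,
}

def device_read_mb_s(chips, interface):
    """Ideal aggregate read rate: linear in chip count, capped by the bus."""
    return min(chips * CHIP_READ_MB_S, INTERFACE_MB_S[interface])

for chips in (8, 16, 32, 128):
    for bus in ("SATA 3Gb/s", "PCIe x4"):
        print(f"{chips:3d} chips on {bus:11s}: {device_read_mb_s(chips, bus):5.0f} MB/s")
```

Even with made-up numbers the point stands: past a dozen or so chips, SATA is the wall, and a wider interface is the only way to keep scaling.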
 

Turas

Distinguished
Mar 26, 2007
I am just so tired of announcements regarding these. I want to buy a couple of 64GB ones and put them in a RAID 0 array. There are many manufacturers already producing them, but they seem to only be in the OEM channel. They really need to get more out into the retail channel.


Also, the statement that this is the fastest drive could be false. Here is a link to a site that has some benchmarks showing one sustaining 100MB/s read and 80MB/s write. These drives are at a huge premium, though. I would love to see THG get a hold of some to test, as I would trust the benchmarks coming from you as opposed to some random site.

http://www.dvnation.com/nand-flash-ssd.html
 

uk_dave

Distinguished
Jan 4, 2007
...trying to work out what the conclusion is for a desktop gamer... I suppose read performance is by far the most important factor for in-game performance, since I'm guessing there is very little writing to the HDD during play? (Correct me if I'm wrong, I'm just guessing.)

If that's the case, I suppose these flash drives are going to have a significant edge even over the faster Raptors?

Thanks
 

gwolfman

Distinguished
Jan 31, 2007
What about "real world" benchmarks!?! How could you forget that? We all know how synthetic benchmarks can be quite different in "real world" benchmarks. I was excited when I starting reading the article when you mentioned a RAID 0 setup and that the read performance was better than the Samsung. I am disappointed and feel cheated that you didn't bother with the other tests / benchmarks.
 

No1sFanboy

Distinguished
Mar 9, 2006
Thank you, I've been waiting for a review of SSDs in RAID 0. I'm still holding out for two 64GB drives at under a thousand CDN$, and then I'm jumping; maybe not too much longer.

This is the one upgrade that will make already fast systems feel faster day to day.
 

gwolfman

Distinguished
Jan 31, 2007

Lol, nice avatar. I want to get me one of those Ariel Atoms. I saw that episode and ended up downloading that clip cuz that piece of art is as fast as @#$*.
 

d_kuhn

Distinguished
Mar 26, 2002
I think the caution against using SSDs in a server should be amended somewhat. It seems to me that one of the easiest applications to justify is using them as the OS drive on a server. I use 15K SCSI drives as OS and swapfile drives on several systems (with large SATA drive arrays for data). I would have been interested to see the performance of SSDs relative to high-performance drives (36GB and 72GB 10K SATA and 15K SCSI). 15K drives are pretty expensive (compared to 7200s), so SSDs are competing on much more favorable turf in that market.

I've never looked into it, but I'd suspect it's possible to split the write-intensive directories (user data, swapfile, database directories, etc.) from the basically read-only directories (OS, web server, application installation directories, etc.). It seems to me that would allow you to have your cake (wicked-fast read access times) and eat it too (write-heavy directories on normal HDDs).
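As a rough illustration of that split, here's a toy Python policy that decides placement from observed I/O counters. The directories, counts, and the 10% threshold are all just assumptions for the example:

```python
# Toy placement policy: read-mostly trees go on the SSD, write-heavy
# ones stay on normal HDDs. Sample counters are invented for illustration.

io_counters = {
    # path: (reads, writes) observed over some monitoring window
    "/usr":        (900_000,   2_000),  # OS + application binaries
    "/var/www":    (500_000,   1_000),  # web server content
    "/home":       (200_000, 150_000),  # user data
    "/swap":       ( 50_000, 400_000),  # swapfile
    "/var/lib/db": (300_000, 350_000),  # database files
}

WRITE_FRACTION_LIMIT = 0.10  # assumed cutoff for "write heavy"

def placement(reads, writes):
    write_fraction = writes / (reads + writes)
    return "SSD" if write_fraction <= WRITE_FRACTION_LIMIT else "HDD"

for path, (r, w) in io_counters.items():
    print(f"{path:12s} -> {placement(r, w)}")
```

With those sample numbers, the OS and web content land on the SSD while the swapfile, user data, and database stay on spinning disks, which is exactly the cake-and-eat-it split described above.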

 

gwolfman

Distinguished
Jan 31, 2007

You get my vote. Well put. I'd very much like to see data comparing those different types of drives as well. How does it compare to the Savvio 15K 2.5" drives? That could be a close comparison, as both can be/are used in web servers.
 

DXRick

Distinguished
Jun 9, 2006
Maybe they need to add cache to the flash drive to boost write performance?

Or maybe we will see dramatic speed improvements when a company experienced in making hard drives makes a flash one? They could add 8-16 MB of cache, and they know how to write the algorithms to manage it.
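For what it's worth, that kind of write cache might look like this minimal Python sketch; the 8MB cache size and 128KB flash-block granularity are assumptions on my part:

```python
# Minimal write-back cache sketch: absorb small scattered writes in RAM
# and flush whole flash blocks at once, since flash writes large blocks
# far more efficiently than single sectors. Sizes are assumptions.

BLOCK_SIZE = 128 * 1024          # assumed flash block size (bytes)
CACHE_BYTES = 8 * 1024 * 1024    # 8MB of cache, as suggested above

class WriteCache:
    def __init__(self, flash_write_fn):
        self.flash_write = flash_write_fn  # writes one whole block
        self.dirty = {}                    # block number -> bytearray

    def write(self, offset, data):
        """Buffer a small write; flush if the cache fills up."""
        block, start = divmod(offset, BLOCK_SIZE)
        buf = self.dirty.setdefault(block, bytearray(BLOCK_SIZE))
        buf[start:start + len(data)] = data
        if len(self.dirty) * BLOCK_SIZE > CACHE_BYTES:
            self.flush()

    def flush(self):
        """Write each dirty block to flash exactly once."""
        for block, buf in sorted(self.dirty.items()):
            self.flash_write(block, bytes(buf))
        self.dirty.clear()

# Usage: many small writes to the same block become one flash write.
writes_done = []
cache = WriteCache(lambda blk, data: writes_done.append(blk))
for i in range(100):
    cache.write(i * 512, b"x" * 512)  # 100 sector-sized writes...
cache.flush()
print(len(writes_done), "flash block write(s)")  # ...one block write
```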

SanDisk is trying to create an HD that just works like a flash card used in a digital camera. :non:
 

rockyjohn

Distinguished
What would be the best way to use SSDs in a system with consideration for larger data files - some of which you would like to have in a RAID 1 setup for data security? Would you have two SSDs in RAID 0 for the operating system and applications and two SSDs in RAID 1 for data - or all in RAID 5? If you wanted to increase data storage size and reduce costs, what would be the performance hit from having the two data drives on HDDs in RAID 1?

I agree it would be nice to have some application benchmarks to see real performance. I admit I am not very knowledgeable about this technology and have difficulty translating the poorer write performance into its impact on applications.

Given that this apparently has the greatest potential for increasing performance, hopefully THG will continue with more reviews covering other areas. And hopefully just having the larger HD manufacturers move into this segment with larger production runs will reduce costs more quickly.

:bounce: :bounce: :bounce:
 

choirbass

Distinguished
Dec 14, 2005
Yeah, the SSDs should easily offer superior application performance for the majority of applications, simply due to their virtually nonexistent access times... high transfer rates don't really matter much for typical uses, to be honest.


However, if your access times were really, really slow, it wouldn't matter how high your STRs were - they could be hundreds of terabytes per second, but you'd seemingly never even get to the file in order to transfer it. That's an extreme example, but it emphasizes that faster access times are oftentimes more significant than higher STRs, for many things, though not all.
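To put quick numbers on that: the effective speed for a small random read is access time plus transfer time. Here's the arithmetic in Python, with rough, assumed drive figures rather than benchmark numbers:

```python
# Effective time per random read = access time + transfer time.
# For small files, access time dominates no matter how huge the STR is.
# All drive figures below are rough assumptions for illustration.

drives = {
    # name:           (access time in ms, STR in MB/s)
    "7200rpm HDD":    (13.0,  70),
    "Raptor 10k":     ( 8.0,  80),
    "15k SCSI":       ( 5.5,  90),
    "SSD":            ( 0.1,  60),
    "absurd-STR HDD": (13.0,  100_000_000),  # the 'extreme example'
}

READ_KB = 4  # a typical small random read

for name, (access_ms, str_mb_s) in drives.items():
    transfer_ms = (READ_KB / 1024) / str_mb_s * 1000
    total_ms = access_ms + transfer_ms
    effective_mb_s = (READ_KB / 1024) / (total_ms / 1000)
    print(f"{name:14s} {total_ms:8.3f} ms/read -> {effective_mb_s:7.2f} MB/s effective")
```

Note the last line: even with an absurd STR, the drive with a 13ms access time still only manages ~0.3 MB/s effective on 4KB random reads, while the slow-STR SSD gets ~24 MB/s.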
 

gwolfman

Distinguished
Jan 31, 2007

I agree with what you said, especially your extreme example of access times and STRs (sustained transfer rates). On www.anandtech.com they did a review of Super Talent's 16GB SSD ( http://www.anandtech.com/storage/showdoc.aspx?i=2982 ). It topped out at a 20.5 MB/s average transfer rate, but when you look at the "real world" benchmarks/comparisons, it's amazing how it still beats some of the other HDDs, even the Raptor! The SSD in this article has over 3 times the read performance, or even 6 times in RAID 0; it'd be interesting to see how that compares!
 

choirbass

Distinguished
Dec 14, 2005
Yep! That's my point exactly... and it's why Raptors, even the oldest Raptors from about 4 years ago, are still at least as fast as even the fastest current 7200s when it comes to most everyday tasks and application uses (as hard as that might be for some people to accept). The primary difference is their access times, which are oftentimes nearly twice as fast as comparable 7200s, and it's why 15K SCSI HDDs are faster still for most uses even when their STRs aren't, since 15K SCSI drives offer access times nearly twice as fast as Raptors do.

I'm going to jokingly say that SSDs are about on par in performance with 50K or 100K RPM SCSI HDDs, lol, just for some perspective.
 

joex444

Distinguished
@rockyjohn -

RAID0 with SSDs is the only way that makes sense.

RAID 1, 5, and 10 are all designed to preserve data in the event of a drive failure. With SSDs, there should be no crashing. The MTBF should be several orders of magnitude greater, meaning using RAID 5 would be protecting against the statistical equivalent of getting hit by lightning several times.

In the server space, if SSDs were used, RAID 5 might make some sense, and even a 15-drive RAID 5 array would be fine. Right now, it's good practice to limit RAID 5 arrays to 5-6 drives.
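The lightning comparison can be roughed out numerically. In this Python sketch the annual failure rates are pure assumptions, just to show the shape of the argument:

```python
# Rough reliability comparison: probability of losing the array within a
# year. Annual failure rates (AFR) below are assumptions for illustration.

HDD_AFR = 0.05    # assumed 5% annual failure rate per HDD
SSD_AFR = 0.005   # assumed 10x better for SSDs, per the MTBF argument

def raid0_loss(afr, n):
    """RAID 0 loses data if ANY drive fails."""
    return 1 - (1 - afr) ** n

def raid5_loss(afr, n):
    """RAID 5 (ignoring rebuild windows) loses data only if 2+ drives fail."""
    none = (1 - afr) ** n
    one = n * afr * (1 - afr) ** (n - 1)
    return 1 - none - one

print(f"2x HDD RAID 0 : {raid0_loss(HDD_AFR, 2):.3%}")
print(f"2x SSD RAID 0 : {raid0_loss(SSD_AFR, 2):.3%}")
print(f"6x HDD RAID 5 : {raid5_loss(HDD_AFR, 6):.3%}")
print(f"15x SSD RAID 5: {raid5_loss(SSD_AFR, 15):.3%}")
```

Under these assumed rates, even a 15-drive SSD RAID 5 (~0.25%/year) comes out an order of magnitude safer than a 6-drive HDD RAID 5 (~3.3%/year), and two SSDs in RAID 0 (~1%) beat two HDDs in RAID 0 (~9.8%) by a similar margin.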
 

gwolfman

Distinguished
Jan 31, 2007

True. But putting these drives in RAID 5 or 6 would result in more writes, which current SSDs are not good at, so that would probably make them worse than they already are (for writing, that is).
 

rockyjohn

Distinguished


Does that mean that two SSDs in RAID 0 would provide better data security - at least with respect to drive failure - than two HDs in RAID 1?

That would be great. Then I could put all my software and basic data on the SSDs and use one large HD to do a weekly backup of the basic data and to store files that don't need to be backed up - e.g. audio and video. Now all I need is for the prices to drop to about half of where they are.
 

Egregious

Distinguished
Aug 14, 2007
I would actually be interested to see if one of these SSD drives would work well in a RAID 1 with a normal drive, so that the system could take advantage of the performance benefits of each drive in a server environment. RAID 1 usually needs drives of similar performance, but I'm just curious.
 

Sq7

Distinguished
Aug 11, 2006
Well I'm disappointed. Very, very disappointed.
I don't like ranting and raving, but hear me out on this, please. One drive costs, say, $500 (I don't recall the exact prices and I'm too lazy to check). SanDisk was nice enough to send two drives to test in a RAID configuration. It was, however, already mentioned in this discussion that solid state drives are very scalable as far as the number of flash chips on a single drive is concerned, and that performance is directly affected by the number of chips running in parallel. So why not just double the number of chips on a single drive rather than limiting them and requiring the consumer to buy several drives to get RAID performance?

It is a blatant bid to get people to upgrade one small step at a time again, as has been the case with many other types of hardware in the past. The only difference is that the technology is so much simpler. Forget about capacity; I am interested in performance. I was expecting more. I have no doubt in my mind that they are starting at the very minimum of what is possible right now - just good enough to blow conventional hard drives out of the water. And I don't like it. I don't like being patronized into spending my money in such a way. All it will lead to is me waiting for a decent offering to come along. I will never buy a product I don't believe in.

To the manufacturers: get off your profit-hungry behinds and give us what we deserve. Spend money on good, honest R&D to scale the technology by what you are capable of providing, not by what people might be willing to put up with. Flash memory is well advanced and cheap at this stage. The proof should be in how well you use it. And with improvements to the actual flash memory technology in the future, so much the better.
Cheapskates
 

gwolfman

Distinguished
Jan 31, 2007

Very interesting, I wonder how that would work. It'd be nice to see the results though. :)
 

mr_fnord

Distinguished
Dec 20, 2005


These flash chips are 2GB apiece. I think that's as state-of-the-art as it gets right now. I don't think it would be cost-effective for them to use 256 128MB chips, and the largest-capacity chips will be the newest tech with the fastest read and write times, so using more lower-capacity chips would not scale the same.

They could scale up to 64, 128, etc. GB with more chips, but if people aren't going to buy a $500 HD replacement, they probably won't buy a $1-2K HD replacement either.
 

mr_fnord

Distinguished
Dec 20, 2005


Most RAID algorithms assume identical hardware: write a block to each drive, get the return results, write another block to each drive, and so on. Mismatched hardware would get you 2x the slowest drive at best, since the controller would be waiting on the slow drive all of the time.
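In other words (a trivial sketch, with the drive speeds assumed for illustration):

```python
# Striping waits on whichever drive finishes last, so a mismatched pair
# runs at roughly 2x the SLOWER drive. Speeds are assumed, not measured.

def raid0_pair_mb_s(drive_a_mb_s, drive_b_mb_s):
    return 2 * min(drive_a_mb_s, drive_b_mb_s)

print(raid0_pair_mb_s(70, 70))    # matched HDDs: 140 MB/s
print(raid0_pair_mb_s(70, 120))   # HDD + SSD:    still 140 MB/s, SSD wasted
```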

Before NCQ was integrated into HDDs, RAID controllers used to perform similar functions, and some high-end RAID setups used HDDs with synchronized spindles, so that HD1 and HD2 were always reading or writing the same block with no seek differential. Software RAID and new low-cost RAID controllers ignore all of those mechanical optimizations, and that's part of the reason a PERC or other high-end RAID controller will get greater performance from the same drives.

Different algorithms might be able to take advantage of an HD/SSD combo, like Vista's ReadyBoost. Also, the hybrid HD/SSDs that have been predicted in tech news posts could be very high performance with the right algorithm. If you did have a 4GB SSD and a 100GB HD in one device, there would be a learning curve as the algorithm figured out which sectors were high-read/low-write, so it would be hard to benchmark, too.
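That learning curve could look something like this toy Python sketch; the promotion threshold and the tiny flash size are invented for the example:

```python
# Toy hybrid HD/SSD policy: count reads and writes per sector, and promote
# sectors that are read-hot but write-cold to the small flash portion.
from collections import defaultdict

FLASH_SECTORS = 4   # stand-in for the small SSD portion
PROMOTE_AFTER = 3   # assumed: 3 reads with no writes -> promote

reads = defaultdict(int)
writes = defaultdict(int)
in_flash = set()

def access(sector, is_write):
    if is_write:
        writes[sector] += 1
        in_flash.discard(sector)  # write-hot data stays on the HD
    else:
        reads[sector] += 1
        if (reads[sector] >= PROMOTE_AFTER and writes[sector] == 0
                and len(in_flash) < FLASH_SECTORS):
            in_flash.add(sector)  # promote read-hot, write-cold sector

# Simulated workload: sector 7 is read-hot (OS files), 9 is write-heavy.
for _ in range(5):
    access(7, is_write=False)
    access(9, is_write=True)

print(sorted(in_flash))  # -> [7]: only the read-hot sector was promoted
```

Until enough accesses have been counted, everything sits on the slow HD side, which is why early benchmark runs on such a device would understate its steady-state performance.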

Of course, the hybrid devices might never come out, and SSD might just become the HD replacement in a couple of years.