Micron RealSSD P320h Review: A PCIe Drive Capable Of 3.2 GB/s

[citation][nom]rdc85[/nom]It's using SLC and geared toward the enterprise market... IMO the price is understandable...[/citation]
I run Enterprise SQL -- it's not 'reasonably' priced.

The reason: there's no (zero) redundancy, so at a minimum you need two for RAID 1 = $20/GB, three for RAID 5 = $30/GB, or better (faster), four for RAID 10 = $40/GB.

Therefore, I can have a 'bunch' of RAID 10 SSDs with similar (slower, no doubt) R/W IOPS for a lot less money. Additionally, cost must be taken into consideration when 'substituting' an ultra-fast PCIe SSD for an additional/parallel server. A trick most SQL folks use is a RAM drive (no redundancy), or better, what I do: use a RAM drive for the TEMP file.

Redundancy is crucial for most enterprise applications, but I have no doubt there's a market for these 3GB/s PCIe SSDs. Imagine a 4-6 hour (overnight) batch run with a drive failure: a 'single'-drive (SSD) failure costs money, and a full-day outage is a disaster! Therefore redundancy is a must-have, with no exceptions, for mission-critical data.
 

zakaron

Distinguished
Nov 7, 2011
105
0
18,680
[citation][nom]hytecgowthaman[/nom]Wow, what speed!!! If I used this for the OS, how many seconds would it take to get to the logon screen? But cost = high performance.[/citation]

Honestly, it won't be any faster than an off-the-shelf $150 consumer SSD. Like the article stresses, this device is engineered for a specific target base and really shines when put under heavy load. That high-queue-depth read performance is where this device makes its money. That and the endurance/reliability factor.
 

hapkido

Distinguished
Oct 14, 2011
1,067
0
19,460


Not that I'm disagreeing with your post, but RAID 10 pricing would be the same as RAID 1, $20/GB; it would just require buying at least 4 drives. And doesn't RAID 5 use only one drive's worth of capacity for parity?

Where x = $/GB of one drive ($10/GB in this case) and n = the number of drives in the array...
(n * x) / (n - 1) = $ / GB

So if you bought drives for RAID 5...

3 drives = $15 / GB
4 drives = $13.33 / GB
5 drives = $12.50 / GB
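
For anyone who wants to check the arithmetic, here's a minimal Python sketch of the cost-per-usable-GB math. It's a generic illustration using the $10/GB figure quoted in this thread; the function name and structure are just for this example, not anything from the review:

def cost_per_usable_gb(raid_level, drives, dollars_per_gb=10.0):
    # Cost per usable GB, assuming identical drives priced at dollars_per_gb each.
    if raid_level in (1, 10):                  # mirroring: half the raw capacity is usable
        usable_fraction = 0.5
    elif raid_level == 5:                      # one drive's worth of capacity goes to parity
        usable_fraction = (drives - 1) / drives
    else:
        raise ValueError("unsupported RAID level in this sketch")
    return dollars_per_gb / usable_fraction

print(cost_per_usable_gb(5, 3))    # 15.0    -> $15/GB
print(cost_per_usable_gb(5, 4))    # ~13.33  -> $13.33/GB
print(cost_per_usable_gb(5, 5))    # 12.5    -> $12.50/GB
print(cost_per_usable_gb(10, 4))   # 20.0    -> same $/GB as a two-drive RAID 1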
 
Jaquith skimmed through the article. This has RAID 5-level redundancy built into it (7+1), so there's no need for anything resembling RAID 1/10, and actually I HIGHLY doubt that would work very well.

You put one of these into a front-end application / back-end database server, or go really wild and use it inside a vCenter solution. One server might have trouble generating 256 simultaneous requests, but ten to sixteen could easily keep this card busy. You then maintain a shadow copy of everything on your enterprise storage fabric just in case the card has issues (we are talking multimillion-USD enterprise implementations here, not someone's Photoshop box). Virtualization and vMotion allow you to have zero downtime when/if something overly bad happens. Gotta love the wonders of modern enterprise implementations.
 

ychen33

Honorable
Oct 2, 2012
1
0
10,510
The way over-provisioning is calculated seems inconsistent between Micron and the rest. I am always confused about which one is correct. Still, this is a wonderful review that packs in a lot of information.
 
[citation][nom]jaquith[/nom]I run Enterprise SQL -- it's not 'reasonably' priced. The reason: there's no (zero) redundancy, so at a minimum you need two for RAID 1 = $20/GB, three for RAID 5 = $30/GB, or better (faster), four for RAID 10 = $40/GB. Therefore, I can have a 'bunch' of RAID 10 SSDs with similar (slower, no doubt) R/W IOPS for a lot less money. Additionally, cost must be taken into consideration when 'substituting' an ultra-fast PCIe SSD for an additional/parallel server. A trick most SQL folks use is a RAM drive (no redundancy), or better, what I do: use a RAM drive for the TEMP file. Redundancy is crucial for most enterprise applications, but I have no doubt there's a market for these 3GB/s PCIe SSDs. Imagine a 4-6 hour (overnight) batch run with a drive failure: a 'single'-drive (SSD) failure costs money, and a full-day outage is a disaster! Therefore redundancy is a must-have, with no exceptions, for mission-critical data.[/citation]

I'm pretty sure that RAID 5 with three of them would be about $15/GB and RAID 10 with four of them would be $20/GB if one of them is $10/GB. RAID 10 with four drives has the capacity of two drives, and RAID 5 with three drives also has the capacity of two drives, IIRC.
 


Why run consumer benchmarks on an enterprise SSD? That'd be like running gaming benchmarks on a Quadro, i.e., completely not what it's meant for.
 


I highly doubt you can "RAID" them, at least not in the traditional manner.

When we discuss drives and "RAID" we need to look at the storage topology. Typically it is:

Host -> HBA -> Disks (SAS / FCAL / SCSI)
Host -> Disks (SATA / IDE)

When we run disks in a fault-tolerant configuration it's done at the HBA level, or, with no HBA, at the host level (fakeRAID). This card is not a disk, nor is it a set of disks; it's a custom HBA with eight SSDs hard-wired to it via 32 channels. That is how they get their performance: by running a custom chip with its own firmware (a miniature OS) and directly accessing the storage banks rather than using an industry-standard disk communication protocol.

The topology of the card is:

PCIe -> Custom HBA -> 8 x SLC arrays (4 channels per array) (no interface chip, just the SLC chips wired directly to the HBA).

The firmware on the HBA runs the SLC arrays in its own fault-tolerant mode where parity is distributed amongst the arrays in a RAID 5 7+1 configuration. This is done transparently to the OS.

Now, seeing that, you can't use an HBA-based hardware RAID because the card has its own HBA. Software-based RAID would be highly undesirable, as the card has its own software doing parity distribution and data mapping; attempting to preempt this would cause performance degradation. When we run multi-level (10/50/51/60/61) RAID, it's with the storage solution knowing exactly what's going on. Trying to trick or outsmart the custom controller's firmware would be a ~very~ bad idea.

The only solution for providing another layer of redundancy is to do it at the file-system level rather than at the binary block/disk level. Something like ZFS would allow you another layer of redundancy (across two or more cards) without interfering with the cards' custom HBAs.
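
As a rough illustration of what a 7+1 rotating-parity layout looks like, here's a generic RAID 5 layout sketch in Python. This is a textbook parity rotation, not Micron's actual firmware mapping (which isn't documented at this level), so treat it purely as a conceptual aid:

MEMBERS = 8  # eight SLC arrays behind the custom HBA: 7 data chunks + 1 parity per stripe

def stripe_layout(stripe_number):
    # Return the role of each of the 8 members for one stripe: 'D0'..'D6' or 'P'.
    parity_member = stripe_number % MEMBERS  # rotate the parity chunk across members
    roles = []
    data_index = 0
    for member in range(MEMBERS):
        if member == parity_member:
            roles.append("P")                # P = XOR of the stripe's 7 data chunks
        else:
            roles.append("D%d" % data_index)
            data_index += 1
    return roles

for s in range(4):
    print("stripe", s, stripe_layout(s))
# The host OS never sees any of this rotation; the card presents one block device
# and the firmware handles parity placement and reconstruction internally.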
 


For software RAID, wouldn't you avoid interfering with the hardware RAID of the PCIe disks? I don't think that it'd interfere, nor be anything like overriding the custom HBA in any way, granted I don't claim to be an expert on that. I'd think that since the custom HBA does its job transparently to the OS, the OS does not affect it. You'd simply be writing/reading data normally as far as the custom HBA is aware, at least that's what I get from this. If this would cause issues, then I'd think that even using the drive as anything other than a single file system (i.e. multiple partitions) would cause problems too.

I realize that hardware RAID between multiple such PCIe drives is probably impossible, but software RAID doesn't seem like an issue.
 



That's why I mentioned the different levels of access. The custom firmware dynamically assigns blocks as part of its wear-leveling algorithm while maintaining the RAID 5 parity structure. Trying to do a "software RAID" on top of that would have the software driver trying to assign blocks of data, something the HBA already wants to do. Very bad things happen when you start doing that. If there were a way for the software disk driver to "see" the actual block structure of the disks under the HBA then it would be possible; otherwise, with the physical structure being obfuscated, it's not safe. You run into this problem with SANs: the Storage Processor (SP) directly controls the disk arrays and carves out LUNs to be assigned to hosts. If I were to create four 40GB LUNs and map them to the same host, then attempt to have that host create a RAID 5 out of them, it would create some severe performance and reliability issues. Instead, the better idea would be to create a single 160GB RAID 5 LUN and assign that to the host for use.

This comes from the fact that data storage is a layered system. Data isn't just barfed onto a disk; it's first organized, cataloged, and subdivided into a logical structure. The disks themselves are not just raw storage devices: they're first organized into logical structures (formatted, high- and low-level), then the file-system structure is laid down, then the data and metadata are laid on top of that. Trying to have multiple independent redundancy mechanisms at the same layer tends to cause conflicts and problems. This is why ZFS is so important: it incorporates redundancy into the file-system layer rather than at the block layer. "Software RAID" wouldn't be a good idea on two of these PCIe devices, but ZFS would work just fine.
 


It's my understanding that SSDs literally do just throw data all over, transparently to the OS (unlike hard drives), because of wear-leveling, and that the OS is completely unaware of where data is actually stored; only the controller knows where any given data is located on the flash chips. It's also my understanding that software RAID (at least done through Windows) is done at a file-system level, although I'm less sure of that than my first statement in this post.

Maybe I've got it all wrong, but I'd think that SSDs wouldn't run into any issues like hard drives would, because their incredibly low access times would minimize any issues caused by the increased randomness in where data is located. SSDs are a whole other animal from hard drives when it comes to that, and my own minor tests with partitioning and software RAID on SSDs and hard drives have shown that well enough (divide a hard drive into four partitions and then put them all in software RAID 5, compare to the same done on an SSD, and other stuff like that; my SSDs have handled it very well whereas my hard drives slow to a crawl).

I'm not disputing that you shouldn't need any RAID on these PCIe SSDs because they already have it built in; I'm just saying that it should be possible without heavy performance losses, AFAIK. As far as the computer is aware, it should treat software RAID as a level above the HBA's hardware RAID, not as the same level. I'll give something a try later where I'll take two hardware RAID 1 volumes and put them in a software RAID 0, if I can scrounge up enough spare hard drives and SSDs to compare how they handle it. Would that be at least a decent reproduction and experiment for how these PCIe SSDs would function in software RAID?
 
It's my understanding that SSDs literally do just throw data all over, transparently to the OS (unlike hard drives), because of wear-leveling, and that the OS is completely unaware of where data is actually stored; only the controller knows where any given data is located on the flash chips. It's also my understanding that software RAID (at least done through Windows) is done at a file-system level, although I'm less sure of that than my first statement in this post.

SSDs have their own wear-leveling algorithm, but they're not trying to generate and distribute parity data.

And no, "software RAID" is block level raid just like hardware RAID is. The only two differences are instead of a dedicated XOR co-processor calculating the parity data your CPU does it, and instead of DMA having the data go straight from memory to HBA to disk it has to stop inside the CPU to be directed.

Data is treated like this:

File Name / Tree Structure (name and hierarchical organization)
Data + Meta Data (byte data and locational / access times / other data)
Block Data (logical blocks that hold Data + Meta Data)
Physical Data (actual CHS location of Block Data)

HBAs do RAID at the block level by distributing the blocks over various physical disks and rotating the parity data. Software RAID (fakeRAID) does the exact same thing by simulating an HBA in software.
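
The parity math both of those compute at the block level is just a byte-wise XOR across the data chunks of a stripe. A small, hypothetical Python sketch (nothing vendor-specific, just the generic RAID 5 idea):

def xor_parity(chunks):
    # Byte-wise XOR across equal-length chunks; this is the RAID 5 parity calculation.
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            parity[i] ^= byte
    return bytes(parity)

data = [b"AAAA", b"BBBB", b"CCCC"]            # three data chunks of one stripe
parity = xor_parity(data)

# If one member is lost, XOR the survivors with the parity chunk to rebuild it.
rebuilt = xor_parity([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt chunk:", rebuilt)              # b'BBBB'

Whether a dedicated XOR engine on an HBA or the host CPU runs this loop, the result is identical; the concern raised above is two independent layers doing it without knowing about each other.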

ZFS does redundancy by distributing the data/metadata amongst different blocks and rotating parity information around (it ensures the blocks are on different physical devices). What the physical devices do with those blocks is of no concern to ZFS, provided they're always made available when requested.

You keep confusing your home SSDs with this card; don't do that. SSDs don't do block-level parity distribution, this card does. You'd end up encoding and rotating parity data twice (once in software and once again inside these devices). It wouldn't be bad if the two parity-generation mechanisms could talk to each other or knew of each other's existence; otherwise, bad juju.

Remember, the purpose of RAID is to create redundancy, to provide greater protection and lower recovery time. Introducing that kind of unknown parity mechanism creates the potential for data corruption and longer recovery times, combined with performance issues. It's one of those rules that you never break when planning an enterprise data plan, kind of like "don't cross the streams".

If you had a method of disabling these devices' built-in RAID 5 you could, in theory, RAID them together. It wouldn't be preferable, as you'd be relying on the HBAs to do block distribution, but at least you'd no longer have to worry about conflicting parity distribution. Honestly, the best thing to do with devices like these is not to try to reinvent the wheel or outsmart them. Let them do their job, and if you need/want data protection then arrange for a shadow copy to be kept on slower/cheaper Tier II/III storage.
 
I'll give something a try later where I'll take two hardware RAID 1 volumes and put them in a software RAID 0, if I can scrounge up enough spare hard drives and SSDs to compare how they handle it. Would that be at least a decent reproduction and experiment for how these PCIe SSDs would function in software RAID?

Not really, no; even though you're creating two distributed block levels, there is no parity data. The best method would be to build two RAID 5s and attempt to RAID 1/5 them.

I've dealt with systems that attempted to do this. RAID 5 LUNs that were then grouped into another RAID by the host caused nothing but issues. They ended up with lots of wasted space/performance and less redundancy than if they had just used a larger RAID array to start with.
 

Rabin Pro

Honorable
Apr 8, 2013
36
0
10,530
There should be some kind of lawsuit against companies who make new and innovative technology so expensive that they actually slow down the whole development process. I bet it doesn't cost more than 100 bucks to manufacture this thing. They aren't aware of the fact that if they sold millions of these units, they would make more profit and goodwill than by putting an impossible price tag on them. Losers
 

lucxxxx

Honorable
Jul 16, 2013
1
0
10,510
Can someone please check the pie charts? Can't you do simple math? 0.78x is shown as less than 3/4 of the circle. Please give us a break and stop the insults.
 