Samsung's 24-SSD RAID Experiment

Status
Not open for further replies.

PhoenixBR

Distinguished
Mar 10, 2009
1
0
18,510
We only need two Micron SSDs to get equal performance, and three to surpass it.

"TG Daily - 26/11/2008
Chicago (IL) – Chip manufacturer Micron has demonstrated what is, at least to our knowledge, the fastest solid state disk drive (SSD) shown so far. A demo unit in a blurry YouTube video was hitting data transfer rates of 800 MB/s and can apparently scale to about 1 GB/s. The I/O performance is about twice the best we have seen to date."
 

spazoid

Distinguished
Mar 10, 2009
34
0
18,530
All the info the article states as lacking is at the end of the video. Excessive use of the pause button will reveal that they used an Areca card, an Adaptec card, and the onboard controller(s) to achieve a total bandwidth of 2000+ MB/s.

All other info you might want about the setup is also there.
 

Themurph

Distinguished
Feb 6, 2009
5
0
18,510
@spazoid Good catch, Spazoid! I didn't even see this bit after the video's little celebration.

They're still running quite a strange RAID setup, though: using two controllers plus onboard motherboard ports to, what, create one giant RAID across all the drives? Surely there has to be some performance loss from splitting the drive connections up as they do.

Also, -15 points for the "pause to see how we did it" deal. Ugh.
 

nihility

Distinguished
Dec 16, 2006
41
0
18,530
Watch the video to the end, they tell you exactly which RAID cards they used.

They say they had 10 drives hooked up to a 24-port card, another 8 hooked up to an 8-port SATA card, and another 6 plugged into the motherboard.

They also state that with all 24 SSDs hooked up to one card they were hitting a serious bottleneck, so they used the aforementioned setup instead.
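The arithmetic behind that bottleneck is easy to sketch. A back-of-envelope in shell, assuming the ballpark figures quoted elsewhere in this thread (~200 MB/s per SSD, ~2000 MB/s usable per x8 slot):

```shell
#!/bin/sh
# Back-of-envelope: why 24 SSDs on one card bottleneck, but a
# 10/8/6 split across three links need not. Both numbers are
# assumptions taken from this thread, not measurements.
PER_SSD=200   # MB/s per drive (assumed)
SLOT=2000     # MB/s per x8 PCI-E link (theoretical)

echo "one card, 24 drives: demand $((24 * PER_SSD)) MB/s vs cap ${SLOT} MB/s"

total=0
for n in 10 8 6; do
  demand=$((n * PER_SSD))
  # each group is served by its own link, so cap each group separately
  [ "$demand" -gt "$SLOT" ] && demand=$SLOT
  total=$((total + demand))
done
echo "split 10/8/6 across three links: ${total} MB/s aggregate"
```

With the split, only the 10-drive group even reaches its link's cap, so the aggregate is no longer limited by a single slot.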

The video is pretty awesome IMHO. When they opened up 54 programs in a bit over 10 seconds it blew my mind.
 

hellwig

Distinguished
May 29, 2008
1,743
0
19,860
[citation][nom]MasonStorm[/nom]How does one set up a RAID array spanning three different controllers?[/citation]
The right software will RAID any hard drives connected to the system, regardless of controller or even interface. I agree with the article that it was probably RAID 0; any sort of parity calculation dependent on the CPU would have greatly reduced their throughput.
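On Linux, for instance, the md driver will happily stripe across devices that sit on different controllers. A minimal sketch with mdadm (the device names are hypothetical, and creating the array destroys any data on those disks):

```shell
#!/bin/sh
# Minimal software RAID 0 sketch with Linux mdadm (run as root).
# /dev/sdb, /dev/sdc, /dev/sdd are hypothetical drives that may sit
# on three different controllers -- md does not care which HBA owns them.
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mkfs.ext4 /dev/md0          # put a filesystem on the striped array
mkdir -p /mnt/bigraid
mount /dev/md0 /mnt/bigraid
```

Booting from such an array is the hard part, as hellwig notes: it typically requires bootloader support for md plus an initramfs that assembles the array before the root filesystem is mounted.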
 

MasonStorm

Distinguished
Oct 28, 2008
4
0
18,510
Hi hellwig,

What would be some examples of such software, and are there any that would allow such a created array to be used as a boot drive?
 

MRFS

Distinguished
Dec 13, 2008
1,333
0
19,360
> They say they had 10 drives hooked up to a 24-port card, another 8 hooked up to an 8-port SATA card, and another 6 plugged into the motherboard.

So, the ceiling was not dictated by a single x8 slot (2 GB/s),
but by the PCI-E lane assignments made by the BIOS and the chipset.

What do we get if we go "SLI/CrossFire-style" with 2 x RAID controllers,
each using x8 PCI-E lanes, or preferably x8 PCI-E 2.0 lanes?

Highpoint's RAID controllers can be "teamed" in that fashion.

Do we then run into the same ceiling, or not?

Inquiring minds would now like to know.


MRFS
 

hellwig

Distinguished
May 29, 2008
1,743
0
19,860
[citation][nom]MasonStorm[/nom]Hi hellwig,What would be some examples of such software, and are there any that would allow such a created array to be used as a boot drive?[/citation]
I doubt you could boot off such an array; it's a purely software implementation, meaning something has to be running the software.

That said, I don't have specific examples (never done it myself). Many OSes can implement RAID on their own: http://en.wikipedia.org/wiki/Redundant_array_of_independent_disks#Implementations

This article here on Toms tells how to setup RAID 0 or 1 in Windows XP: http://www.tomshardware.com/reviews/raid-additional-hardware,363.html

This guy claims to be able to hack Windows XP into doing RAID 5: http://www.jonfleck.com/2009/02/24/low-cost-and-reliable-network-attatched-software-jbod-raid-0-1-or-5/#more-934

I'm sure there are third-party apps out there that implement this as well, but I wouldn't know where to look.
 

mapesdhs

Distinguished
MasonStorm writes:
> How does one set up a RAID array spanning three different controllers?

For hw RAID I guess it depends on the cards and management sw.

For RAID0, it's easy to do this on certain systems, eg. under
IRIX, using 3 x QLA12160 (6 disks per channel, SCSI controller
IDs 2/3, 8/9, 10/11), optimised for uncompressed HD, it would be:

Code:
  diskalign -n video -r8294400 -a16k '/dev/dsk/dks[p0,2,8,10,3,9,11]d[8-13]s7' | tee xlv.script
  xlv_make < xlv.script
  mkfs -b size=16384 /dev/xlv/video
  mkdir /video
  mount /dev/xlv/video /video

(I hope the text formatting works for the above)

That gets me 511MB/sec sequential read using a bunch of old/slow
Seagate 10K 73s, on an Octane system more than a decade old. With
modern SCSI disks, I get the same speed with just a couple of
drives per channel.

I should imagine Linux and other UNIX variants have similar
sw tools, but I don't think Windows offers the same degree of
control.

Ian.

 

fiskfisk33

Distinguished
Mar 10, 2009
7
0
18,510
Why are you guessing what they used?
If you watch the vid in HD and pause at the end, you can read it perfectly :p

They had:
10 drives connected to an 'Areca 1680ix-24',
8 to an 'Adaptec 5 series',
and 4 directly to the mobo.
 

graviongr

Distinguished
Jan 21, 2008
40
0
18,530
I also read all the pause screens; they also say they disabled all optical drives. So you can have a super-fast system, but you can't watch a DVD, lol.

Pointless.
 

MRFS

Distinguished
Dec 13, 2008
1,333
0
19,360
> they disabled all optical drives

I use that chassis: they let the SSDs "all hang out",
so there was no need to install them in 24 x 2.5" drive bays.

It could be done, however, with 4-in-1 enclosures
like the QuadraPack Q14, and this Athena unit I
recently purchased from Newegg:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816119006

6 x 5.25" bays @ 4 x SSDs each = 24 SSDs total

That Thermaltake Armor chassis has 11 x 5.25" drive bays:

http://www.newegg.com/Product/Product.aspx?Item=N82E16811133021
(see photos)


MRFS

 

ossie

Distinguished
Aug 21, 2008
335
0
18,780
Kind of counterproductive to use 2 expensive HW RAID controllers plus southbridge SATA for a big array.
A better solution, for higher performance and lower cost, would have been 3 x PCIe x8 8-port SAS HBAs with SW RAID.
 

MRFS

Distinguished
Dec 13, 2008
1,333
0
19,360
Yes! 3 x x8 PCI-E 2.0 8-port SAS HBAs.

http://www.highpoint-tech.com/USA/rr4320.htm

The RocketRAID 4300 series is a PCI-Express x8 SAS RAID controller that supports up to 8 SAS/SATA hard drives with the industry's fastest Intel XScale IOP348 processor at 1.2 GHz. The RocketRAID 4300 series is efficiently maximized with HighPoint's 2nd Gen. TerabyteArchitecture™ to deliver the highest sustained transfer rates, with over 1004 MB/s read and 913 MB/s write, and maximum data protection. With a PCI-Express x8 interface, the RocketRAID 4300 series offers the maximum bandwidth for the highest data transfers.


http://www.highpoint-tech.com/PDF/RR4320/RocketRAID4300_Datasheet.pdf

Multiple card support within a system for large storage requirements


[end quote]

MRFS
 

dare jang

Distinguished
Mar 11, 2009
2
0
18,510
Finally we've got some people who will toy with SSDs and have enough of them.
I read somewhere that an SSD does around 200 MB/s; that's a theoretical 4800 MB/s for all 24 SSDs.
PCI-E x8 is about 2000 MB/s, if I understand correctly.
So what I see is a bottleneck from the SSDs to the system.
Can anyone here suggest an insane interface to the system? I'm guessing some insanely priced server board; I don't know how SANs and the like connect to servers.

The question, then: what is the optimal way to get the SSDs' power into the system?
Does anyone have access to this kind of hardware?
Finally: can anyone "tube" a video of their tests?

Let the nerds have fun; let's get some super tests of SSDs and drool.
It will help future enthusiasts build their systems.

Finally, an apology to those who read this far: I'm Danish, hence my BAD ENGLISH.

Current system:

Core 2 Duo 3 GHz @ 3.21 GHz
2 GB Corsair DDR2-6400 @ 961 MHz
Asus Maximus Formula
7800 GTX (overclocked too)
4 x old Samsung 250 GB in various RAID modes
 

MRFS

Distinguished
Dec 13, 2008
1,333
0
19,360
> Question is:
> What is the optimal way to get the SSDs' power into the system?


We're just using a standard AT-style PSU from Antec,
in a chassis that holds 2 x PSUs i.e. Cooler Master HAF-932.

If you shop around, there are several manufacturers
who build cases with room for 2 power supplies,
e.g. Lian-Li, Silverstone, Antec, Cooler Master, NZXT.

Those older AT-style PSUs have their own power switch,
so it's very easy to leave the SSD subsystem powered up
even if the motherboard is shut down.


> pci 8x is like 2000mb/sec if i understand correct.

Not quite: "PCI" is the old standard of 32 bits @ 33 MHz
= ~133 MB/second (8 bits per byte).

It stands for Peripheral Component Interconnect.

"PCI-E" is the correct abbreviation for PCI-Express --
the latest standard.

Each PCI-Express 1.0 lane runs at 2.5 Gb/s in each direction.
The serial link uses 8b/10b encoding, so every data byte is
carried as 10 bits on the wire; that works out to 250 MB/s
per lane in each direction. (It's the 8b/10b line code, not
serial start/stop bits, that accounts for the 10 bits per byte.)

So, yes, an x8 PCI-Express slot should support reads at
250 MB/s x 8 = 2,000 MB/s (theoretical bandwidth,
no overhead anywhere). Call it MAX HEADROOM :)
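The same lane math in a couple of lines of shell, using the PCI-E 1.0 figures above (2.5 Gb/s per lane, 10 wire bits per data byte under 8b/10b encoding):

```shell
#!/bin/sh
# PCI-E 1.0 lane math: 2.5 Gb/s per lane per direction; 8b/10b
# encoding puts 10 bits on the wire for every data byte.
lane_mbit=2500                  # Mb/s per lane, per direction
per_lane=$((lane_mbit / 10))    # 250 MB/s per lane after encoding overhead
x8=$((per_lane * 8))            # 2000 MB/s theoretical for an x8 slot
echo "x1: ${per_lane} MB/s, x8: ${x8} MB/s"
```

PCI-E 2.0 doubles the signaling rate to 5 Gb/s per lane, which is why MRFS's suggestion of x8 PCI-E 2.0 slots would double this headroom.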

MRFS

 