Up To 8TB Of SSDs In RAID 0: Highpoint Ultimate Acceleration Drive SSD7100, First Look

Hmm, I wonder what the benchmarks would be for a similar NVMe array on the native NVMe of TR or Epyc.

I suspect the results would be similar.

It all sounds suss. They imply they are avoiding the limitations of PCIe 3, yet they are using PCIe 3 hardware/interfaces/devices: Samsung 960 Pros.

I think all Intel is doing is remedying a deficiency in its native NVMe ports. The slide you show of the Intel PCH is a disgrace to Intel - what lies. To pretend to have all those ports available, yet they haven't a hope of working concurrently. Either of the two NVMe ports they claim would completely max out the four lanes on the PCH. It's underhanded deception.

It also bears noting this card isn't free, whereas a canny AMD X399 mobo shopper could get at least three full-strength four-lane NVMe ports included - gleaned from the few mobos I have seen.
 


Please justify your pricing thoughts.

If you have need of 8TB of SSDs in RAID 0, and have the relevant backup situation required for RAID 0...the difference between $200 and $400 for this interface card is not even a flyspeck.

The $200 that you think it is overpriced by is less than a single day's pay for the tech who will install and manage this.
Trivial.
 


If you believe it is overpriced -- THEN DON'T BUY ONE. If there are enough people of the same mindset, then the vendor will either have to lower the price or fail as a company. The vendor has done market research and believes it can MAXIMIZE its profits at the price it has set.
 


OK...so what?
(even if that were actually true)

Their marketing dept has put a "retail" price of $400 on it.
As said, that theoretical $400 MSRP means exactly squat.

You don't just 'buy this' and slam it into your uber gaming PC along with another $2,000 of SSDs.

Your consultant simply provides you with a solution that meets your data and performance needs, of which this thing is a part. Probably not even an individual line item on the invoice.
 


Eventually, concepts like this filter down to the consumer level.
Just like NVMe
Just like SSD
Just like HDD
Just like 64bit/32bit/16bit....

That this particular thing is $400 is meaningless.


Lastly, I invite you to describe your particular use case that justifies 4 x 256GB NVMe drives in a RAID array.
Actual numbers (not theoretical benchmarks) with and without would be helpful...

The real-world jump from floppy drive to HDD was huge: speed AND size.
The real-world jump from HDD to SSD was huge, in terms of speed.
The real-world jump from SSD to NVMe was big, but not as large as before.
SSD + RAID 0? Useless in the vast majority of actual use cases.
NVMe + RAID 0? So far, useless in the vast majority of actual use cases.
Optane? A rehash of IRST, just with faster cache tech.

Now...if you're a movie production house in early development of Cars 5 or Toy Story 8...this may be useful to you now.

Otherwise...it is simply "nice to know about".
 
"can't boot from the software array"

Oh, gosh. One OS, one limit, and one world. I knew it. All this because you want to make Windows somehow faster and extend the time between reformats.
 


I guess you missed the part where I said exactly the same thing.

Whatever...I have a deck to go finish deconstructing before it rains again.
 
RAID 0 is so risky though, even more so than a single drive because you're increasing the potential for failure over multiple drives. Make frequent backups.
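
To put a rough number on that (a back-of-the-envelope sketch, assuming independent drive failures and an illustrative 3% annual failure rate per drive; real failures are often correlated, so treat it as optimistic): an N-drive stripe fails if any member fails.

```python
# Chance that an N-drive RAID 0 array loses data in a year:
# the stripe fails if ANY member fails, so P = 1 - (1 - p)^N.
def raid0_annual_failure(p: float, n: int) -> float:
    """p: per-drive annual failure probability, n: drives in the stripe."""
    return 1 - (1 - p) ** n

print(raid0_annual_failure(0.03, 1))  # 0.030  -> single drive: 3%
print(raid0_annual_failure(0.03, 4))  # ~0.115 -> 4-drive stripe: ~11.5%
```

Four drives at a 3% annual failure rate nearly quadruple the yearly risk, which is why the backup caveat matters.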
 
It's really too bad that Microsoft has not developed an option to boot from a Windows software RAID, e.g. by initializing a USB thumb drive that informs the chipset that the OS is resident on that software RAID. With so many multi-core CPUs proliferating, one idle CPU core is a fine replacement for most hardware RAID HBAs.
 
NVMe RAID support

AMD has also announced a free update to the X399 platform – which Threadripper CPUs run on – which brings NVMe RAID support, allowing for incredibly fast data transfer speeds. This update includes NVMe RAID drivers, and supports bootable RAID 0, 1 and 10 modes for up to 10 devices.

The free update will be made available on September 25.
 
I'm making an educated guess here that AMD is possibly planning to support one or more add-in cards that have up to 4 x M.2 ports, e.g. the Highpoint SSD7101A-1 (there are other vendors with similar solutions). This would make more sense, in the short term, than requiring new X399 motherboards with 4 or more M.2 / U.2 ports. The whole idea of PCI-Express was to accommodate expansion cards.

 
Here's the email message I sent to Lisa Su, AMD CEO, back on 12/14/2016:

Greetings Lisa Su:

VERY GOOD SHOW!

Here's a suggestion for yet another way to beat Intel at their own game: OEM Highpoint's model 3840A NVMe RAID controller:

http://highpoint-tech.com/PDF/RR3800/RocketRAID_3840A_PR_16_08_09.pdf

I'm told by Patrick Kennedy at www.servethehome.com that Highpoint are looking for an OEM partner.

Here are the numbers on which Highpoint and we agree:

http://highpoint-tech.com/USA_new/CS-product_nvme.htm

"... a single RocketRAID 3840 RAID controller is capable of delivering
up to 15,760 MB/s of transfer performance!"

Our calculations:

1 x NVMe PCIe 3.0 lane = 8 GT/s / 8.125 bits per byte (128b/130b encoding) = 984.6 MB/sec

4 x NVMe PCIe 3.0 lanes = 4 x 984.6 MB/sec = 3,938.4 MB/sec

4 x NVMe SSD in RAID-0 = 4 x 3,938.4 = 15,753.6 MB/sec

As such, the latter raw bandwidth exceeds the raw bandwidth of DDR3-1600 DRAM:

1600 x 8 = 12,800 MB/sec.


Keep up the good work!
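
For anyone who wants to reproduce that arithmetic, here's a quick Python sketch (my own check, not Highpoint's; it assumes PCIe 3.0's 8 GT/s signalling with 128b/130b encoding and ignores protocol overhead above the line code):

```python
# Sanity check of the PCIe 3.0 bandwidth figures quoted above.
# PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding,
# i.e. 8 * 130/128 = 8.125 raw bits on the wire per payload byte.
RAW_RATE = 8e9                 # raw bits per second, per lane
BITS_PER_BYTE = 8 * 130 / 128  # = 8.125

lane_mb_s = RAW_RATE / BITS_PER_BYTE / 1e6
print(f"x1 lane:              {lane_mb_s:9,.1f} MB/s")       # ~984.6
print(f"x4 (one NVMe SSD):    {4 * lane_mb_s:9,.1f} MB/s")   # ~3,938.5
print(f"x16 (4 SSDs, RAID 0): {16 * lane_mb_s:9,.1f} MB/s")  # ~15,753.8

# Intel's DMI 3.0 uplink is x4-equivalent, so a single x4 NVMe SSD
# can already saturate everything hanging off the PCH.
```

The x16 figure lands within rounding of Highpoint's advertised 15,760 MB/s, and the x4 line shows why the DMI 3.0 complaint below is about a hard ceiling, not a tuning issue.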
 
A supplement to our earlier message to Dr. Su now follows:


It will be WONDERFUL if future AM4 chipsets will support 4 x U.2 ports at full speed, with ALL modern RAID modes and withOUT the upstream limits imposed by Intel's DMI 3.0 link.

...

I've been an Intel fan for many years. Nevertheless, the DMI 3.0 link on their latest chipsets is forcing a low ceiling on the max bandwidth of high-performance solid-state storage. I think AMD would do well to adopt a policy of embracing the NVMe standard and, at a minimum, begin to design AMD CPUs and matching chipsets that support a full x16 lanes to NVMe RAID controllers with x16 edge connectors. The main design reason is the symmetry that results from four U.2 ports @ x4 PCIe lanes (4 @ x4 = x16), and the advantages that come with support for all modern RAID levels. Remember also that PCIe 4.0's clock will oscillate at 16 GHz, and upcoming non-volatile DRAM solutions promise to require much higher upstream bandwidths.
 
Just found this:

http://www.overclock.net/t/1634463/thw-highpoint-ultimate-acceleration-drive-ssd7100-4tb-of-960-pro-ssds-in-raid-0/20

jsutter71 writes:

I have some awesome news regarding the bootability of the Highpoint SSD7101A. As I previously stated, Highpoint did not support booting from this device, but in an email to me said they were working on it. Then they released another RAID manager, the SSD7110, which did support this feature. The two devices are very similar in nature, with the exception that the SSD7110 also provides SAS/SATA ports. When they first released this device I attempted to replace the SSD7101A's drivers with the SSD7110's drivers in hopes that I could trick the device into supporting bootability. This did not work.

Well, the situation has now changed. I just downloaded the latest SSD7110 drivers, and after trying the same thing again I am pleased to report that the SSD7110 drivers which enable the bootability functions work with the SSD7101. Remember that Highpoint still states that the SSD7101 does not support this feature, and that the only two drives it supports are the 960 Evo and 960 Pro. I blew that out of the water from the start by using my SM951s.
 