Question: MB + CPU PCIe lane usage and RAID impact

I have an Adaptec 72405 RAID controller + 24x WD VelociRaptor WD1000DHTZ disks that I want to use as fast-access storage.

I have an Asus Z170-A motherboard + Intel i5-7600K + AMD R9 280X (old but good enough) setup.

This means I have 16 PCIe lanes to use and divide among an NVMe M.2 SSD and several add-on cards: sound, extra network, and another two Adaptec 6405E RAID cards (single PCIe x1 each).
__
QUESTION:
Is the RAID in this system affected by the limitation of the system only having 16 lanes IN TOTAL to divide between graphics and all the rest?

More lanes means:
1-- a new motherboard with the X299 chipset, like the Asus WS Pro or Asus WS Sage (ranging 250-400 euro)
2-- a new i7-7800X or better (= 28 lanes, around 300 euro) to start with... A 40+ lane CPU is too expensive; the cheapest one is the i7-9800X, which starts from about 500.
Should I get another motherboard & CPU with more lanes, to be able to use the 72405 + 24 VelociRaptor drives to their full potential?
So wanting more than 16 PCIe lanes (28 or 40) will cost me an extra 500 to 1000 euro?! Is it worth it?

No, AMD is not an option (also because I'm not familiar with them).
__
PS: One day I will get SSDs instead of HDDs, but for now the 24x 1TB HDDs I bought are still a lot cheaper than any SSD.

Thanks!
 
the 24x 1TB HDDs I bought are still a lot cheaper than any SSD.
I don't see how that's possible, even excluding the cost of the RAID controller you're using (is it really $400+?) or a case that holds 24 HDDs. Now you're talking about having to buy a new CPU/mobo/both to support this ridiculous array. A single 1TB SSD costs $120 and would have a MUCH lower failure rate and MUCH lower noise output compared to 24x 10,000 RPM WD Raptors....

The controller is PCIe 3.0 x8, so that's all you need. Your current setup will deliver that.
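
A quick back-of-the-envelope in Python (assuming roughly 985 MB/s of usable bandwidth per PCIe 3.0 lane and ~150 MB/s sequential per VelociRaptor; ballpark figures, not measurements):

Code:
# Back-of-the-envelope: can a PCIe 3.0 x8 slot feed 24 VelociRaptors?
PCIE3_LANE_MBPS = 985      # assumed usable bandwidth per PCIe 3.0 lane
SLOT_LANES = 8             # the Adaptec 72405 is a PCIe 3.0 x8 card
DRIVE_SEQ_MBPS = 150       # assumed sequential rate of one WD1000DHTZ
DRIVES = 24

slot_ceiling = PCIE3_LANE_MBPS * SLOT_LANES    # ~7880 MB/s
array_ceiling = DRIVE_SEQ_MBPS * DRIVES        # ~3600 MB/s

print(f"x8 slot ceiling : ~{slot_ceiling} MB/s")
print(f"24-drive ceiling: ~{array_ceiling} MB/s")
print("bottleneck:", "slot" if array_ceiling > slot_ceiling else "drives")

Even with all 24 drives streaming flat out, the drives run out long before the x8 slot does.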

What do you need this throughput for? How much throughput do you actually need? (and is it reads? writes? random? sequential?)

Have you looked at AMD Ryzen CPUs with their 24 PCIe 4.0 lanes (the bandwidth equivalent of 48 PCIe 3.0 lanes)? A Ryzen 3600 is $200.
 
... having to buy a new CPU/mobo/both to support this ridiculous array. A single 1TB SSD costs $120 and would have a MUCH lower failure rate and MUCH lower noise output compared to 24x 10,000 RPM WD Raptors....
The controller is PCIe 3.0 x8, so that's all you need. Your current setup will deliver that.
What do you need this throughput for? ....

Thanks for your reply
I'm sorry you call my array ridiculous; let's keep to the facts, please.
1- The WD1000DHTZ (1TB) has a 1.4 million hour MTBF vs. 2 million hours for the Samsung 850 SSD (1TB)
2- I got the 72405 controller + 24 drives for $500 US all in
3- The drives are in a well-ventilated separate box and inaudible
4- Throughput is for reading & writing 10MB files
5- I know the controller is PCIe 3.0 x8, but that does not answer my question at all about how the RAID in the system is affected by the limitation of the system only having 16 lanes IN TOTAL to divide between graphics and all the rest
6- AMD is not an option, as I wrote
Thank you
 
Sorry if that offended you, but let's call this what it is.

You couldn't find a 1TB SSD for $500? You could buy a couple of enterprise-grade 1TB SSDs for that.

What RAID level are you running? 1.4M hours MTBF for each drive, sure, but you've got 24. How many drives can you lose before the array is compromised? Also, I doubt these are new drives. Are you into probabilities?
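
To put a rough number on that (a crude Python illustration assuming independent drives with exponentially distributed failure times, which is a big simplification):

Code:
# With N drives, the expected time to the FIRST failure is roughly MTBF / N.
MTBF_HOURS = 1.4e6          # the quoted MTBF of one WD1000DHTZ
N_DRIVES = 24
HOURS_PER_YEAR = 24 * 365

print(f"One drive on paper : ~{MTBF_HOURS / HOURS_PER_YEAR:.0f} years between failures")
print(f"First of 24 drives : ~{MTBF_HOURS / N_DRIVES / HOURS_PER_YEAR:.1f} years, sooner if the drives are used")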

10MB files? Is that a typo?
 
You are looking at CPU lanes for the GPU; you need to look at the chipset and motherboard spec.
Yeah, but since the RAID card would go into the second x16 slot, wouldn't it just drop to x8/x8 PCIe 3.0 for the GPU and RAID card respectively? Nothing wrong with that; the GPU performance won't drop much at all (2-3%). All the other stuff is hanging off different lanes.
 

USAFRet (Moderator)
What RAID level are you running? 1.4M hours MTBF for each drive, sure, but you've got 24. How many drives can you lose before the array is compromised? Also, I doubt these are new drives. Are you into probabilities?
In a RAID 0, you can lose 0 drives.

A 01 or 10 would gain a bit of fault tolerance, but at the expense of complexity. And it would still lose to an NVMe drive for speed.
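
For the 24x 1TB drives being discussed, a rough comparison (a Python sketch; I'm assuming RAID 10 as 12 mirrored pairs and RAID 60 as two RAID 6 spans of 12 drives):

Code:
# Usable capacity and guaranteed fault tolerance for 24x 1TB drives.
DRIVES, SIZE_TB = 24, 1

layouts = {
    # name: (usable TB, failures survivable in the worst case)
    "RAID 0":  (DRIVES * SIZE_TB,       0),   # any single failure kills the array
    "RAID 10": (DRIVES // 2 * SIZE_TB,  1),   # 1 guaranteed; more only if you're lucky
    "RAID 60": ((DRIVES - 4) * SIZE_TB, 2),   # 2 per span guaranteed, 4 at best
}

for name, (usable_tb, worst_case) in layouts.items():
    print(f"{name:7}: {usable_tb:2d} TB usable, survives at least {worst_case} failure(s)")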


Like others, I'm curious about the use case for this array of 24 drives, and what specific RAID type it is supposed to end up as.
 
You are looking at CPU lanes for the GPU; you need to look at the chipset and motherboard spec.
I did, and sadly that's where the BIG bucks come in; with recent Intel hardware the only possibility is as follows:
1- an X299 chipset motherboard (ALL models/brands cost more than the Z390/Z370/... enthusiast types)
2- the CPUs for these are X types only; the "cheapest", the $600 i7-7820X, has only 28 lanes!! :-( and the first and cheapest 40+ lane option is the $1000 i9-7900X :-( :-(
I prefer X299 because the link between the CPU & chipset is DMI 3.0 (= about four PCIe 3.0 lanes); the older X99 Broadwell-E uses a DMI 2.0 link.
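
The nominal numbers behind that preference, as I understand them (a quick Python sketch; real-world throughput over the link is somewhat lower):

Code:
# Nominal bandwidth of the CPU-chipset link on the two platforms.
PCIE2_LANE_GBPS = 0.5        # ~0.5 GB/s per PCIe 2.0 lane
PCIE3_LANE_GBPS = 0.985      # ~0.985 GB/s per PCIe 3.0 lane

dmi = {
    "DMI 2.0 (X99)":         4 * PCIE2_LANE_GBPS,   # ~2.0 GB/s
    "DMI 3.0 (Z170 / X299)": 4 * PCIE3_LANE_GBPS,   # ~3.9 GB/s
}

for link, bw in dmi.items():
    print(f"{link}: ~{bw:.1f} GB/s, shared by everything hanging off the chipset")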
 
...You couldn't find a 1TB SSD for $500? You could buy a couple enterprise grade 1TB SSDs for that.
---What RAID level are you running? 1.4M hours MTBF for each drive, sure, but you've got 24. How many drives can you lose before the array is compromised? Also, I doubt these are new drives. Are you into probabilities?
---10MB files? Is that a typo?
1- I thought you had the wrong idea from the beginning: I'm using 24x 1TB drives = 24TB total... As explained, I cannot find 24x 1TB (+ controller!) for $500 US in SSD form! Most people still think SSD is the holy grail, but the fact is that now, in 2020, for anything bigger than about 1TB, HDD is still the cheapest option to go with (see the quick cost calc after this list).
2- The MTBF of these WD1000DHTZ drives is very good, almost the same as an SSD, and whether or not they run in RAID does NOT change that comparison with SSD. That is called statistics.
3- I run RAID 60, so 4 drives can be lost, and to spare you the calculation: that is a total of 20TB usable space, and the drives have 60% life left, which is MANY years.
4- 10MB is correct.
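
Here is the cost calc, using only the figures already quoted in this thread (a quick Python sketch with my $500 all-in price and the $120-per-1TB SSD price mentioned above):

Code:
# Price per usable TB, thread figures only.
hdd_setup_cost = 500        # 24x WD1000DHTZ + 72405 controller, as bought
hdd_usable_tb = 20          # RAID 60: 24 drives minus 4 drives' worth of parity
ssd_price_per_tb = 120      # the 1TB SATA SSD price quoted earlier

print(f"HDD array: ${hdd_setup_cost / hdd_usable_tb:.0f} per usable TB")
print(f"SSD      : ${ssd_price_per_tb} per TB, before any controller or redundancy")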

Thanks for your suggestions, but I did my math...
And sadly all of this still does not answer my question:
how is the RAID in the system affected by the limitation of the system only having 16 lanes IN TOTAL to divide between graphics and all the rest?
 
Yeah, but since the RAID card would go into the second x16 slot, wouldn't it just drop to x8/x8 PCIe3.0 for the GPU and RAID card respectively? Nothing wrong with that, the GPU performance won't drop much at all (2-3%). All the other stuff is hanging off different lanes.

The controller uses only 8 lanes.
The GPU on 8 lanes is indeed nothing to worry about.
BUT... all the other components ALSO use lanes: NVMe, networking, even the USB 3.0...!
There is a maximum of 16 lanes in total available on enthusiast motherboards,
so my question:
how is the RAID in the system affected by the limitation of the system only having 16 lanes IN TOTAL to divide between graphics and all the rest?
 
In a RAID 0, you can lose 0 drives.

A 01 or 10 would gain a bit of fault tolerance, but at the expense of complexity. And still lose to an NVMe drive for speed.


Like others, I'm curious about the use case for this array of 24 drives, and what specific RAID type it is supposed to end up as.

I run RAID 60, so 4 drives can be lost, and to spare you the calculation: that is a total of 20TB usable space.

Of course NVMe is faster.
24x 1TB Samsung NVMe = 24 x $190 US = $4,560 US, plus a controller.
My system with 24x 1TB WD1000DHTZ + controller = $500 US all in... thank you ;-)
PS: my RAID reads at about 20x the speed of a single drive, so that would be around 20 x 150MB/s = 3000 MB/s :)
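
Spelled out as a quick Python sketch (assuming ~150 MB/s sequential per drive and ideal scaling across the data drives, which is optimistic):

Code:
# Where the "20x read speed" comes from, and how it compares to the slot.
DATA_DRIVES = 24 - 4        # RAID 60: two RAID 6 spans of 12, 2 parity drives each
PER_DRIVE_MBPS = 150        # assumed sequential rate of one WD1000DHTZ
X8_SLOT_MBPS = 8 * 985      # ~7880 MB/s through a PCIe 3.0 x8 slot

array_read = DATA_DRIVES * PER_DRIVE_MBPS
print(f"Ideal array read : ~{array_read} MB/s")
print(f"PCIe 3.0 x8 slot : ~{X8_SLOT_MBPS} MB/s, so plenty of headroom left")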
 
Everyone:
I welcome all your suggestions about SSD vs. HDD speed, HDD reliability, PCIe 3.0 slots, the financial aspect, and so on, but these are sidetracks and I hope/think I have covered them myself and tried to explain that to everyone.

My main question remains:
how will the RAID in this total system be affected by the limitation of a Z170 motherboard only having 16 lanes in total to divide between graphics and all the rest?

PS: anyone wanting to know about "lanes", here's a good explanation:
How do PCIe lanes work: https://cotscomputers.com/blog/pcie-lanes/
 
One would prefer to connect fast expansion cards like a RAID controller directly to the CPU (about 1 GB/s per PCIe 3.0 lane) and not via the chipset (the DMI bus is a bottleneck, maxing out at 3.93 GB/s over its PCIe 3.0 x4-style link).
The 16 lanes directly connected to the CPU can be used for graphics, but also for other purposes like data exchange. Is that not correct?

I have a read speed of 3GB/s from my HDD RAID 60:
the 72405 RAID card in an x8 PCIe slot.
At a later date I will get 8GB/s:
when 1TB SSDs get cheaper I can switch over and have performance roughly tripled (HDD 150MB/s to 2.5-inch SSD 500MB/s, which gives me RAID read speeds of about 9GB/s, capped at 8GB/s by the PCIe 3.0 x8 slot limit).

But seeing the total resources of lanes already being used:
next to this 72405 RAID card there's also the graphics card and some other cards, like the 2x 6405E (1 lane each), a network card (1 lane), a special sound card (1 lane), ...
--> That looks like all available 16 lanes are in full use.
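
Tallying that up (a quick Python sketch of my count; I'm assuming the GPU drops to x8 when the second slot is used and that the NVMe M.2 SSD wants x4):

Code:
# What the cards in this box would ask for if everything needed CPU lanes.
cards = {
    "AMD R9 280X (GPU, at x8)": 8,
    "Adaptec 72405 (RAID)":     8,
    "2x Adaptec 6405E":         2,   # x1 each
    "network card":             1,
    "sound card":               1,
    "NVMe M.2 SSD":             4,   # assumed x4
}

print(f"Lanes wanted: {sum(cards.values())} vs the 16 PCIe lanes on the i5-7600K")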

sooo... will this not affect my RAID performance?
 