Question: How do the data transfer speeds all work and interact with each other on a NAS?

sceptism
Aug 6, 2019
Hi all,

I've recently been trying to learn more about NAS setups, and a few things confuse me:

1. The types of cables used, e.g. Cat 5, Cat 6. I know they support different speeds and bandwidth and are twisted differently, but that's about it.

2. Network ports on the NAS: some are 1Gb/s and some are 10Gb/s.

3. The drives used: SSD vs. HDD.

So my question is how these three things interact with each other and where the bottleneck is.

As an example, say I want to set up a NAS on my local network with 5-6 people accessing it. I set it up with a router hooked up to an 8-port switch, the NAS connected to the switch through a 1Gb/s port, and the 5 users' PCs also connected to the switch. What would my bottleneck be?

My current understanding is that my theoretical top speed is capped by the 1Gb/s port. But even if I change to a 10Gb/s port and switch, and add 10Gb/s network cards to my PCs, will I still be bottlenecked by my HDD speed of 1.6Gb/s (200MB/s)?

Also, does bandwidth play a part here, since each of my PCs is connected directly to the network switch via one Ethernet cable and the NAS is connected directly to the switch by an Ethernet cable too?

Sorry for the wall of text and if it's too confusing. I'm really quite confused myself with all this information. Just wanted to get my facts right. Thanks to any kind soul who can help me see the light in this dark and NASty situation ...

For the sake of visualisation:



Router <-> Network Switch <-> NAS + 5 PCs (each connected to the switch via an ethernet cable)
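
For reference, my rough unit conversion (assuming 8 bits per byte and ignoring protocol overhead):

1 Gb/s ÷ 8 ≈ 125 MB/s
10 Gb/s ÷ 8 ≈ 1,250 MB/s
HDD: 1.6 Gb/s ÷ 8 = 200 MB/s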
 
The fastest possible speed for any connection is limited by the slowest link in the chain.

Cable type is mostly irrelevant; it just carries the signal. Different categories allow different maximum link speeds, but the cable doesn't change what the hardware on either end can do.

Obviously, for best results, use a 10Gbps connection to the NAS and make sure all the links between the machines you want to move data between that quickly are also 10Gbps. That said, if the top speed of the drives is 1.6Gbps (likely for sequential data only), then 5 users will easily max that out, so connecting through a 1Gbps switch will not make a huge difference.
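
A rough back-of-the-envelope sketch of that slowest-link logic, in Python, using the numbers from this thread (illustrative only; real-world throughput will be somewhat lower because of protocol overhead):

```
# Effective speed is capped by the slowest shared element in the path.
DRIVE_MBPS = 200   # HDD sequential, roughly 1.6 Gb/s
USERS = 5

for nas_link_mbps in (125, 1250):                  # 1 Gb/s port vs. 10 Gb/s port (raw)
    aggregate = min(DRIVE_MBPS, nas_link_mbps)     # ceiling for everyone combined
    per_user = aggregate / USERS                   # rough fair share, 5 users busy
    print(f"NAS link {nas_link_mbps} MB/s -> aggregate {aggregate} MB/s, "
          f"~{per_user:.0f} MB/s per user")
# 1 Gb/s port:  125 MB/s total (port-limited),  ~25 MB/s each
# 10 Gb/s port: 200 MB/s total (drive-limited), ~40 MB/s each
```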

SSDs would be faster, but to really use up all that bandwidth you would need very fast drives, like NVMe SSDs.

The speed of the drives in the receiving computers would also need to be fast enough, so if those are also spinning drives, there is not much point.
 
Hey @Eximo, thanks for the reply. It clears things up for me. Just to make sure I understand the point about 5 users and the drive's 1.6Gbps speed: if it's sequential reading, is the disk just handling one request at a time even when 5 people ask it to read different files at different locations? And on a 10Gbps connection, when you say maxing out, do you mean maxing out at 1.6Gbps (finishing one user's request before moving on to the next) or at 5 x 1.6Gbps (parallel processing)?

Just making sure I'm understanding your answer correctly. Thanks again!
 
The drive doesn't get magically faster when you demand more from it. Its limit is whatever it is. Sequential refers to the data itself: on a hard drive, the easiest thing to access and deliver near the rated speed of the drive is contiguous data. If you have 5 users all making requests at the same time for different information, it will be much slower, because the read/write head has to move around and wait for the platter to rotate the data under the head.

So having a 10Gbps connection is extreme overkill if the drive's maximum potential is 1.6Gbps. The connection only gives you the capability to run at that speed; it doesn't mean you will achieve it.

So unless your NAS contains much faster drives, it will be about as fast as having the hard drive connected directly to the machine, assuming the receiving system/drive can accept data at that rate.
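
To put numbers on that last point, here is a minimal sketch (Python, with assumed illustrative speeds) of why the receiving drive matters as much as the link:

```
# End-to-end copy speed is roughly the minimum of every stage in the chain.
def effective_copy_speed(nas_drive, network, client_drive):
    return min(nas_drive, network, client_drive)

# 10 GbE link, spinning drives on both ends: the network is not the limit.
print(effective_copy_speed(nas_drive=200, network=1250, client_drive=180))  # 180 MB/s
# Same drives over gigabit (~117 MB/s after overhead): the link is now the cap.
print(effective_copy_speed(nas_drive=200, network=117, client_drive=180))   # 117 MB/s
```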
 
With spinning drives, striping data across them increases your sequential read/write speed (big files) roughly linearly; RAID 0, 5, 6, 10, 50 and 60 are all striped. SSD random read/write performance (small files) is the big reason they are preferred for certain NAS workloads. Even if you stripe 10 SATA III SSDs, you probably still can't read at 1Gb/s for random 4K at QD1, and you have to have a very busy server to ever go much above QD1. Random read is the most important benchmark and also the weakest in all drives; random writes get a speedup from RAM caching.
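
A small sketch of that scaling behaviour (Python, with made-up per-drive numbers, not benchmarks): striping multiplies large sequential transfers, but a QD1 random 4K read still waits on a single drive's seek latency, so it barely moves:

```
PER_DRIVE_SEQ_MBPS = 180     # assumed: one HDD, large sequential reads
PER_DRIVE_RAND4K_MBPS = 0.8  # assumed: one HDD, random 4K reads at QD1

for n in (1, 2, 4, 8):
    seq = n * PER_DRIVE_SEQ_MBPS    # stripes add up for big files
    rand = PER_DRIVE_RAND4K_MBPS    # a single QD1 request still hits one drive
    print(f"{n} drives striped: sequential ~{seq} MB/s, random 4K QD1 ~{rand} MB/s")
```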

Each client will be both network-limited and disk-limited. Some switch models have both 1Gb/s and 10Gb/s ports, so multiple clients can each read/write at 1Gb/s to the NAS if the NAS is on a 10Gb/s port.
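
As a sketch of that mixed-port setup (Python, assumed numbers): each client is capped by its own 1Gb/s port, while the NAS's 10Gb/s uplink and the array set the shared ceiling for everyone combined:

```
CLIENT_LINK = 117   # MB/s, typical real-world gigabit per client (assumed)
NAS_LINK = 1100     # MB/s, typical real-world 10 GbE (assumed)
ARRAY_SEQ = 600     # MB/s, assumed striped-HDD sequential throughput
CLIENTS = 5

shared_ceiling = min(NAS_LINK, ARRAY_SEQ)                 # what everyone shares
per_client = min(CLIENT_LINK, shared_ceiling / CLIENTS)
print(f"per client: ~{per_client:.0f} MB/s")              # each can run near full gigabit
```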

I really like ZFS on Proxmox. You can add a 120GB SSD cache drive to a RAID of spinning disks; that gives you good random reads on popular files, lots of storage, and data healing / fault tolerance. How much speed matters comes down to how the lag actually impacts you: if you're just doing backups, you probably won't notice; if you're trying to search an email archive, you'll notice big time.
 