[SOLVED] How are I/O's shared?

swineadam

Commendable
Jun 21, 2018
52
3
1,565
hello. are these I/O's shared system-wide, or on a per-controller basis?

like...

if i had two drives hooked to the onboard SATA and two drives hooked to a PCIe SATA controller... could i perform large file transfers on both controllers and suffer little to no transfer rate degradation? (that is, of course, assuming performance isn't being bottlenecked by other factors)


i was just thinking a great drive configuration would be something like...

NVMe would host the OS
PCIe controller would host 4 SSD's total. 2 each in a RAID configuration.
onboard controller would host 4 HDD's, SSHD's or SSD's. 2 each running in a RAID.

it has been a long time since messing with RAID, and i don't even know if you could run that many drives in separate RAID configurations. so, 8 drives total. but basically only 4, because each set of 2 would be running in a RAID configuration.

hell... is using RAID for performance even still a thing these days??? LOL
 

swineadam

Commendable
Jun 21, 2018
52
3
1,565
With SSD's, no, it isn't.

SSD + RAID does not stack performance like it did with HDD's.


interesting. that is really neat. :) i have some SSD's installed, but haven't really considered using RAID because i want the capacity more. good to know i am not at any loss there ^_^

can you comment on the I/O's?

like... if a person had a single drive in the system and tried to copy over two large files from a different source at the same time, the transfer rate would go down a lot. as far as i know, this is due to 2 things... 1 being actual drive access and 2 being that only so many I/O's can be consumed at one time.

so... even if you had FOUR drives in the system handling 2 separate large file transfers, the I/O limitation would still slow down the transfer rates. (i do actually have 4 drives installed, all on the onboard controller)
 

popatim

Titan
Moderator
You can only hook 1 drive to a SATA cable unless your controller supports port multiplication. If it does, then you can have multiple drives on 1 SATA cable, but the system can access only 1 drive at a time and switches between them.

In command-based switching, you have to wait until the current drive's command queue has finished processing before switching to the next drive. This is rather slow.
With FIS-based switching, the drive signals when it's ready for more I/O and the PM controller sends/grabs data when it can. This is much more efficient.

In both cases you cannot exceed the bandwidth of the single SATA port. For SATA 3 that equates to roughly 525MB/s (6Gbit), SATA 2 = 3Gbit, & SATA 1 = 1.5Gbit. Hence, RAID over a port multiplier is not usually a good idea.

In this chart, you can see which controllers support PM
https://ata.wiki.kernel.org/index.php/SATA_hardware_features
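For the curious, those port-speed numbers fall out of simple math: SATA's line rate is 8b/10b encoded, so only 8 of every 10 bits on the wire carry data. A back-of-the-envelope Python sketch of the theoretical ceilings:

```python
# Back-of-the-envelope SATA throughput: the line rate is 8b/10b
# encoded, so only 8 of every 10 bits on the wire are data.
def sata_usable_mb_s(line_rate_gbit):
    data_bits_per_s = line_rate_gbit * 1e9 * 8 / 10  # strip 8b/10b overhead
    return data_bits_per_s / 8 / 1e6                 # bits -> megabytes

for gen, rate in [("SATA 1", 1.5), ("SATA 2", 3.0), ("SATA 3", 6.0)]:
    print(f"{gen}: ~{sata_usable_mb_s(rate):.0f} MB/s theoretical max")
```

This gives SATA 3 a ~600MB/s ceiling; the ~525MB/s figure quoted above is what's left after further protocol overhead.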
 

swineadam

Commendable
Jun 21, 2018
52
3
1,565
You can only hook 1 drive to a SATA cable unless your controller supports port multiplication. If it does, then you can have multiple drives on 1 SATA cable, but the system can access only 1 drive at a time and switches between them.

In command-based switching, you have to wait until the current drive's command queue has finished processing before switching to the next drive. This is rather slow.
With FIS-based switching, the drive signals when it's ready for more I/O and the PM controller sends/grabs data when it can. This is much more efficient.

In both cases you cannot exceed the bandwidth of the single SATA port. For SATA 3 that equates to roughly 525MB/s (6Gbit), SATA 2 = 3Gbit, & SATA 1 = 1.5Gbit. Hence, RAID over a port multiplier is not usually a good idea.

In this chart, you can see which controllers support PM
https://ata.wiki.kernel.org/index.php/SATA_hardware_features


hello. thank you for your response, but i don't really understand it. and i am not interested in hooking up 2 drives to the same cable.

it DOES seem like your response should offer some insight into my inquiry, but i don't understand it. let me try to reiterate...


if i hook up 2 drives to the onboard SATA of a motherboard and 2 drives to a PCIe SATA controller... could i perform a large file transfer from one drive to the other on the PCIe card, and also do the same thing with the 2 drives on the onboard, and suffer little to no transfer rate degradation? ( that is, assuming no other factors are creating a bottleneck )

if i attempted the same thing with all 4 drives on one controller, transfer rates suffer for it. if i am correct, it has something to do with how many I/O's are available.
 

USAFRet

Titan
Moderator
With 4x SATA SSDs, there is no impact.

I just tested Copying from Drive A to B, and Drive Y to Z.
4x SATA III SSD, all connected to the native motherboard ports.

prox 40GB each transfer.
Each transfer happened at whatever the drives could deliver and take, no slowdown upon the second copy starting, or speedup as one finished.
 
Solution

swineadam

Commendable
Jun 21, 2018
52
3
1,565
With 4x SATA SSDs, there is no impact.

I just tested Copying from Drive A to B, and Drive Y to Z.
4x SATA III SSD, all connected to the native motherboard ports.

prox 40GB each transfer.
Each transfer happened at whatever the drives could deliver and take, no slowdown upon the second copy starting, or speedup as one finished.

HA! you mention native ports, and i just got done messaging another user on the forums who also mentioned those. i have an ASRock Z370 Pro4 motherboard, and have already checked through the manual, but can't seem to find info on this native SATA port thing.

at any rate, i too have 4 SSD's installed. however, doing such an action slows down both transfers. so, this has nothing to do with I/O's? maybe i am not using these native ports? maybe there is a bottleneck?


some system specs...
Windows 10 Home x64 (OEM)
i7-8700k CPU
GTX 1070 Ti GPU
2x8 GB RAM @ 2400MHz
650W PSU - model unknown

all my drives are hooked up to the onboard ports.
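One repeatable way to run this kind of side-by-side test is to time two copies running at once. A minimal Python sketch, using small throwaway temp files as stand-ins; to test for real, point the source and destination paths at files on the actual drives:

```python
import os, shutil, tempfile, threading, time

def timed_copy(src, dst, label):
    # Copy one file and report how long it took.
    start = time.monotonic()
    shutil.copyfile(src, dst)
    print(f"{label}: {time.monotonic() - start:.2f}s")

# Throwaway 1 MB stand-ins; replace with real paths on drives A/B and Y/Z.
tmp = tempfile.mkdtemp()
for name in ("a.bin", "y.bin"):
    with open(os.path.join(tmp, name), "wb") as f:
        f.write(os.urandom(1024 * 1024))

pairs = [(os.path.join(tmp, "a.bin"), os.path.join(tmp, "b.bin"), "A->B"),
         (os.path.join(tmp, "y.bin"), os.path.join(tmp, "z.bin"), "Y->Z")]
threads = [threading.Thread(target=timed_copy, args=p) for p in pairs]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

If both timings roughly match what a single copy takes on its own, the two transfers aren't contending with each other.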
 

swineadam

Commendable
Jun 21, 2018
52
3
1,565
yeah, i just tried to copy a large game file from one drive to the other. while the transfer rate degradation was less severe than what i experienced trying to copy 2 large files to the same drive, it was still noticeable.

i always assumed this was normal and had something to do with I/O limitations.


EDIT: okay! so after looking more carefully through the mobo manual... i did find this...

These six SATA3 connectors support SATA data cables for internal storage devices with up to 6.0 Gb/s data transfer rate.
* If M2_1 is occupied by a SATA-type M.2 device, SATA_5 will be disabled.
* If M2_2 is occupied by a SATA-type M.2 device, SATA_0 will be disabled.

this leads me to believe that ports 1, 2, 3 & 4 are running native. OR, i have only two native ports being 0 & 5. hmm... i might have to contact ASRock to be certain about this. there doesn't seem to be any easy way to know which ports are native. ><
 
Last edited:
The only reason to RAID with SSDs is if you have some crazy performance demand for video encoding, recording, etc... or for RAID disk protection.

As for the file transfer aspect: there are many things that can factor into this that aren't I/O related, such as the OS being used, where the file is being transferred to and from, the type of file, the size of the file, etc...

You could run the exact same system specs on a machine with a different OS such as Linux and file transfer speeds could be way different also...

Using file transfers is not really a solid method for determining I/O limitations.
 

USAFRet

Titan
Moderator
yeah, i just tried to copy a large game file from one drive to the other. while the transfer rate degradation was less severe than what i experienced trying to copy 2 large files to the same drive, it was still noticeable.

i always assumed this was normal and had something to do with I/O limitations.


EDIT: okay! so after looking more carefully through the mobo manual... i did find this...

These six SATA3 connectors support SATA data cables for internal storage devices with up to 6.0 Gb/s data transfer rate.
* If M2_1 is occupied by a SATA-type M.2 device, SATA_5 will be disabled.
* If M2_2 is occupied by a SATA-type M.2 device, SATA_0 will be disabled.

this leads me to believe that ports 1, 2, 3 & 4 are running native. OR, i have only two native ports being 0 & 5. hmm... i might have to contact ASRock to be certain about this. there doesn't seem to be any easy way to know which ports are native. ><
No, it is not the I/O situation.
Rather, it is almost certainly the composition of the files in question, and what else the system may be doing at that time.

And unless you also have an M.2 drive in one of those ports, that has no effect.
 

swineadam

Commendable
Jun 21, 2018
52
3
1,565
No, it is not the I/O situation.
Rather, it is almost certainly the composition of the files in question, and what else the system may be doing at that time.

And unless you also have an M.2 drive in one of those ports, that has no effect.


are you using a program to do your test? you said, "prox 40GB each transfer." after searching "prox", i can only conclude you were using it as an expression. but i am incompetent at searching >_>

about the M.2 drive thing... yeah, i understand, and just thought maybe what the manual said about that might be some indication as to which ports are native. as far as that goes, looks like i will have to contact ASRock about it.
 

swineadam

Commendable
Jun 21, 2018
52
3
1,565
"prox" = 'Approximately'...;)

lol. well, that makes more sense than me thinking it might have been an expression for speed. LOL DERP

alrighty! well, it seems using compressed/packed resources wasn't the best way to do that. i figured as long as it was just one big file, it would give me an accurate enough result. this is the first time my PC has had so many drives, and it was something that quickly became an interest.

after using ATTO Disk Benchmark, it didn't seem to have any issue benchmarking multiple drives and providing results consistent with individual drive benchmark results.

although... that isn't the same as copying files between drives. and i don't have any HDD's to see how the benchmark results compare with what is expected.
 
Last edited:

popatim

Titan
Moderator
Overall speed can depend a lot on the actual drives being used. Read speed is usually not an issue, but writes can be.
If the drives can't write at full speed over the entire drive, then they use a cache to hold data. You write to the cache at high speed until it's full. Then you drop to the drive's native speed.

So given this, depending on which drives you have, even a compressed file will slow down if it's larger than the drive's cache.
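That cache behavior is easy to model. A toy Python sketch (the drive numbers here — 20GB of cache at 520MB/s, dropping to 150MB/s after — are made up purely for illustration):

```python
def write_time_s(file_gb, cache_gb, cache_mb_s, native_mb_s):
    # Writes land in the fast cache until it fills; the remainder
    # goes at the drive's slower native speed.
    cached_gb = min(file_gb, cache_gb)
    rest_gb = file_gb - cached_gb
    return cached_gb * 1024 / cache_mb_s + rest_gb * 1024 / native_mb_s

# Hypothetical drive: 20 GB cache at 520 MB/s, 150 MB/s after that.
print(write_time_s(10, 20, 520, 150))  # fits in cache: full speed throughout
print(write_time_s(40, 20, 520, 150))  # spills past the cache: much longer
```

So a single big compressed file still slows down mid-transfer once it outruns the cache, exactly as described above.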
 

swineadam

Commendable
Jun 21, 2018
52
3
1,565
Overall speed can depend a lot on the actual drives being used. Read speed is usually not an issue, but writes can be.
If the drives can't write at full speed over the entire drive, then they use a cache to hold data. You write to the cache at high speed until it's full. Then you drop to the drive's native speed.

So given this, depending on which drives you have, even a compressed file will slow down if it's larger than the drive's cache.

interesting. alright! well, i received a response from the user i messaged about the Native ports thing... from their explanation, i understand it should be pretty obvious if ports are native or not. as far as i can tell, all my ports are native.

if anyone is interested in the response they provided, i have sent out an invite. otherwise, just ignore it.

at any rate, i am going to mark post #6 as the best answer for this one. thank you very much for your time here everyone! greatly appreciated! i learned something too! <3