computerguy72 :
Derek... A lot of that speed never actually happens. So many factors that affect it tremendously. The QD alone can trim that speed like 97%, among many other things... Massive amounts of running room to achieve real-world performance. A little disturbing how so many think the raw bandwidth is anywhere near what you really get on average.
I understand the difference between IOPS and theoretical maximum bandwidth/throughput.
A 9 terabyte backup should be enough to qualify as "massive amounts of running room".
I'm not really sure what you mean, but 9 terabytes is a lot lol.
I could see it if I was transferring a 40 megabyte file and comparing gigabit to 10 gigabit; you wouldn't notice the difference because the file is smaller than what even the slower connection can move in a second.
It would appear instant on both.
My nas4free server is currently more than capable of saturating a gigabit connection (125 megabytes a second) for both reads and writes.
With the right hardware it is fairly easy to saturate a gigabit connection. In fact, if I were reading data that was sitting in my nas4free server's ARC cache and writing it to another computer, it could saturate a 10 gigabit connection, assuming the destination storage was up to par, meaning at least a gigabyte per second of writes, which a Samsung 960 Evo and Pro are both easily capable of.
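Just to put rough numbers on why the link speed matters for a backup this size, here's a quick back-of-the-envelope sketch (Python). It only uses nominal line rates and ignores protocol and filesystem overhead, so real times would be somewhat longer:

```python
# Rough back-of-the-envelope: how long a 9 TB backup takes at different
# nominal link speeds (raw line rate only, ignoring protocol overhead).

def transfer_hours(data_bytes: float, link_bits_per_sec: float) -> float:
    """Return transfer time in hours at a given raw link speed."""
    return data_bytes / (link_bits_per_sec / 8) / 3600

backup_bytes = 9e12  # 9 terabytes (decimal)
for name, speed in [("1 GbE", 1e9), ("10 GbE", 10e9)]:
    print(f"{name}: ~{transfer_hours(backup_bytes, speed):.1f} hours")

# 1 GbE:  ~20.0 hours
# 10 GbE: ~2.0 hours
```

So for a 9 TB transfer the difference between gigabit and 10 gigabit is hours, not something you'd miss.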
Increased queue depth increases speeds, not reduces them, and this only really applies to random workloads.
http://www.tomshardware.com/reviews/samsung-960-pro-ssd-review,4774-2.html
In any case, since my backups are sequential writes, queue depth doesn't really matter.
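To illustrate why queue depth mostly matters for small random I/O rather than big sequential writes, here's a tiny sketch that just converts IOPS at a given block size into MB/s. The IOPS figures are made-up placeholders for illustration, not measurements of any particular drive:

```python
# Illustrative only: queue depth matters for small random I/O, where
# throughput = IOPS x block size; the IOPS values below are hypothetical.

def throughput_mb_s(iops: float, block_size_bytes: int) -> float:
    """Convert an IOPS figure at a given block size to MB/s."""
    return iops * block_size_bytes / 1e6

# Hypothetical 4K random reads at low vs high queue depth
print(throughput_mb_s(10_000, 4096))   # low QD:  ~41 MB/s
print(throughput_mb_s(300_000, 4096))  # high QD: ~1229 MB/s

# A large sequential write is a stream of big in-flight requests, so it is
# bandwidth-limited rather than queue-depth-limited.
```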