[SOLVED] iSCSI drive and extremely poor network transfer speeds?

jeffhennen

Distinguished
Jul 14, 2014
41
0
18,530
0
Hello All,


I'm currently having issues with a Synology NAS and my connection to the iSCSI drive I have set up on it.


I have my workstation and then the Synology NAS. Everything was working perfectly with the iSCSI drive until today. I am now running into problems when accessing the information on the drive directly from my workstation, where the iSCSI drive is connected. Here are some observations:

  • Just a few days ago when using this drive, I would get network speeds of 1 Gigabit/s (on a 1Gig/s network); this would equate to about 800-900 MGb/s.
  • When trying to access the information over the network through the iSCSI drive, I now max out at about 25-30 MGb/s in the system network stats.
  • Transferring data from the iSCSI drive (which is a mounted disk on the workstation) to a separate partition on the Synology NAS runs at the 1Gig/s speeds.
  • Accessing other, non-iSCSI network drives from the same Synology NAS that hosts the iSCSI drive also runs at the 1Gig/s speeds.
  • I removed the targets in the iSCSI initiator and in iSCSI Manager and reconnected them. This did not change anything and still resulted in the slow 25-30 MGb/s speeds.
I am not sure what else to do at this point, other than completely deleting the drive on the Synology NAS and starting from scratch to see if it works correctly.


Would someone be able to chime in to hopefully diagnose this issue?
 
Dec 10, 2021
8
1
10
2
As I stated, the iSCSI drive is over the network, hosted on the Synology NAS, which has a separate partition for general storage plus the iSCSI drive specifically for my Steam games.

Like I stated before, when accessing the iSCSI drive's data I am only able to transfer at 25-30 MGb/s, while all other data on the Synology NAS that I access runs at 800+ MGb/s.

Also, using the iSCSI drive, I am able to send data from the iSCSI drive to the general partition on the Synology NAS at full speed, even though (to my knowledge) the way iSCSI is implemented requires that data be networked to my workstation and then back to the general partition on the NAS.

Because of this, unless you can really show me why it can be narrowed down to a cable, I don't think the cable is the problem: the general Synology NAS partitions that are not the iSCSI drive can be accessed at full speed from the same physical machine. That eliminates the cable as the problem in this situation and, to my knowledge, narrows it down to some setting/service within the workstation that changed somehow.

Also, iSCSI to my knowledge just uses general Ethernet cabling over your network. There is no standard, as far as I know, for a physical cable specific to iSCSI.
Today might be your lucky day.
I have been doing enterprise storage support, design, management, etc., and have multiple Synology boxes: DS1815, DS1817, DS1821 (I just got the '21). The last two have two 10Gb ports via an add-on card.

I don't know what unit of measurement "MGb/s" is. Let's just say 10 Gb/s would be 1 GB/s: 10 gigabits is roughly 1 gigabyte, so 1 Gb/s is about 100 MB/s, or 0.1 GB/s.
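To make the conversion concrete, here's a tiny sketch of the rule of thumb I'm using (the 10 bits-per-byte ratio is a deliberate round number that folds in protocol overhead; the raw figure is 8 bits per byte):

```python
# Rule-of-thumb conversion: ~10 bits on the wire per byte of payload
# (8 data bits plus framing/protocol overhead), so 10 Gb/s ~= 1 GB/s.

def gbits_to_mbytes(gbits_per_s, bits_per_byte=10):
    """Convert a link speed in Gb/s to approximate payload MB/s."""
    return gbits_per_s * 1000 / bits_per_byte

print(gbits_to_mbytes(10))  # 1000.0 -> ~1 GB/s
print(gbits_to_mbytes(1))   # 100.0  -> ~100 MB/s
```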

So now that we have all that out of the way: what is your iSCSI client?

I have an ESX cluster using some of my stuff. The Synology iSCSI implementation was super crap a few years ago. I haven't tried it again.

So here are a few gotchas.
Multipathing. You need it. I would set it up on separate physical ports on the array.
On mine I have four 1G ports (I am going to exclude the 10G ports in this explanation).

I have NIC1 and NIC2 in a bond. This is for my non-storage network.
I have NIC3 and NIC4 on separate networks from the bond.
So the first is, say, 10.0.0.100 (bonded NIC1 and NIC2), with a 255.255.0.0 mask.
The 3rd and 4th are 10.10.1.100 and 10.11.1.100 respectively.
You want to make sure that each physical NIC can only talk to its own unique port. You don't want to allow every NIC to talk to every port.
So on my server (the iSCSI initiator) I have two storage NICs. One is, say, 10.10.0.1 and the other is 10.11.0.2.
If I were using four storage ports, the first two would be 10.10.1.100 and 10.11.1.100, and the second two would be 10.10.2.100 and 10.11.2.100.
NIC 1 at 10.10.0.1 can only talk to 10.10.1.100 and 10.10.2.100.
NIC 2 at 10.11.0.2 can only talk to 10.11.1.100 and 10.11.2.100.
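If you want to sanity-check a layout like this on paper, a few lines of Python (using the illustrative addresses above, all with /16 masks) show which array ports each initiator NIC can reach:

```python
import ipaddress

# Illustrative addresses from the layout above, all on 255.255.0.0 (/16) masks.
initiator_nics = ["10.10.0.1", "10.11.0.2"]
array_ports = ["10.10.1.100", "10.10.2.100", "10.11.1.100", "10.11.2.100"]

for nic in initiator_nics:
    subnet = ipaddress.ip_network(f"{nic}/16", strict=False)
    reachable = [p for p in array_ports if ipaddress.ip_address(p) in subnet]
    print(f"{nic} -> {reachable}")
```

Each NIC ends up seeing exactly two of the four storage ports, which is the isolation you want.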
You can use VLANs as well, but depending on how you have your NICs teamed and how the I/O is sent out, it can cause problems (VLAN tagging, etc.). I don't want to bother with all that, so I use subnets. (VMware has quite a few guidelines depending on the storage and network, whether it's active/active, whether the storage-side NICs only have one port up, whether you are using different trunking setups, etc. Bleh.)

So now you have two physical NICs in your computer and four total ports for your storage.
Each NIC sees 2x the aggregate bandwidth, and if something bad happens on a NIC, it can't cause issues with all of your storage ports. If one of your storage ports or host NICs has issues, or starts throwing out all kinds of garbage for one reason or another without a hard failure, it won't be able to impact all of the ports, just the ones it can talk to.

So you have all of that set up properly. Now you get to think about multipathing: how do you get it to send data down each path?
In ESX you have:

Round robin, which pushes data down one path and then switches to the next, and so on. The PSP policy in VMware determines how many IOPS to send down each path before switching to another.

MRU, which is Most Recently Used. It will only use one path until something fails, then it switches to another path and keeps going, kind of like failover, except paths stay where they are even if the failed path comes back up.

Fixed: hey, use this path.

With each of these you could get full bandwidth, or not.
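As a toy illustration of the difference, here's round robin modeled in a few lines of Python. The iops_limit parameter mimics VMware's round-robin PSP option that switches paths after N I/Os (1000 is the usual default); everything else is just a sketch, not how any real initiator is implemented:

```python
from itertools import cycle

def round_robin(paths, n_ios, iops_limit=1000):
    """Send I/Os down one path, switching to the next every `iops_limit` I/Os.
    MRU, by contrast, would stay on a single path until it failed."""
    order = cycle(paths)
    current = next(order)
    per_path = {p: 0 for p in paths}
    for i in range(n_ios):
        if i and i % iops_limit == 0:
            current = next(order)   # rotate to the next path
        per_path[current] += 1
    return per_path

print(round_robin(["path_A", "path_B"], 4000))  # both paths share the load evenly
```

With MRU or Fixed, all 4000 I/Os would land on a single path, which is why the policy choice decides whether you ever see the aggregate bandwidth.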

If you are sharing production traffic with iSCSI traffic on the same physical interface, then more fun things can happen even if you have all of that set up properly.

So let's say you are copying a 20GB file from one directory to another. It's going to be a file within the iSCSI LUN, so your host will see the LUN, but you are moving a file inside it.
Host 1 has iSCSI LUN 1 mounted and is the one you are sitting at.
Host 2 has iSCSI LUN 2 mounted.
Host 1 wants to copy a file, via an SMB share called share1 that is mounted on Host 2, to another directory.
Host 1 is moving the file
/share1/host2/dir1/bob.txt
to /share1/host2/dir2/bob.txt, or to Host 1's iSCSI LUN 1.
So you move the file via SMB and you are using your 1Gb, or 100MB/s.
Now you fill up the write cache on Host 2, and Host 2 starts writing to the iSCSI LUN.

That 1Gb, or 100MB/s, now drops to 50MB/s because you are moving network/SMB traffic over the same network as your iSCSI traffic.
If you are moving to LUN 1 on Host 1 from the SMB share that is located on Host 2's LUN 2, you are multiplying that traffic.
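The arithmetic there is just fair-sharing the wire; a quick sketch (numbers are illustrative round figures, and real contention is messier than an even split):

```python
# One 1 Gb/s link (~100 MB/s usable) carrying every flow at once:
# the SMB read, the iSCSI write-back, and so on each get a fair share.
LINK_MB_PER_S = 100

def per_flow_rate(concurrent_flows):
    """Naive estimate: line rate split evenly across concurrent flows."""
    return LINK_MB_PER_S / concurrent_flows

print(per_flow_rate(1))  # 100.0 -> the SMB copy alone
print(per_flow_rate(2))  # 50.0  -> SMB copy + iSCSI write-back on the same wire
print(per_flow_rate(3))  # add the destination LUN's traffic and it gets worse
```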

That's the easy stuff.
Now, with all that said: the performance of Synology non-iSCSI is friggin awesome. It goes at full speed. Their iSCSI stuff doesn't.
And you have to know how you get space back. If you are using a thin provisioned LUN, you can set it as 1TB but it only allocates the storage as the host uses it.
(I will talk about thin provisioning if you ask. I am going to get back to the other stuff.)

An NFS share will go full speed too, using ESX. NFS 4.1 supports multipathing as well.

Synology iSCSI will also sometimes lock an iSCSI LUN and it will become inaccessible for no reason. Then you get to SSH into the Synology box, find a lock file, and delete it before the LUN will be presented to the host properly.

If you are not using thin provisioned LUNs, then 100% of the storage allocated to the iSCSI LUN can only be used by the host it is mounted to. If you have a 1TB LUN with 1GB of data on it, you can't use the remaining 999GB for any other host. If it's thin provisioned and you write 1GB, you will only use 1GB. When you write 900GB it will grow to 900GB. If you then delete 800GB, the array will still see 900GB used, because the host won't actually "delete" the data; it just deallocates it in its file table and says, hey, I can write over that again.

If you want to get that 800GB back, you have to run a space-recovery process to tell the array that those addresses aren't being used anymore. In ESX it's called unmap; in Windows, sdelete. (Pretty sure that's correct for Windows; I know I am correct for ESX.)
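A toy model of that accounting makes the asymmetry obvious (the ThinLun class here is purely illustrative, not Synology's implementation): writes grow the array's view, deletes don't shrink it until a reclaim pass runs.

```python
class ThinLun:
    """Toy model of thin-LUN space accounting (illustrative only)."""

    def __init__(self, size_gb):
        self.size_gb = size_gb
        self.array_used = 0  # what the array believes is allocated
        self.host_used = 0   # what the host filesystem actually holds

    def write(self, gb):
        self.host_used += gb
        # The array's allocation high-water mark only ever grows.
        self.array_used = max(self.array_used, self.host_used)

    def delete(self, gb):
        self.host_used -= gb  # array_used is NOT reduced by a host delete

    def reclaim(self):
        """unmap (ESX) / sdelete-style pass: report freed blocks to the array."""
        self.array_used = self.host_used

lun = ThinLun(1000)
lun.write(900)
lun.delete(800)
print(lun.array_used, lun.host_used)  # 900 100: array still sees 900GB used
lun.reclaim()
print(lun.array_used)                 # 100: the space is back
```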

Now, one of the super cool things with iSCSI: in ESX it can use VAAI XCOPY, and in Windows, ODX (Offloaded Data Transfer). If you copy a file from one location to another on the same LUN, instead of moving the data through your network it will essentially calculate a token that describes the data and where you want to move it, and the array will move it internally with zero network utilization. So instead of moving the data over your network into your host and then back out from that host to the new location, the array just moves it inside itself. This only applies to certain operations, and your host has to support it.
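In rough numbers the win looks like this (sizes illustrative; the token is tiny compared to the data, so I round it to zero):

```python
# Bytes crossing the network for a same-LUN copy, measured in GB.

def host_mediated_copy(file_gb):
    """Ordinary copy: data travels array -> host, then host -> array."""
    return 2 * file_gb

def offloaded_copy(file_gb):
    """XCOPY/ODX-style copy: only a small descriptor token crosses the
    wire; the array moves the data internally."""
    return 0  # effectively zero, ignoring the tiny token

print(host_mediated_copy(20))  # 40 GB over the network for a 20 GB file
print(offloaded_copy(20))      # ~0
```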

Now you have to deal with other problems/optimizations.
Are you using jumbo frames? Do you have flow control turned on? Are hosts using delayed ACK? Etc., etc.

As you can see, I can talk about this stuff for a very long time. Providing additional information (use case details, etc.) can help me tell you what is important to you.

Hit me up sometime; I'll respond if I see it.
 

Ralston18

Titan
Moderator
Do you have other known working (at speed) iSCSI cables to swap in?

Not completely sure about all of the physical network connections involved; however, if there is one common cable with respect to your observations, that is the cable I would start with....
 

jeffhennen
Do you have other known working (at speed) iSCSI cables to swap in?

Not completely sure about all of the physical network connections involved; however, if there is one common cable with respect to your observations, that is the cable I would start with....

As I stated, the iSCSI drive is over the network, hosted on the Synology NAS, which has a separate partition for general storage plus the iSCSI drive specifically for my Steam games.

Like I stated before, when accessing the iSCSI drive's data I am only able to transfer at 25-30 MGb/s, while all other data on the Synology NAS that I access runs at 800+ MGb/s.

Also, using the iSCSI drive, I am able to send data from the iSCSI drive to the general partition on the Synology NAS at full speed, even though (to my knowledge) the way iSCSI is implemented requires that data be networked to my workstation and then back to the general partition on the NAS.

Because of this, unless you can really show me why it can be narrowed down to a cable, I don't think the cable is the problem: the general Synology NAS partitions that are not the iSCSI drive can be accessed at full speed from the same physical machine. That eliminates the cable as the problem in this situation and, to my knowledge, narrows it down to some setting/service within the workstation that changed somehow.

Also, iSCSI to my knowledge just uses general Ethernet cabling over your network. There is no standard, as far as I know, for a physical cable specific to iSCSI.
 

Ralston18
Setting aside cable issues....

Where NAS and storage are involved, it is probably worthwhile to move your post from Networking to Storage.

Very likely those who follow that category will be able to offer additional suggestions and ideas.
 

jeffhennen
Setting aside cable issues....

Where NAS and storage are involved, it is probably worthwhile to move your post from Networking to Storage.

Very likely those who follow that category will be able to offer additional suggestions and ideas.

This is exactly where I had it, and then someone moved it...
 

Ralston18
As near as I can determine, the thread started in Apps and Software but was moved from there to Networking (certainly more applicable than Apps).

Then I moved the thread to Storage. I did not note the earlier move; apologies.

That said, I suggest leaving the post here in Storage for the time being.

Mainly because a Synology NAS is involved.

The thread can always be moved again or back as circumstances and responses warrant.

If there are no additional responses I will send a PM for assistance.
 

jeffhennen
As near as I can determine, the thread started in Apps and Software but was moved from there to Networking (certainly more applicable than Apps).

Then I moved the thread to Storage. I did not note the earlier move; apologies.

That said, I suggest leaving the post here in Storage for the time being.

Mainly because a Synology NAS is involved.

The thread can always be moved again or back as circumstances and responses warrant.

If there are no additional responses I will send a PM for assistance.

Apologies, I guess I didn't realize that I put it there.
 
 

jeffhennen
Today might be your lucky day.
…
Hit me up sometime; I'll respond if I see it.



Wow, I can tell you know your stuff, but this is extremely verbose; it is a lot of information to try to keep track of. I am not really a proficient user of this. I just watched some YouTube channels, found iSCSI, and it was something I had been thinking about and wanting to do. What would you recommend taking a look at and verifying first? I only know small things in regards to these and am basically a noob in regards to IT stuff in general: a computer science major with little knowledge in the IT realm.
 
Dec 10, 2021
Wow, I can tell you know your stuff, but this is extremely verbose; it is a lot of information to try to keep track of. I am not really a proficient user of this. I just watched some YouTube channels, found iSCSI, and it was something I had been thinking about and wanting to do. What would you recommend taking a look at and verifying first? I only know small things in regards to these and am basically a noob in regards to IT stuff in general: a computer science major with little knowledge in the IT realm.
What kind of machine are you using to connect to your iSCSI targets? What OS? Windows, Linux, ESX? That would give me somewhere to start. :D
 
