Hello everyone!
We have been testing Windows Server 2012 quite a bit lately on a couple of servers at my office. One of the first things we tried is the new ability to run a VM from a VHD or VHDX file hosted on an SMB 3.0 file share on another Windows Server 2012 or Windows 8 machine. It took a little work at first to get things communicating properly (I ended up having to mount the network VHD file on the server, take it offline, and then attach it in the virtual machine settings). However, to my surprise, the performance was quite poor compared to what others seem to be describing.
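For reference, this is roughly the equivalent of what we ended up doing, as a PowerShell sketch rather than through the VM settings UI (the server, share, and VM names below are placeholders for our lab, not our actual names):

```powershell
# Sketch: attach a VHDX sitting on an SMB 3.0 share to a Hyper-V VM.
# '\\FILESERVER\VMStore' and 'TestVM' are placeholder names.
$vhdPath = '\\FILESERVER\VMStore\TestVM.vhdx'

# Note: the Hyper-V host's computer account (and the admin running this)
# needs Full Control on both the share and the underlying NTFS folder.
Add-VMHardDiskDrive -VMName 'TestVM' `
                    -ControllerType SCSI `
                    -Path $vhdPath
```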
Now, these servers aren't the best, but they're pretty decent systems. The first server is a Supermicro with dual 2.0 GHz eight-core Opteron processors, 16 GB of RAM, and four Intel gigabit network ports. Server 2012 Datacenter is installed on a 450 GB 15k SAS drive connected to an Adaptec 6805 RAID controller.
The second server is an HP DL360e Gen8 with a 1.8 GHz quad-core processor, 12 GB of RAM, a quad-port gigabit network adapter, and Server 2012 Datacenter installed on a 1 TB 7.2k SAS drive connected to an HP Smart Array P410/512MB RAID controller. There is also a second 450 GB 15k SAS drive in this server, which we are using to store the VHDX files and share them out on the network.
On both servers, HDTune registered an average throughput of between 140 MB/s and 170 MB/s for the local drives, just what I would expect from those kinds of drives. When transferring files directly from one server to the other, we saw about 110 MB/s sustained, which pretty much saturates the gigabit connection. Going one step further, we created a VHDX file and built a Windows Server 2012 VM on the local storage of each server. Running HDTune inside each VM, the numbers were almost identical to what we got on the physical host, between 140 and 170 MB/s. However, when the VHDX file was stored on the network share on the other server (again, on its own 450 GB 15k SAS drive), we only got about 30 MB/s of throughput.
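One thing we can check, in case anyone asks, is whether the connection is actually negotiating the SMB 3.0 dialect rather than falling back to 2.x. This is the check we would run from the Hyper-V host while the VM is doing I/O (a sketch; interpreting the output is on us):

```powershell
# Confirm the connection to the file server negotiated SMB 3.0;
# an older dialect (2.x) would go a long way toward explaining
# much lower throughput on the share.
Get-SmbConnection |
    Select-Object ServerName, ShareName, Dialect, NumOpens |
    Format-Table -AutoSize
```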
Since that performance was quite a letdown compared to what we expected, I set up iSCSI on the HP server, created an iSCSI virtual hard drive file on the 450 GB 15k SAS drive, and mounted that target in the first virtual machine on the other server. We dedicated one port on each server to the iSCSI network, with a Cat6 crossover cable connected directly between the two ports (no switch or anything in between) and jumbo frames enabled on both interfaces. Even with this configuration we only managed about 40 MB/s of throughput.
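For completeness, this is roughly the target-side setup we used (a sketch under our lab assumptions; the paths, target name, initiator IQN, and adapter name are placeholders, and the jumbo frame registry value can vary by NIC driver):

```powershell
# Target side (HP server): carve an iSCSI virtual disk out of the 15k
# drive and publish it to the other server's initiator. Requires the
# iSCSI Target Server role. All names/IQNs below are placeholders.
New-IscsiVirtualDisk -Path 'E:\iSCSI\LabDisk.vhd' -SizeBytes 100GB
New-IscsiServerTarget -TargetName 'LabTarget' `
    -InitiatorIds @('IQN:iqn.1991-05.com.microsoft:supermicro-host')
Add-IscsiVirtualDiskTargetMapping -TargetName 'LabTarget' `
    -Path 'E:\iSCSI\LabDisk.vhd'

# Both sides: enable jumbo frames on the dedicated iSCSI port.
# (Keyword/value depend on the driver; 9014 is common for Intel NICs.)
Set-NetAdapterAdvancedProperty -Name 'iSCSI-NIC' `
                               -RegistryKeyword '*JumboPacket' `
                               -RegistryValue 9014
```

We also verified the jumbo frames were actually passing end to end with a do-not-fragment ping at the larger payload size (ping -f -l 8972) before running the throughput tests.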
So what are we missing here? Is there something crucial I'm overlooking, or should I just expect this kind of massive performance drop when running a VHD over a network connection? Microsoft supposedly made major changes in SMB 3.0 to improve throughput for VHD files on network shares, but this is definitely not what we're seeing.
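If it helps with diagnosis, one of the headline SMB 3.0 features is SMB Multichannel, and I'm not sure it's actually engaging across our gigabit ports. This is the sketch of what we'd run on the Hyper-V host while the VM is under load (again, reading the output correctly is our assumption):

```powershell
# Show whether SMB Multichannel is active and how many NIC pairs it is
# using; with four gigabit ports per box we'd hope to see several entries.
Get-SmbMultichannelConnection | Format-Table -AutoSize

# RSS-capable (or RDMA-capable) NICs are what let multichannel spread
# load; check that RSS is enabled on the adapters.
Get-NetAdapterRss | Select-Object Name, Enabled
```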
Any help or advice you might have would be greatly appreciated. Right now this is just our test lab environment, but I would expect better performance in both of these scenarios than what we seem to be getting.