Hi everyone,
I recently moved to a fully virtualized setup for my network, running 3x ESXi 5 servers on our GigE network.
First off, I am using only the free VMware licensing, so vCenter etc. is out of the question.
I am experiencing significant issues when copying files between hosts: I am hit with horrible network performance whenever I transfer host-to-host. Even though I am using GigE adapters & switches between my hosts, I get no more than 12 MB/s actual transfer speed (suspiciously close to a 100 Mbps link?).
On the flip side, if I use virtual guests on the exact same hosts I can happily transfer data to/from my NAS (NFS) servers at speeds between ~60-150 MB/s (proper 1000 Mbps territory).
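The "100 Mbps connection" suspicion checks out arithmetically. A quick sketch of the unit conversion (nothing ESXi-specific here, just MB/s vs. Mb/s):

```python
# Convert observed transfer rates (MB/s) to approximate line rate (Mb/s).
# 1 MB/s = 8 Mb/s, ignoring protocol overhead.

def mbytes_to_mbits(mb_per_s):
    return mb_per_s * 8

# Host-to-host copies stall at ~12 MB/s:
print(mbytes_to_mbits(12))   # 96 -- right at Fast Ethernet (100 Mbps) speed
# Guest-to-NAS copies reach 60-150 MB/s:
print(mbytes_to_mbits(60), mbytes_to_mbits(150))  # 480 1200 -- GigE territory
```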
Examples:
ESXi_1 datastore copy FROM NFS share: ~10 MB/s
Server2008 (guest on ESXi_1) copy FROM NFS: ~75 MB/s
Server2011 SBS (guest on ESXi_2) copy FROM Windows share on Server2008 (ESXi_1): ~120 MB/s
ESXi_2 datastore copy FROM NFS share: ~5 MB/s
Attached (wtf.png) example:
First I copy a 3GB file from our NFS share from within our guest OS (2008); this is indicated in blue. Peak speed: 80 MB/s.
Then I copy the exact same file from the NFS share to the host machine (datastore1); this is indicated in red. Peak speed: 7.5 MB/s.
The third transfer(s) (green) is between Windows guests on different hosts: a 6GB file plus a copy of a 360MB file, both happily moving well above the 100 Mbps ceiling, peaking around 40 MB/s.
I can reproduce these results across all three servers without any changes. Copies between guests run quite well (not quite 100% GigE utilization), but the moment I try to do anything involving the datastore itself, it simply chokes at roughly 100 Mbps speeds.
The final image (wot.png) confuses me beyond belief: why on earth can my guest download a file from the NFS-backed datastore at GigE speeds, yet the very same host cannot come anywhere near matching them?
As stated in the title, the issue appears whenever I use the ESXi datastore browser on the host via the vSphere Client, or copy between the ESXi datastore and an NFS share.
Does anyone know what could be going on?
Does ESXi 5 have some sort of restriction on file transfers between a host and NFS shares? Where is this bottleneck coming from?
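One way to narrow this down is to benchmark the local write path separately from the network. A crude sequential-write sketch in Python (the `/vmfs/volumes/datastore1` path is an assumption, substitute your own datastore; `dd` from the ESXi shell does the same job):

```python
import os
import time

def write_throughput(path, size_mb=256, chunk_mb=1):
    """Crude sequential-write micro-benchmark; returns MB/s.

    A stand-in for `dd if=/dev/zero of=<path> bs=1M`: run it against
    the suspect datastore path and against a known-fast disk, and
    compare the two numbers to separate disk from network issues."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force the data out of the OS cache
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed

# Hypothetical path -- point it at a file on the datastore under test:
# print(write_throughput("/vmfs/volumes/datastore1/benchfile"))
```

If the datastore path alone is slow while the network tests fine, the bottleneck is in the storage stack (controller, cache policy) rather than the wire.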
Thanks in advance for any ideas/help you guys can throw my way.
P.S. Sorry for the wall of text, I really wanted to give as much info as possible.
What kind of storage do you have locally on the host?
I've seen this issue with write-through controllers: http://wahlnetwork.com/2011/07/20/solving-slow-write-speeds-when-using-local-storage-on-a-vsphere-ho...
Hi Chris,
thanks for your reply.
That sounds exactly like my issue. I will have a look at the controllers we are using; I do believe they are indeed write-through.
I am going to go look at my options and get back to you ASAP with my question updated. Thanks a lot in advance.
Hi Chris,
Just coming back with an update. It was the controllers: we ended up purchasing a RAID controller with caching and a BBWC module for one server to give it a whirl, and the performance increase was beautiful. We went from around 5 MB/s to 120-200 MB/s on some transfers.
Thank you greatly for your help.
Regards,
Mitch.