I have vMotion up and running, but I am disappointed in the performance. A very small VM (2 vCPU and 4 GB of RAM) that runs just our anti-virus software takes about 45 seconds to migrate. I figure that if I needed to evacuate all the VMs from a host for some serious reason, it would take far too long before the host was empty.
I am using one of the iSCSI vmkernel ports on our SAN network for vMotion. This port uses jumbo frames and is limited to 1 Gbit/s. I believe the poor vMotion performance is due to a few issues:
I believe I need to physically separate the vMotion network. The big question is whether this new network needs to run at 1 Gbit/s or 10 Gbit/s to increase performance. Of course 10GBase-T comes at a large cost, but do I really need to spend big dollars on it?
If I need a physically separate vMotion network, I guess it will need its own dedicated IP scheme. We have 10.0.0.x for our device network and 192.168.130.x for our SAN, so we could use 192.168.140.x for vMotion.
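For what it's worth, a dedicated vMotion vmkernel port on that new subnet can be created from the ESXi shell along these lines. This is only a sketch: the port-group name `vMotion-PG`, the interface `vmk2`, and the host IP are assumptions to adjust for your own vSwitch layout.

```shell
# Add a new vmkernel interface on an existing port group dedicated to vMotion.
# "vMotion-PG" and vmk2 are placeholder names -- substitute your own.
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion-PG

# Give it a static address in the proposed 192.168.140.x vMotion subnet.
esxcli network ip interface ipv4 set --interface-name=vmk2 \
    --ipv4=192.168.140.11 --netmask=255.255.255.0 --type=static

# Tag the interface for vMotion traffic (ESXi 5.1 and later; on older
# hosts this is done in the vSphere Client under vmkernel port properties).
esxcli network ip interface tag add -i vmk2 -t VMotion
```

Keeping the vMotion subnet non-routed and isolated to its own vSwitch/uplinks is also common practice, since vMotion traffic is unencrypted in older vSphere versions.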
Before I start spending a bunch of money, I wanted to get input from others who may be more knowledgeable about vMotion than I am.
Thanks for your time and help. I look forward to your comments and suggestions.
vMotion can easily saturate NICs, so you shouldn't run vMotion on the same NICs as your storage traffic. As a sanity check: 4 GB of RAM is 32 Gbit, so even a dedicated 1 Gbit/s link needs roughly 32 seconds for the memory copy alone; 45 seconds on a link shared with iSCSI is about what you'd expect. Other than purchasing a 10 Gbit/s infrastructure, you could also add e.g. an additional 4-port NIC to each host and configure Multi-NIC vMotion, which will speed up migrations by roughly 4x (up to 16 NICs are supported).
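To illustrate the Multi-NIC setup: each physical uplink gets its own vMotion-enabled vmkernel port, and each port group is pinned to exactly one active uplink in the teaming policy (all others set to unused). The interface names, port groups, and IPs below are assumptions, not your actual config.

```shell
# Multi-NIC vMotion sketch: one vMotion vmkernel port per uplink.
# Repeat the pattern for each additional NIC you dedicate to vMotion.
# Prerequisite (done in the vSphere Client): in each port group's NIC
# teaming policy, set exactly ONE uplink active and the rest to unused.

# First vMotion port (pinned to uplink vmnic4 via port group vMotion-A)
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion-A
esxcli network ip interface ipv4 set --interface-name=vmk2 \
    --ipv4=192.168.140.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface tag add -i vmk2 -t VMotion

# Second vMotion port (pinned to uplink vmnic5 via port group vMotion-B)
esxcli network ip interface add --interface-name=vmk3 --portgroup-name=vMotion-B
esxcli network ip interface ipv4 set --interface-name=vmk3 \
    --ipv4=192.168.140.12 --netmask=255.255.255.0 --type=static
esxcli network ip interface tag add -i vmk3 -t VMotion
```

With this in place, ESXi stripes a single migration across all vMotion-tagged interfaces, which is where the roughly linear speedup per added NIC comes from.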