I have been doing a bit of vMotion testing for a client and I have noticed some odd results in my testing which I would like to ask the wider community about.
Dedicated vMotion NIC (10Gbit)
Windows 2012 Server, idle (CPU usage <2%)
vMotion within a latency-free LAN with no packet loss.
When migrating a VM via vMotion in this ideal LAN environment, my migration times are all over the place, with variance in the 1-1.5 minute range. That is very strange to me, as I would expect the transfers to take roughly the same time each run. I know a vMotion migration first pre-copies memory and then switches the machine over, so there will be some variance depending on how many pages change while the copy is in progress, but I wouldn't have thought that this would result in such a large time swing.
The average throughput was 90-100 MB/s (megabytes).
vMotion with 10-20 milliseconds of latency
Compare that to when 10-20 milliseconds of latency is injected, and the situation gets stranger.
As soon as I inject 10-20 milliseconds of delay into the environment, the migration time becomes very consistent and the throughput actually goes up: with no latency the throughput is around 90-100 MB/s, but with 10-20 milliseconds of latency it sits constantly at 117-118 MB/s.
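For what it's worth, the numbers can be sanity-checked against the bandwidth-delay product: the amount of data that must be in flight to keep a link busy is bandwidth times round-trip time. A quick sketch using the 10 Gbit link speed and the injected RTTs from above (assuming the injected 10-20 ms is the full round-trip delay):

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return bandwidth_bps / 8 * rtt_s

LINK_BPS = 10e9  # dedicated 10 Gbit vMotion NIC

for rtt_ms in (10, 20):
    bdp_mb = bdp_bytes(LINK_BPS, rtt_ms / 1000) / 1e6
    print(f"RTT {rtt_ms} ms -> BDP {bdp_mb:.1f} MB")
```

At 10 ms the sender would need roughly 12.5 MB in flight (25 MB at 20 ms) to saturate 10 Gbit, which is far more than a default TCP window, so naively I would expect latency to make the transfer slower, not faster and more consistent. That is part of why the result above puzzles me.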
Does anyone have a good explanation for why vMotion is able to drive throughput harder when there is latency in the environment?