Hello,
I have a problem with Storage VMotion in a customer environment. The customer upgraded to an Enterprise license, and we used Storage VMotion to rework some SAN volumes.
For the SAN changes we emptied the RAID set we wanted to rebuild and moved all VMs (~15), including their storage, to different VMFS datastores.
The speed was quite OK, except for 1-2 machines which took very long (100 GB in ~2 h).
After recreating the SAN volume, I presented it back to the ESX server (formatted with a 2 MB block size) and started moving the VMs back via Storage VMotion.
But now the Storage VMotion takes quite long (20 GB in ~2 h). The first 20% is fine, but then the process gets stuck at "Migrating the active state of the Virtual Machine".
I checked the disk performance in a VM that is already located on the new storage: HD Tune reported ~120 MB/s (RAID 5).
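For reference, this is how I verified the block size on the recreated volume from the ESX service console (the datastore name here is just a placeholder):

# Print the VMFS properties of the recreated datastore; the
# "file block size" line should report 2 MB
vmkfstools -Ph /vmfs/volumes/NewSANVolume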
Check your datastore load level: commands per second, not megabytes per second.
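For example (a sketch, assuming you have service console access on the ESX host):

# Start esxtop on the service console, then press 'u' for the disk
# device view. Watch CMDS/s (commands per second) and DAVG/cmd
# (device latency in ms) rather than MBREAD/s and MBWRTN/s.
esxtop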
---
MCSA, MCTS, VCP, VMware vExpert '2009
If the slow Storage VMotion is not limited to the same VMs that had issues before (the 1-2 VMs you mentioned), then the change of block size on the new SAN volume could also be a cause of the delay.
If the Storage VMotion is slow only on the same 1-2 VMs that were problematic in the first (one-way) migration, then it is more likely down to the number of outstanding commands and the read/write I/O generated by those VMs.
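One way to check that while the migration is running (a sketch; the output file name and sampling values below are just examples) is to capture esxtop in batch mode and then look at the ACTV and QUED columns of the disk device view, which show active and VMkernel-queued commands per LUN:

# Record esxtop counters every 5 seconds for 10 minutes (120 samples)
# while the Storage VMotion runs, for offline analysis
esxtop -b -d 5 -n 120 > /tmp/svmotion-esxtop.csv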
Hi,
Why would a change of the block size make a performance difference?
Itzik
High I/O load inside the guest OS will make the Storage vMotion take longer.
How is the problem now? Did it get resolved?
There will be a performance impact (sometimes negligible) from the block size change, but in the case we are discussing it looks like I/O stress is the cause.
If the problem is still there, it would be helpful to capture the vmkernel logs from the time the SVMotion starts until the time it ends; the logs should give us a fair idea of what is happening in the system.
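A minimal sketch of capturing them, assuming classic ESX where the log lives at /var/log/vmkernel (the path differs on ESXi, and the output file name is just an example):

# Follow the vmkernel log for the duration of the Storage VMotion and
# keep a copy for analysis
tail -f /var/log/vmkernel | tee /tmp/svmotion-vmkernel.log

# Afterwards, pull out the migration-related entries
grep -i migrate /tmp/svmotion-vmkernel.log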