Environment:
3 ESXi hosts (two on 4.1, one on 5.0)
vCenter 5
2 NFS datastores on a single NAS (2 independent RAID arrays)
Just brought up the second NFS datastore and chose a powered-down VM to test migration. Used the wizard to select the new NFS datastore.
It's a simple 80 GB VM, but the migration has been running for almost 2 hours, and during the entire process, VMs on the same host and on the first datastore keep going red with Total Disk Latency alarms, making those VMs unusable.
Other VMs on the same datastore, but on different hosts, seem unaffected.
How can I improve the datastore migration process without completely killing every VM on that host?
Thanks!
Hi,
What's your connectivity to the NAS like, bandwidth-wise?
The NAS has dual 1 GbE NICs, load-balanced behind a single IP address for the NAS.
Networking on the host I ran the test migration from looks as attached. The migration finally finished after 2 hours.
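As a rough sanity check on whether the network or the disks are the bottleneck, here's a back-of-envelope estimate. It assumes a single 1 GbE path (per-connection load balancing typically keeps one NFS session on one link) and a hypothetical ~70% effective throughput after NFS/TCP overhead; both numbers are assumptions, not measurements from this setup:

```python
# Back-of-envelope: how long should an 80 GB migration take if the
# network were the limiting factor?
GB = 1024**3
vm_size_bytes = 80 * GB            # 80 GB VM from the thread
link_bps = 1_000_000_000           # assumed: single 1 GbE path carries the NFS session
effective = 0.70                   # assumed: ~70% effective throughput after overhead
throughput_Bps = link_bps / 8 * effective   # ~87.5 MB/s
minutes = vm_size_bytes / throughput_Bps / 60
print(f"expected ~{minutes:.0f} min at wire speed vs ~120 min observed")
```

A copy finishing roughly 7x slower than a single link can deliver would point at array/spindle contention rather than bandwidth, which would be consistent with the latency alarms on VMs sharing the source datastore.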
ilyo wrote:
Other VM's on same datastore, but on different hosts seem unaffected.
How can I improve the datastore migration process without completely killing every VM on that host?
Can you check whether host CPU usage is high during the Storage vMotion? How does network usage look at the same time?
The natural suspect would be the physical disk spindles, but if other VMs on other hosts are fine, it might not be that.
