Hi all, I'm trying to do a Storage vMotion of a live VM from an IBM V7K to the IBM SVC controller that controls the same V7K we are moving the VM from. I hope that makes sense: basically we are putting the SVC in front of the IBM V7K so it can control multiple V7Ks for redundancy, and we are moving all of our VMs from the standalone V7K to the SVC that controls that same V7K.
When I Storage vMotion a live VM it errors out at 32% with the following error: "A general system error occurred: The source detected that the destination failed to resume. Error stack: Timed out waiting for migration data." It doesn't happen when the VM is powered down and we do a cold migration. It also works if we first Storage vMotion to an EMC SAN and then back to the IBM SVC.
I will continue to research but any help will be greatly appreciated.
One question... is the LUN backing this source datastore mapped to the SVC, or only to the ESXi hosts?
We have had this issue when svMotioning a few VMs around on a very busy NetApp system.
If your vmkernel log contains errors like these:
vmkernel: 114:03:25:51.489 cpu0:4100)WARNING: FSR: 690: 1313159068180024 S: Maximum switchover time (100 seconds) reached. Failing migration; VM should resume on source.
vmkernel: 114:03:25:51.489 cpu2:10561)WARNING: FSR: 3281: 1313159068180024 D: The migration exceeded the maximum switchover time of 100 second(s). ESX has preemptively failed the migration to allow the VM to continue running on the source host.
vmkernel: 114:03:25:51.489 cpu2:10561)WARNING: Migrate: 296: 1313159068180024 D: Failed: Maximum switchover time for migration exceeded(0xbad0109) @0x41800f61cee2
Have a look at this link to increase the timeout:
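For reference, the setting usually raised for this error is the per-VM advanced parameter fsr.maxSwitchoverSeconds (default 100 seconds). A minimal sketch of adding it to a VM's .vmx file from the ESXi shell follows; the datastore path and VM name here are hypothetical, and the VM should be powered off (or the config reloaded) before the change takes effect:

```shell
# Hypothetical .vmx path -- substitute your own datastore and VM name.
VMX="/vmfs/volumes/datastore1/myvm/myvm.vmx"

# Append the switchover timeout override only if it is not already set.
# 300 is an example value; the default is 100 seconds.
grep -q '^fsr.maxSwitchoverSeconds' "$VMX" || \
  echo 'fsr.maxSwitchoverSeconds = "300"' >> "$VMX"

# Confirm the setting was written.
grep 'fsr.maxSwitchoverSeconds' "$VMX"
```

The same parameter can also be set from the vSphere Client under the VM's advanced configuration options instead of editing the .vmx by hand.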