I get an error when I do a Storage VMotion on a VM. I ran a Storage VMotion between local storage (source) and shared storage (destination), and the SVMotion completed correctly, except that when it finished, the Summary tab showed the VM files on both the local storage and the shared storage. When I browse the local storage, no files appear.
I then tried another SVMotion between the shared storage (source) and local storage (destination), and it fails with this error:
"A general system error occurred: The virtual machine has virtual disk in link-cloned mode that prevents migration"
My shared storage is an AX100 of EMC.
Thanks in advance.
I have the same issue on my Sun StorageTek 2540.
There is nothing special on the source or the destination, and nothing in the VMX file mentions linked mode.
My guess is that vCenter is not refreshing the linked mode state.
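One way to confirm for yourself that a disk is not really a linked clone is to look at its VMDK descriptor: a linked clone's descriptor carries a parentFileNameHint line pointing at its base disk, while a standalone disk has parentCID=ffffffff ("no parent") and no hint. A minimal sketch, using a synthetic descriptor and a /tmp path as a stand-in for the real datastore path:

```shell
# Write a synthetic descriptor the way a standalone disk's looks.
cat > /tmp/sample-disk.vmdk <<'EOF'
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfs"
EOF

# A linked clone would carry a parentFileNameHint entry; its absence
# (together with parentCID=ffffffff) means the disk is standalone,
# so the linked-clone error from vCenter is spurious.
if grep -q 'parentFileNameHint' /tmp/sample-disk.vmdk; then
    echo "linked clone"
else
    echo "standalone disk"
fi
```

On a real host you would run the grep against the small descriptor .vmdk in the VM's folder, not the large flat data file.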
I can report the same issue; the VMs are not linked clones and have no linked clones made of them.
The VMs are Windows/Linux VMs. A reinstall of vCenter doesn't fix the issue, and neither does a reboot of the ESX host or the virtual machine.
We use an IBM DS4300 for our shared storage.
Same issue here (in my case the shared storage is iSCSI IET).
Here is a workaround that works for me:
Shut down the VM, remove it from the inventory, and then add it back to the inventory.
After that, VMotion works again, and the VI Client shows the VM connected to one datastore instead of the two you see after an SVMotion.
I also have the same problem. Another solution is to create a snapshot and then delete it; you can then SVMotion the machine again. But after you do this it's stuck again, and you have to create and delete another snapshot.
This can take some time if, as we have done, you are moving everything to one SAN to reorganize the first one and then moving everything back...
In an effort to relocate VMs between storage controllers, I've now run into this issue as well.
It's a limitation discussed in the 4.0 SDK known issues: http://www.vmware.com/support/developer/vc-sdk/visdk400pubs/vsdk400knownissues.html.
Hopefully there will be a fix soon! And would it be too much to ask that this fix retroactively corrects VMs affected?
I am having this same issue after a failed SVMotion. My SVMotion was only 30% complete after 12 hours on a 15 GB drive. I cancelled the operation, and now I can't start another SVMotion because of this error.
Here is a tip that can help if you have a VM stuck in limbo, with one VMDK file on one LUN and another on the LUN where you started the SVMotion. You should also find a few Dmotion-xxx.vmdk files there.
I have tested this 6 times, and every time it worked. However, I STRONGLY recommend that you back up all files belonging to the virtual machine before you proceed.
Remove the VM from the inventory.
Move all the files that have already been moved back to the starting point.
If asked, say you have moved it.
VMware will then commit all the other snapshot images (the Dmotion files) in the same folder as well.
One time I was able to do this with the VM running, but the other 5 times I had to answer that I moved it, run steps 6 and 7, and then I could turn the VM back on.
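Before moving anything back, it helps to take stock of which Dmotion leftovers are actually sitting in the VM's folder. A rough sketch of that check, where the directory and file names are made up for illustration (on a real host you would look under /vmfs/volumes/<datastore>/<vm>/):

```shell
# Stand-in for the VM's directory on the original datastore.
VMDIR=/tmp/demo-vm
mkdir -p "$VMDIR"
# Simulate what a half-finished SVMotion can leave behind.
touch "$VMDIR/myvm.vmx" "$VMDIR/myvm.vmdk" "$VMDIR/Dmotion-myvm-000001.vmdk"

# List the intermediate Dmotion files so you know what the snapshot
# commit will have to roll back into the base disks.
ls "$VMDIR" | grep -i '^dmotion'
```

Noting the sizes of these files first also gives you an idea of how long the commit will take once the VM is re-registered.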
This same bug occurred for me today, and the suggestion to create a snapshot and then remove it works like a charm! Thanks "Stream2back" for that. I hope you get the credit for posting a possible solution until a true bug fix is released.
None of these worked for me, but then that is typical of my luck, especially as this is Friday the 13th.
Anyway, it turns out I had a special case, so I'll document my working fix in case somebody else reading this has my kind of slightly different layout:
My problem arose when I SVMotioned just the C: drive volume of a Windows VM and left two other volumes where they were. Both of those volumes were in Independent Mode (persistent) because I didn't want snapshots taken of those drives when a general VM snapshot was taken as part of the main backup process.
After the move, I noticed a considerable degradation in the VM's performance and discovered that the DMotion... files were still present in the original location; looking at the settings of the VM, the two disks that weren't moved were still referenced as being redirected to the DMotion... file.
As I said, none of the snapshot recovery methods worked, because the problem disks were in Independent Mode. So I powered down the VM, unchecked the Independent Mode setting on each disk, and then performed the create-snapshot-and-delete-snapshot method. This time, on deleting the snapshot, the DMotion... file was rolled back into the main VM files, and after a lengthy recovery (the DMotion file was over 6 GB) the snapshot delete completed successfully and all the references to DMotion had been cleaned up.
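Both symptoms can be spotted straight from the .vmx file: disks whose fileName still points at a DMotion file, and disks set to independent mode (which keep the snapshot-based cleanup from touching them). The key names (scsiX:Y.fileName, scsiX:Y.mode) are the standard vmx entries; the values below are a made-up illustration:

```shell
# Synthetic .vmx fragment for illustration only.
cat > /tmp/sample.vmx <<'EOF'
scsi0:0.fileName = "myvm.vmdk"
scsi0:1.fileName = "DMotion-myvm-000001.vmdk"
scsi0:1.mode = "independent-persistent"
EOF

# Disks still redirected through a DMotion file:
grep -i 'fileName.*DMotion' /tmp/sample.vmx

# Disks in independent mode -- these must be switched back before the
# create/delete-snapshot trick can commit the DMotion data:
grep 'independent' /tmp/sample.vmx
```

Any disk that shows up in both greps is exactly the case described above: it needs Independent Mode turned off (VM powered down) before the snapshot cleanup will touch it.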
The VM then started successfully, and so far the performance issues seem to be gone. I can now go to the other two VMs showing this problem and perform the same fix; one of the DMotion files is over 23 GB as it is a busy VM, so it will be interesting...
It appears that I/O was being redirected via the DMotion... file, and this was causing the poor performance. The size of one of these files indicates it acts as a snapshot file and records changes; therefore, whenever anything happens on this VM, this change log has to be referenced. I would imagine that in time the VM would simply have ground to a halt, so it is important that this is monitored, as it could become a serious issue.
My advice, therefore, is that if you have a single-disk VM that can be snapshotted OK, then use SVMotion; I have never had an issue with it. But once you go multi-disk, think very carefully about whether an offline migration wouldn't be a better idea.
All of this was performed on VSphere with the latest patches, so it appears the bug fix isn't here yet!
It seems to me that one solution is just to clone out to a new VM; this will collect all the spurious disk files into one place. Or, if downtime is an issue, treat the VM as a physical box and do a P2V. (I have found the last approach good in the past for when you want to change VMs with RDM disks into VMs with VMDKs, etc.)
I have the same situation, and what worked for me was to power down the VM and then do an SVMotion. I really don't like the idea of powering down the machine, but out of desperation I tried it. Hope it works for someone else too.