i'm moving a vm from one lun to another using vmotion.
the vm has a 16gb and a 40gb disk.
it migrates to 100% then fails with "a general system error has occurred: internal error"
what logs can i view to see why it is failing?
i ran other cold migrations to that lun and they went fine. there is certainly enough disk available.. and it isn't a block size issue.
right now i'm just copying the entire folder via Veeam FastSCP. great app.. any thoughts?
Do you have some particular device mounted on vm (CD-ROM or floppy)?
Can source and target ESX see the two LUNs?
Did you get any warning in the migration wizard?
nothing was attached as far as cd or floppy.
the source and target esx host was the same, and it can see both luns.
no warnings in the wizard.
i was able to run the migration successfully by first moving the config and C drive.. then doing a second migration to move the D drive. all to the same lun.. very odd.
maybe this is a new feature of 2.0.2 of VC?
my esx hosts were obviously at 3.0.1 at the time.. fully patched
Hello,
Since you are doing a cold migration (i.e. the VM is not running), it is not using vMotion, which means everything travels over the service console.
An attached floppy or CD-ROM should not make a difference in this mode, so it is a difference in the way data is being transferred. Since you split up the disks to do the migration, I assume you are not using the VIC, but instead doing this by hand?
How did you do your migrations? That will help in determining what is happening. I have moved multi-VMDK VMs using the VIC with no issues, and by hand with no issues, so perhaps it was your method? It sounds like you were using the VIC; are you sure it was a Migration and not a Clone?
Best regards,
Edward
more likely a new bug!
😆
normal scenario (powered off vm):
1. in VC, select migrate to new host
2. choose the same host the vm resided on
3. select a new lun
4. select to move all disks and config files
the destination lun had plenty of space for the vm. the vm failed.
scenario that worked (powered off vm):
1. in VC, select migrate to new host
2. choose the same host the vm resided on
3. choose advanced mode for lun location
4. re-point only the first disk and config files to the new lun
the migration was successful; now repeat for the remaining disks.
so that was my scenario.. it is possibly just a bug.
Hello,
Certainly sounds like it is... I would contact your vmware support agent and open a case. Can you duplicate this scenario with another VM or only that one?
Best regards,
Edward
Are you sure that C and D disks were on the same LUN?
If so, this could only be a bug.
looks like a bug.. tried a collection of other vms in another cluster connected to a completely different network/san/etc.
some of the vms migrated cold with no problems.. but hit another where i had to do the C and D drives separately.
i'm not seeing any consistency yet between the 2 vms that failed. c and d are definitely both on the same LUN.
I've been working with tech support and they have managed to reproduce this at their end. They're raising a problem record with engineering to try and resolve the issue.
At the moment, i've got several VMs that consistently fail to migrate, so i'll have to move them from the console.
Regards
Daniel
nice that is dandy!
you have VC 2.0.2 too, correct?
have you tried 3.0.2? perhaps it fixes this issue.
i have 1 host at 3.0.2 and plan to upgrade another this weekend... so i'll try some cold migrations then
We've got VC 2.0.2, unfortunately we're still running ESX 3.0.1 42829 as one of our VMs is constantly freezing when we run it, possibly a NUMA issue. We've decided and agreed with Tech Support that until we can resolve that issue, we are going to freeze any patching or upgrading of our ESX hosts. This way we can keep the environment at a constant level to help troubleshooting.
Hope you have more luck with 3.0.2
Regards
Daniel
what hardware are you running?
i'm fully patched on esx 3.0.1 running on dell 6850s
We're running DL585 G1s and DL585 G2s with four dual-core Opterons and 32GB RAM (G1) and 64GB (G2), attached to the SAN with 4Gb fibre links.
Seeing the same problem -
migrates to 100% then fails with "a general system error has occurred: internal error"
ESX 3.0.2, VC 2.0.2 on SUNFire x4100 M2, migrating cold (powered off) guest from SAN LUN to local disk.
When manually migrating these I got an error as well:
1. copied the entire guest vm folder from /vmfs/volumes/MY_SAN_VOL/ to /vmfs/volumes/MY_LOCAL_VOL/
2. removed the MY_HOST on MY_SAN_VOL from inventory
3. registered the copied VM on MY_LOCAL_VOL:
vmware-cmd -s register /vmfs/volumes/MY_LOCAL_VOL/MY_HOST/MY_HOST.vmx
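As a recap, here's a minimal shell sketch of steps 1-3. MY_SAN_VOL, MY_LOCAL_VOL and MY_HOST are the placeholders from above; to keep the demo safe to run anywhere, it copies between throwaway temp directories instead of real datastores, and the register command is left commented since vmware-cmd only exists on an ESX host:

```shell
# MY_SAN_VOL / MY_LOCAL_VOL / MY_HOST are stand-ins; this demo uses
# temp directories in place of the real /vmfs/volumes/... paths.
SAN_VOL=$(mktemp -d)     # stands in for /vmfs/volumes/MY_SAN_VOL
LOCAL_VOL=$(mktemp -d)   # stands in for /vmfs/volumes/MY_LOCAL_VOL
VM=MY_HOST

# fake source VM folder so the copy below has something to work on
mkdir -p "$SAN_VOL/$VM"
: > "$SAN_VOL/$VM/$VM.vmx"

# 1. copy the entire guest folder to the destination volume
cp -r "$SAN_VOL/$VM" "$LOCAL_VOL/$VM"

# 2. remove the old VM from inventory (a VI Client step, no CLI here)

# 3. register the copy from the service console (commented out, since
#    vmware-cmd is only present on an ESX host):
#    vmware-cmd -s register "$LOCAL_VOL/$VM/$VM.vmx"
echo "copied to $LOCAL_VOL/$VM"
```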
When I tried to power up, I get "The request refers to an object that no longer exists or never existed."
I fixed this by removing (commenting out)
#sched.swap.derivedName = "/vmfs/volumes/SOME_LONG_ID_#_FOR_MY_SAN_VOL/MY_HOST/MY_HOST-054cecc0.vswp"
from the .vmx file.
SOME_LONG_ID_#_FOR_MY_SAN_VOL is what MY_SAN_VOL symlinks to.
The symlink was fine, but the .vswp file did not exist.
The VM then starts up fine.
Makes me wonder if I can remove that line and then cold-migrate (via the VC wizard).
yeah that fixed it.
1. check your .vmx file (/vmfs/volumes/MY_SAN_VOL/MY_HOST/MY_HOST.vmx)
2. See if you have a line (at the bottom) defining "sched.swap.derivedName"
3. See if the file it points to exists. If not, comment out that line.
4. cold-migration (from VC wizard) should work now.
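Steps 1-3 can also be scripted. A sketch, assuming GNU sed; for safety the demo builds a throwaway .vmx with a stale swap entry, whereas on a real host you would point VMX at the actual config file (e.g. /vmfs/volumes/MY_SAN_VOL/MY_HOST/MY_HOST.vmx):

```shell
# Demo .vmx with a stale swap entry; on a real host set VMX to the
# actual config file instead of building a sample like this.
VMX=$(mktemp)
cat > "$VMX" <<'EOF'
memsize = "1024"
sched.swap.derivedName = "/vmfs/volumes/4a1b2c3d-deadbeef/MY_HOST/MY_HOST-054cecc0.vswp"
EOF

# extract the swap file path the VM last recorded, if the line exists
SWAP=$(sed -n 's/^sched\.swap\.derivedName *= *"\(.*\)"$/\1/p' "$VMX")

# if the entry is present but the .vswp it points to is gone,
# comment the line out (GNU sed in-place edit)
if [ -n "$SWAP" ] && [ ! -e "$SWAP" ]; then
  sed -i 's/^sched\.swap\.derivedName/#&/' "$VMX"
  echo "commented out stale swap entry"
fi
```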
Best reference I could find is:
"Optionally if you want your vswp file to stay in the VMs directory add the following line to the VMX file: sched.swap.dir = "/vmfs/volumes/VM-Volume1/MyVM/" (You do not need to worry about updating the existing sched.swap.derivedName parameter, it is generated by the VM and written to the config file each time the VM powers on.)"
http://vmware-land.com/Vmware_Tips.html
- Christian
I was also getting "the request refers to an object that no longer exists or has never existed" after manually copying and registering vms. A "service mgmt-vmware restart" fixed that.
I was having this same error message on a recently upgraded 3.0.1-to-3.0.2 ESX node today. I did the "service mgmt-vmware restart" and it went away. I had to do it on the DESTINATION server for the VirtualCenter action (cloning, migrating, etc).
thanks for the help!
Hi
Just heard back from Tech Support: the cold migration issue only exists between VC 2.0.2 and ESX 3.0.1, and upgrading to 3.0.2 appears to resolve it. We're upgrading our ESX hosts tomorrow and will provide an update later.
Regards