beerorkid
Contributor

Migrate VM: "Keep VM config files..." option greyed out


Just installed four new ESX servers. They are in a separate cluster because of different CPU architectures. VMotion, ESX Standard, and the VC agent are all licensed up.

When I try to migrate a VM that is powered off to the new cluster and it gets to the "Select Datastore" step, the option "Keep virtual machine configuration files and virtual disks in their current locations" is greyed out. The option to move the files is selected. The files do not need to be moved to a new datastore.

I do get the option when moving to another cluster, so I am thinking it must be some setting on the new cluster we have set up. DRS and HA are not enabled on the new cluster, but are on the others. Not sure how that would affect this issue; just mentioning it.

Any suggestions?


9 Replies
MR-T
Immortal

Does each of the hosts have access to the same LUNs?

That's my guess.

esiebert7625
Immortal (Accepted solution)

Is your VM located on local storage on the ESX server? If so, you must move it to another datastore when migrating to another ESX server. If your VM is on the SAN, you will have the option to keep it in the current location. Also, are the LUNs presented to all the ESX servers with the same LUN numbers?

beerorkid
Contributor

Well, we have one LUN that most of our VMs reside on. The cluster I am trying to move from does have more LUNs available to it than the cluster I am trying to move the VM to.

Will check to see if that is the case.

beerorkid
Contributor

Not local.

The LUNs are named the same; will check to see if the long names under /vmfs/volumes are the same.

MR-T
Immortal

I'm sure you'll find this is the issue.

esiebert7625
Immortal

I concur; LUN presentation is usually the culprit.

beerorkid
Contributor

Wow, so the long names are different.

New cluster hosts I am having problems with:

4638e442-6755e9f4-793c-0019bb3831d8 VMDISK01 (1)

Old cluster hosts:

4542bdc1-e30a8072-8b2b-0002a54e8df5 VMDISK01

Wonder how I change that? If you could help me a bit on that one, it would be appreciated.
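The mismatch above boils down to a simple check: on ESX, each entry under /vmfs/volumes is a friendly label pointing at the volume's UUID (the "long name"), and two hosts only share a datastore if they resolve the same label to the same UUID. A minimal sketch of that comparison, using the UUID values quoted in this thread:

```python
# UUIDs copied from the posts above: the label "VMDISK01" resolves to a
# different VMFS UUID on each cluster, so the hosts see different volumes.
uuid_seen_by_old_cluster = "4542bdc1-e30a8072-8b2b-0002a54e8df5"  # VMDISK01
uuid_seen_by_new_cluster = "4638e442-6755e9f4-793c-0019bb3831d8"  # VMDISK01 (1)

if uuid_seen_by_old_cluster == uuid_seen_by_new_cluster:
    print("Both clusters see the same VMFS volume")
else:
    print("Different VMFS volumes - check LUN presentation")
```

The "(1)" suffix on the new cluster's label is another hint that ESX saw it as a distinct volume and deduplicated the display name.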

esiebert7625
Immortal

If the name is different, that means it sees it as a different VMFS volume, so you'll need to start with LUN presentation. Check with your SAN admin and make sure each ESX server sees the same LUN numbers, i.e., if ESX Server1 sees LUNs 8, 9, 10, 11, then ESX Server2 should also see LUNs 8, 9, 10, 11. Check Configuration > Storage Adapters to see how each server sees the LUNs.

Also see these guides...

SAN Configuration Guide - http://www.vmware.com/pdf/vi3_esx_san_cfg.pdf

SAN Conceptual and Design Basics - http://www.vmware.com/pdf/esx_san_cfg_technote.pdf

SAN System Design and Deployment Guide - http://www.vmware.com/pdf/vi3_san_design_deploy.pdf
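The manual check described above can be sketched in a few lines: collect the LUN numbers each host reports (in practice, from Configuration > Storage Adapters) and flag any host whose set differs. The host names and LUN numbers below are hypothetical, purely for illustration:

```python
# Hypothetical inventory of LUN numbers visible to each ESX host.
luns_by_host = {
    "esx-old-1": {8, 9, 10, 11},
    "esx-old-2": {8, 9, 10, 11},
    "esx-new-1": {8, 9, 10},  # missing LUN 11 -> sees a different volume set
}

# Every host should see the full set; report which LUNs each host is missing.
expected = set.union(*luns_by_host.values())
mismatched = {host: sorted(expected - luns)
              for host, luns in luns_by_host.items()
              if luns != expected}
print(mismatched)  # hosts mapped to the LUN numbers they are missing
```

Any host that shows up in the result needs its LUN presentation fixed on the SAN side before the datastore will appear as the same volume cluster-wide.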

beerorkid
Contributor

Cool. Thanks so much for the help and the links. I will get it figured out tomorrow, reply with how it went, and then assign the points to you folks appropriately.

Darn fine community around here.
