andreaspa
Hot Shot

Problems with vMotion

Hi,

I just got myself a couple of new ESX hosts, and wanted to migrate VMs from the old hosts to the new ones.

They don't share any storage, so the plan was to do non-shared-storage vMotion. I migrated a couple of VMs without any issues.

So far so good: no issues, good performance (120 MB/s on a 1 Gbit link), and no downtime needed for customer VMs.

Then I hit a brick wall: a couple of VMs won't accept being migrated online, and fail with this error:

The virtual machine requires hardware features that are unsupported or disabled on the target host:

* General incompatibilities

If possible, use a cluster with Enhanced vMotion Compatibility (EVC) enabled; see KB article 1003212.

CPUID details: incompatibility at level 0x1 register 'ecx'.

Host bits: 0111:0110:1101:1000:0011:0010:0000:0011

Required:  x001:x11x:10x1:1xx0:xx10:xx1x:xx0x:xx11
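
For anyone who wants to decode these strings: the leftmost character is bit 31 of ECX, and an 'x' in the required line means "don't care". Here's a minimal Python sketch of the comparison (the feature names are the standard Intel CPUID leaf 0x1 ECX bit assignments, not something VMware prints):

```python
# Compare the "Host bits" and "Required" strings from the vMotion error.
# 'x' in the required mask means "don't care"; any other position must
# match the host bit exactly.

# Well-known Intel CPUID leaf 0x1 ECX feature bits (partial list).
ECX_FEATURES = {
    0: "SSE3", 9: "SSSE3", 12: "FMA", 13: "CMPXCHG16B",
    19: "SSE4.1", 20: "SSE4.2", 22: "MOVBE", 23: "POPCNT",
    25: "AES", 26: "XSAVE", 27: "OSXSAVE", 28: "AVX",
    29: "F16C", 30: "RDRAND", 31: "Hypervisor",
}

def mismatches(host: str, required: str):
    host = host.replace(":", "")
    required = required.replace(":", "")
    assert len(host) == len(required) == 32
    bad = []
    for i, (h, r) in enumerate(zip(host, required)):
        bit = 31 - i          # leftmost character is bit 31
        if r != "x" and r != h:
            bad.append((bit, ECX_FEATURES.get(bit, f"bit {bit}"), h, r))
    return bad

host = "0111:0110:1101:1000:0011:0010:0000:0011"
required = "x001:x11x:10x1:1xx0:xx10:xx1x:xx0x:xx11"
for bit, name, h, r in mismatches(host, required):
    print(f"ECX bit {bit:2d} ({name}): host={h}, VM requires {r}")
```

Against the strings above, this flags bits 12, 22, 29 and 30 (FMA, MOVBE, F16C, RDRAND): the VM requires them to be 0 while the new host exposes them as 1, which looks like a CPUID mask or EVC baseline stuck on the VM.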

The odd thing here is that only some VMs experience this issue. I know for a fact that some of them were created and started in the old cluster, and some work while some don't. I haven't been able to tie it to OS level either (two machines have the same OS, and one works while the other doesn't). Most VMs are Windows VMs.

Naturally, I ran the CPUID test ISO that VMware provides, and got this result:

[Screenshot: 2015-07-02 15-49-01.png — CPUID test output from the two hosts]

The old ESX host on the left, the new one on the right.

Does anyone have any suggestions on how I should proceed to get the vMotions completed successfully? Any tips for BIOS settings to check?

What really makes it strange is that some VMs migrate just fine, while others plain refuse. I suspect it may be some setting deep inside the .vmx file, so that's my next step to investigate. I need to get this done without rebooting or shutting down customers' VMs. :(
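
Since I'm going to be digging through .vmx files anyway, here's a rough Python sketch of the plan. Note the assumptions: the cpuid.<leaf>.<register> key format is how custom CPUID masks commonly appear in .vmx files (not verified), and the datastore path is just an example:

```python
# Scan a datastore folder of .vmx files for CPUID mask entries.
import re
from pathlib import Path

# Assumed key form: cpuid.1.ecx = "..." (custom CPUID mask entries).
CPUID_KEY = re.compile(r'^(cpuid\.\S+)\s*=\s*"(.*)"', re.IGNORECASE)

def find_cpuid_masks(vmx_dir: str):
    for vmx in Path(vmx_dir).rglob("*.vmx"):
        hits = []
        for line in vmx.read_text(errors="ignore").splitlines():
            m = CPUID_KEY.match(line.strip())
            if m:
                hits.append(f"{m.group(1)} = {m.group(2)}")
        if hits:
            print(vmx)
            for h in hits:
                print("   ", h)

find_cpuid_masks("/vmfs/volumes/datastore1")  # example path
```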

Version info:

vCenter 5.5.0 build 2646482

Old ESX: 5.1 build 1312873 (HP ProLiant DL360 G7)

New ESX: 5.5 build 2718055 (HP ProLiant BL460c Gen9)

4 Replies
UmeshAhuja
Commander

Hi,

It looks like the VMs have different hardware versions, or the CPU capabilities of the two hosts are not the same.

To resolve this issue, place either the source or destination host in an EVC-enabled cluster.

OR

If you cannot place the source or destination host in an EVC-enabled cluster, you may be able to resolve the issue by upgrading the virtual machine hardware version (sketched below).

I know both possibilities require downtime, but there doesn't appear to be any other option.
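
If you do take the outage, the hardware upgrade can be scripted. A minimal pyVmomi sketch, assuming you already have a connected session and a vim.VirtualMachine object called vm (waiting on each task before the next call is omitted for brevity):

```python
# Hedged sketch: upgrade VM hardware, which requires powering the VM off.
# 'vm' is assumed to be a pyVmomi vim.VirtualMachine looked up beforehand.
vm.PowerOffVM_Task()                 # downtime starts here
vm.UpgradeVM_Task(version="vmx-10")  # hardware version 10 = ESXi 5.5 level
vm.PowerOnVM_Task()
```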

Thanks and regards,
Umesh Ahuja

If your query is resolved, please consider awarding points by marking the answer as correct or helpful.
sneddo
Hot Shot (accepted solution)

It looks like you either have EVC mode applied to these VMs (at some point) or a custom CPUID mask set. Either way, you will need an outage to change this.

The other option would be to set the EVC mode on your new cluster to a lower level, but you would need to shut down all VMs on that cluster first. Lesson for next time you do this, I guess...
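
If you go that route, the EVC change can also be done through the vSphere API. A rough pyVmomi sketch; the vCenter address, credentials and cluster name are placeholders, and powered-on VMs above the requested baseline will still block the task:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (placeholder host/credentials; lab-only SSL handling).
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)

# List the EVC baselines this vCenter knows about, e.g. "intel-westmere".
for mode in si.capability.supportedEVCMode:
    print(mode.key)

# Find the target cluster and request a lower EVC baseline.
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "NewCluster")
task = cluster.EvcManager().ConfigureEvcMode_Task("intel-westmere")
# ...wait for the task, then power the VMs back on...
Disconnect(si)
```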

andreaspa
Hot Shot

Thanks for the reply.

I did check the VM hardware versions; they range from 7 to 9, as do the ones I migrated successfully. So no luck there.

From what I can tell, the machines are configured exactly the same, so it's time to deep-dive into the .vmx files and see if there is any difference.

As a side note, from the old servers I could vMotion a VM to a server one generation newer, so X5650 to E5-2680 went fine, but X5650 to E5-2650 v3 didn't.

This seems like a case for VMware support to pinpoint where the problem lies, since it isn't making any sense at the moment. :)

andreaspa
Hot Shot

Here's an update.

I actually managed to do this online, with help from EVC.

This is how I solved it (with some help from VMware support):

1) Create a new cluster called "migration", and enable EVC with the Westmere baseline.

2) Put one of the Gen9 servers in this cluster.

3) vMotion from the G7 servers to the "migration" cluster now went through successfully.

4) vMotion from the "migration" cluster to the real cluster also went fine (a scripted version of this hop is sketched below).
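
For the curious: each hop in steps 3 and 4 is just a shared-nothing relocate. A rough pyVmomi sketch of one hop, assuming a connected session and that vm, dest_host and dest_ds have been looked up already (all names are placeholders):

```python
from pyVmomi import vim

# One "bounce" hop: hot-migrate a VM (host + datastore at once, i.e.
# shared-nothing vMotion) to a host in the EVC-enabled "migration" cluster.
# vm, dest_host and dest_ds are assumed to be looked up beforehand
# (e.g. via a ContainerView as in the earlier sketch).
spec = vim.vm.RelocateSpec()
spec.host = dest_host                       # vim.HostSystem in "migration"
spec.pool = dest_host.parent.resourcePool   # the cluster's root pool
spec.datastore = dest_ds                    # vim.Datastore local to dest_host
task = vm.RelocateVM_Task(spec, vim.VirtualMachine.MovePriority.highPriority)
# ...wait for the task, then repeat the hop into the real cluster...
```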

The conclusion is that I have to "bounce" my VMs via the migration cluster, but since I can do this online for all VMs, I'm a happy camper again.

Thanks for the tips, guys!
