VMware Cloud Community
Cuthbert01
Enthusiast

Unable to live vMotion between 2 hosts in EVC enabled cluster

I apologize if this is long, but I want to provide as much info as possible.

The question is a simple one though.

Background:

vSphere 6.5 latest update.

2 hosts, identical HP hardware with identical processors.

I had to do a firmware update and some other steps to resolve an iLO conflict.

Powered off VMs, put Host1 in maintenance mode and performed the iLO steps.

Host came back up with iLO error fixed. I then patched host using update manager.

That went fine and host came up clean and compliant with the patch baseline.

Shut down the VMs on Host2 and entered maintenance mode.

Performed the iLO steps on it and it came back up with the iLO error fixed.

In vCenter, the host showed as disconnected and I could not reconnect it.

Searching the web, I found that the issue was caused by one host having the latest patches (which include the Spectre patches) while the second host didn't.

I moved Host2 out of the cluster and was able to reconnect it and apply the patches.

It came up clean with no issues.

I was then able to move it back into the cluster.

I started all VMs and they came up fine. I set DRS back to fully automated and waited for it to balance things out.

I noticed that Host2 had a memory usage warning.

I tried to vMotion a VM to Host1 and got the error "The target host does not support the virtual machine's hardware requirements."

It suggested that I use a cluster with EVC enabled.

The EVC mode shown on the VMs on both hosts is the same: "Sandy Bridge".

I can live vMotion from Host1 to Host2 with no issue.

I did NOT reboot Host2 after putting it back in the cluster.

Cluster EVC was already set before I became involved.
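The one-way failure is consistent with the VMs on Host2 having been powered on with CPU features that Host1 doesn't currently expose (for example the speculative-execution control bits the Spectre microcode adds). A minimal sketch of the compatibility check vCenter performs, with illustrative feature names rather than real vSphere featureMask identifiers:

```python
# Sketch of the pre-vMotion compatibility check: a VM can only move to a
# host that offers every CPU feature the VM was powered on with.
# Feature names below are illustrative, not actual vSphere identifiers.

def missing_features(vm_features, target_host_features):
    """Return the CPU features the VM uses that the target host lacks."""
    return sorted(set(vm_features) - set(target_host_features))

# Hypothetical: Host2 exposes the Spectre-related bits, Host1 doesn't.
host1 = {"sse4.2", "aes", "avx"}
host2 = {"sse4.2", "aes", "avx", "ibrs", "ibpb", "stibp"}

vm_on_host2 = host2   # a VM powered on on Host2 sees Host2's feature set
vm_on_host1 = host1

print(missing_features(vm_on_host2, host1))  # ['ibpb', 'ibrs', 'stibp']
print(missing_features(vm_on_host1, host2))  # []
```

This is why Host1 → Host2 still works (Host2 offers a superset) while Host2 → Host1 fails.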

Question:

Will shutting down the VMs on Host2 and rebooting it help with the issue?

14 Replies
a_p_
Leadership

I ran into such an issue some time ago too. For whatever reason, one of the hosts did not properly apply the EVC mode, and none of the VMs that were powered on on that host could be moved.

Rebooting that host resolved the issue for me.
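If you want to confirm this condition before rebooting, you can compare each host's applied EVC mode with the cluster's; the vSphere API exposes both as `summary.currentEVCModeKey`. A hedged sketch, with the comparison factored out so it runs without a vCenter connection (the commented pyVmomi lines are illustrative):

```python
# Sketch: find hosts whose applied EVC mode differs from the cluster's.
# The property name (summary.currentEVCModeKey) matches the vSphere API;
# treat the connection details as illustrative.

def hosts_out_of_sync(cluster_evc_key, host_evc_keys):
    """Return host names whose applied EVC mode differs from the cluster's."""
    return [name for name, key in host_evc_keys.items()
            if key != cluster_evc_key]

# With pyVmomi you would populate these from live objects, e.g.:
#   cluster_evc_key = cluster.summary.currentEVCModeKey
#   host_evc_keys = {h.name: h.summary.currentEVCModeKey
#                    for h in cluster.host}
example = {"host1": "intel-sandybridge", "host2": None}  # None: not applied
print(hosts_out_of_sync("intel-sandybridge", example))   # ['host2']
```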

André

Cuthbert01
Enthusiast

Thanks André!

I'll try to remember to ping back and let you know.

Cuthbert01
Enthusiast

Hey André,

Unfortunately, the reboot didn't help.

I'll keep digging.

Thanks again

sjesse
Leadership

You did the Spectre patches, but did you do the microcode updates too? They show up under the non-critical patches.

redluna31
Enthusiast

Hello, same for me; the solution was an ESXi patch. One of my ESXi hosts had a different patch level ...

As in the previous post, it was the Spectre patch.

Check the ESXi build, and the patch compliance with VUM.

Cedric RENAULD
Cuthbert01
Enthusiast

I ran the patches using vSphere Update Manager.

Attached baseline for both critical and non-critical patches.

Scanned for updates and both baselines were non-compliant.

Ran the critical patches and after a rescan, both critical and non-critical showed as compliant.

So I didn't apply the non-critical patches separately, since they already showed as compliant.

Performed same procedure on both hosts.

Both hosts have the same build number.
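A quick way to double-check this outside of VUM is to compare the banner each host prints from `vmware -vl` in the ESXi shell. A small sketch of that comparison (the build number below is only an example):

```python
# Sketch: compare the version/build each host reports via `vmware -vl`.
# Example banner line: "VMware ESXi 6.5.0 build-7967591"
import re

def parse_build(banner):
    """Extract (version, build) from a `vmware -vl` banner line."""
    m = re.search(r"ESXi (\S+) build-(\d+)", banner)
    if not m:
        raise ValueError("unrecognized banner: %r" % banner)
    return m.group(1), int(m.group(2))

host1 = parse_build("VMware ESXi 6.5.0 build-7967591")  # example build
host2 = parse_build("VMware ESXi 6.5.0 build-7967591")
print(host1 == host2)  # True -> same patch level, matching VUM compliance
```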

redluna31
Enthusiast

Hmmm.

Stupid question, but I can't help asking it:

Do you have any removable devices mapped, like a CD?

To check, look at which datastores are present in the VM's Summary view.

Cedric RENAULD
Cuthbert01
Enthusiast

No CD mapped.

I can't vMotion ANY VMs from Host2 to Host1.

a_p_
Leadership

Now that both hosts have the same BIOS version and settings, as well as the same microcode, enable Maintenance Mode for Host2, then open the cluster's EVC settings and select another EVC mode (don't worry about any incompatibility warnings, and do NOT save the settings at this point). Then set the EVC mode back to what it was before. If no compatibility issues show up, save the settings so that vCenter pushes them to the hosts.

André

Cuthbert01
Enthusiast

Thanks André.

I'll give that a try tomorrow night and ping back.

Cuthbert01
Enthusiast

Wasn't able to get to this yesterday.

Hope to do it in the next day or so.

One question:

In order to set EVC mode on the cluster, I will have to take Host1 (running vCenter) out of the cluster, correct?

As I recall, you can't set EVC mode if a VM is powered on.

a_p_
Leadership

No, no need to take a host out of the cluster.

You just can't set an EVC mode that doesn't provide at least the CPU features of the already powered-on VMs.
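Since EVC modes are ordered baselines, the rule can be framed as a rank comparison: a new mode is only rejected if a powered-on VM already uses a newer baseline than the one being applied. A small illustrative sketch (the mode keys are real EVC identifiers, but the ordering logic is simplified):

```python
# Illustrative ordering of Intel EVC baselines (older -> newer).
EVC_ORDER = ["intel-merom", "intel-penryn", "intel-nehalem",
             "intel-westmere", "intel-sandybridge", "intel-ivybridge"]

def can_apply(new_mode, powered_on_vm_modes):
    """A new EVC mode is valid only if no powered-on VM already runs on
    a newer feature baseline than the mode being applied."""
    rank = EVC_ORDER.index
    return all(rank(new_mode) >= rank(m) for m in powered_on_vm_modes)

print(can_apply("intel-sandybridge", ["intel-sandybridge"]))  # True
print(can_apply("intel-nehalem", ["intel-sandybridge"]))      # False
```

So re-selecting "Sandy Bridge" with Sandy Bridge VMs running is allowed; only dropping below the running VMs' baseline would be blocked.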

André

Cuthbert01
Enthusiast

The cluster is at Sandy Bridge now.

Are you saying select an EVC mode higher than that, DON'T save it, then set it back to Sandy Bridge?

a_p_
Leadership

That's what I did. The first (incompatible) selection seems to be required so that the second change (back to what it was) gets saved and pushed down to the hosts in the cluster.


André
