VMware Cloud Community
xingchan
Contributor

unable to disable per-vm EVC

Recently I upgraded a test VM's hardware version to 14 and noticed that EVC was enabled for this VM. When I shut down the VM and looked at the per-VM EVC setting, EVC was disabled. When I started the VM, EVC was enabled again.

EVC is disabled at both the host and cluster level. I also checked the vmx file: evcCompatibilityMode = "FALSE".

I'm running vCenter on build number and ESXi 6.7.0, build 10302608.

How can I disable EVC for this vm?
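
For reference, this is the kind of PowerCLI check I mean (a sketch only; 'TestVM' is a placeholder name, and the properties are the ones visible via the vSphere API):

# Inspect the effective EVC state of a VM with PowerCLI.
# Run while the VM is powered on, since MinRequiredEVCModeKey
# reflects the baseline the running VM is currently held to.
$vm = Get-VM -Name 'TestVM'

# Baseline applied to the running VM
$vm.ExtensionData.Runtime.MinRequiredEVCModeKey

# Cluster-level EVC mode, for comparison
($vm.VMHost.Parent).EVCMode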

26 Replies
Igubu
Contributor

Hi All,

It's been a while, but the good news is that the very friendly and expert engineering teams from VMware were looking at this in detail over the last month or two.

I'm sure they will officially respond and explain it better, and I believe there might be some documentation additions or changes, as well as possibly some solutions to assist with resolution.

What is shown in the UX is confusing and might change in the future to explain things better, but the behavior is correct. A more technical explanation is better left to VMware, but the short version is that the many different instruction sets and features between CPUs are very close and sometimes overlap, so some features that are visible on a baseline X may also be seen on a slightly older CPU generation, and that is what the UX shows.

The correct workaround for my use case is to either enable per-VM EVC to match the required level (the highest level supported on all hosts in the clusters where migrations are required), or to have EVC enabled on the cluster BEFORE moving VMs to it.

It is not recommended to change hardware versions to older/lower ones, and most other workarounds involving config files etc. are not ideal in my opinion.

I'm sure there will be a more official answer soon, as well as an indication of possible solutions to assist with changing/enabling per-VM EVC mode in an automated way, but it will take some time as there are many complexities around this. Watch this space!

For those who have capacity and need to resolve this: a reboot is required either way, so one solution is to create a temp/new cluster with hosts, enable the correct EVC mode on it, and shut down/migrate the affected VMs (the ones with a too-high EVC applied) to it. Once all VMs are moved off the old cluster, enable the same EVC level on the old cluster, and you can then migrate as needed with no more issues. Where this is not possible, a shutdown to enable per-VM EVC mode is the other solution, though at the moment that is a manual process per VM and also requires a shutdown/reboot. This is of course not ideal for a large estate where hundreds or more VMs are affected; in that case, let's hope some automation tools or processes are released from VMware to assist.
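
Until such tooling arrives, a rough automation sketch along these lines may help. This assumes the vSphere ApplyEvcModeVM_Task API (available for per-VM EVC since vSphere 6.7) behaves as documented; 'intel-cascadelake' and 'OldCluster' are placeholder values for your environment:

# Sketch: bulk-enable per-VM EVC on powered-off VMs in a cluster.
$targetKey = 'intel-cascadelake'

# Look up the feature masks for the target EVC baseline
$si      = Get-View ServiceInstance
$evcMode = $si.Capability.SupportedEVCMode | Where-Object { $_.Key -eq $targetKey }

# Per-VM EVC can only be changed while the VM is powered off
Get-Cluster -Name 'OldCluster' | Get-VM |
    Where-Object { $_.PowerState -eq 'PoweredOff' } |
    ForEach-Object {
        # Apply the baseline's feature masks as the VM's per-VM EVC mode
        $_.ExtensionData.ApplyEvcModeVM_Task($evcMode.FeatureMask, $true) | Out-Null
    }

Test on a single non-production VM first; the VMs still need their shutdown/power-on either way.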

nianliu
Contributor

Experiencing the exact same issue and still no resolution. Do we have any new update on this?

Igubu
Contributor

Hi,

I have not received confirmation of the documentation update yet. I can explain, though, that the behavior is "expected" in some ways, because of the complexity of EVC, especially when CPU generations are fairly close to each other.

For me, I wanted to confirm it's not an issue/bug that will be fixed before I migrated all SQL instances to it (which is why we added 2x dedicated hosts to a cluster, for all SQL). After this, it's confirmed for me that there isn't really a way to "fix" this. If you are planning to migrate VMs to/from the new hosts and older ones, and if it can be a separate cluster, enable the EVC level on it to match the older hosts; that old saying, "enable BEFORE moving". The odd one or two on the older hosts that might be detected as a generation newer, and which will refuse to move because you have forced an older EVC on the new cluster, will have to be shut down and moved the first time.

I'll ping the engineer again for an update and comment here if he replies.

BigMike23
Enthusiast

Any news on this solution?

Igubu
Contributor

Hi, 

I reached out to engineering tonight and will post any updates received. The explanation, after numerous tests by them, is that it's not a "fault" as such; it comes from the many complex things used and supplied by, for example, Intel, which broadly determine the different EVC levels. To me, even if it's not a solution as such, it's at least an explanation and confirmation that there isn't something inherently wrong; you just need to know about it.

As mentioned before, there is a workaround that solves the main issue here: how to migrate the VMs between older and newer hosts. The issue for me is that older-generation hosts incorrectly detect an EVC compatibility mode higher than what the physical CPUs support and apply it on boot, even though EVC is not enabled on the cluster or the individual VM. The fix is easy: force the correct EVC on the cluster or on the VM itself, but both require a reboot.
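
The cluster-level half of that fix can be done in one line of PowerCLI (a sketch; 'NewCluster' and the EVC key are placeholders for your environment):

# Force a specific EVC baseline at the cluster level.
# Running VMs keep their current baseline until a full power cycle.
Get-Cluster -Name 'NewCluster' | Set-Cluster -EVCMode 'intel-cascadelake' -Confirm:$false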

I'll post if there is any update tomorrow.


GeoPerkins
Enthusiast

I am finding a similar problem and was very surprised at the result.

1. Used a live migration from legacy hardware (where the cluster has EVC enabled, cascadelake); the destination of the live migration is a new cluster with EVC icelake.

2. As expected, the VM arrives with cascadelake.

3. Powered off the VM.

4. Powered on the VM, expecting it to adopt the cluster's icelake EVC mode. Surprise! It does not.

This documentation https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-A1C095EF-1B0F-4C1... says "For powered on virtual machines with per-VM EVC disabled, the VMware EVC pane shows the EVC mode that the virtual machine inherits from its parent EVC cluster or host".

A. No EVC mode is visible in the GUI. To what "EVC pane" is the above documentation referring?!

B. The VM in my test did not inherit the EVC mode.


Here is the PowerCLI command I used to collect data before, while powered off, and after power on:

Get-VM |
    Select-Object -Property Name, PowerState,
        @{Name='MinRequiredEVCModeKey';Expression={$_.ExtensionData.Runtime.MinRequiredEVCModeKey}},
        @{Name='Cluster';Expression={$_.VMHost.Parent}},
        @{Name='ClusterEVCMode';Expression={$_.VMHost.Parent.EVCMode}} |
    Format-Table


GeoPerkins
Enthusiast

I believe I have stumbled upon the solution.

The VM in question still had virtual hardware version 13 (ESXi 6.5 compatible). I scheduled the compatibility upgrade and restarted; now the VM is at version 19 (ESXi 7.0 U2 compatible) and has adopted the icelake EVC mode.

In addition, there is now an "EVC pane" in the vCenter GUI on the Configure tab.

PowerCLI also agrees, showing the VM with the attribute MinRequiredEVCModeKey=intel-icelake.
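
If anyone wants to script the same upgrade, something like this should work (a sketch; 'TestVM' and the target version are placeholders, and the parameter name may vary between PowerCLI releases):

# Upgrade a VM's virtual hardware version with PowerCLI.
# The VM must be powered off for an immediate upgrade; take a
# backup/snapshot first, since the upgrade is not reversible.
Get-VM -Name 'TestVM' | Set-VM -HardwareVersion 'vmx-19' -Confirm:$false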
