Hey everyone, hoping to get an expert to help out with my current situation. I have a VM that works flawlessly in our West Coast DC, running on ESXi v6.5 on a Dell R620. I need to migrate it to the East Coast for a DC migration project.
The migration itself is easy enough, but the VM won't boot on the new ESXi v6.5 host, a Dell R640. I tried putting it on an R630 in the destination DC and it boots fine there... it just won't work on an R640, which is where I have to put it (customer requirement).
I'm not sure if it's a CPU architecture incompatibility or simply the storage controller. The VM always hits a kernel panic, and I can't seem to boot into single-user mode. Has anyone seen this issue before?
2020-12-12T08:19:51.236Z| vcpu-0| I125: Vix: [298360 vmxCommands.c:7212]: VMAutomation_HandleCLIHLTEvent. Do nothing.
2020-12-12T08:19:51.236Z| vcpu-0| I125: MsgHint: msg.monitorevent.halt
2020-12-12T08:19:51.236Z| vcpu-0| I125+ The CPU has been disabled by the guest operating system. Power off or reset the virtual machine.
There are two possibilities:
1. EVC is enabled on the source cluster but disabled on the destination cluster, or vice versa.
2. You may need to upgrade the guest kernel version, as described in this KB.
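One way to see whether the two CPU generations actually expose different feature sets is to capture the `flags` line from `/proc/cpuinfo` inside a Linux guest (or live CD) booted on each host, then diff the two lists. A minimal sketch; the file names and flag lists below are illustrative stand-ins for the real captures, not actual R630/R640 data:

```shell
# In practice you'd capture each host's list with something like:
#   grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort > r630_flags.txt
# Inlined sample data here so the sketch is self-contained.
printf '%s\n' fpu vme sse sse2 avx | sort > /tmp/r630_flags.txt
printf '%s\n' fpu vme sse sse2 avx avx2 mpx | sort > /tmp/r640_flags.txt

# comm -3 shows flags unique to either host (column 1: R630-only,
# column 2: R640-only). A newer CPU usually only adds flags, but any
# difference is a candidate for tripping up an old guest kernel.
comm -3 /tmp/r630_flags.txt /tmp/r640_flags.txt
```

With the sample data above, the diff shows the R640-only flags (avx2, mpx) in the second column.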
Thanks for your reply!
Unfortunately the destination ESXi host is standalone due to a licensing requirement for our database, so a cluster-level EVC setting isn't an option. I can do a cold migration, but the VM won't boot afterward.
If I move the VM back to a Dell R630 ESXi host, it boots up just fine, so I've been able to confirm the issue is caused by CPU incompatibility. Are you aware of any process to deal with such a case?
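In case the kernel-version theory from the reply above applies here: a quick way to check whether a guest kernel predates some required minimum is a version-sort comparison. A sketch only; both version strings below are hard-coded placeholders (on the real guest you'd use `cur="$(uname -r)"`, and the minimum would come from whatever the KB specifies):

```shell
# Hypothetical minimum kernel version, for illustration only.
min="2.6.32-642"
# Sample running-kernel version; replace with "$(uname -r)" on the guest.
cur="2.6.32-431.el6.x86_64"

# sort -V orders version strings; if the oldest of the pair is the
# running kernel (and the two differ), it's below the minimum.
oldest="$(printf '%s\n' "$min" "$cur" | sort -V | head -n1)"
if [ "$oldest" = "$cur" ] && [ "$cur" != "$min" ]; then
    echo "kernel $cur is older than $min -- upgrade before migrating"
else
    echo "kernel $cur meets the minimum"
fi
```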