VMware Cloud Community
Noman0118
Contributor

URGENT: VMX connection handshake failed for mks???

Hello,

When I powered down one of my production VMs, I got the message below. At the prompt I selected "Yes," which allowed the shutdown task to complete, and I was able to power the VM on again shortly after. I am confused about what the message means. Since this is a production environment, I need to make sure this is not a red flag that could lead to a complete outage. I am literally a one-man team handling this, and because of budget limitations we cannot afford to renew our support agreement with VMware. We are on an ESX 3.0 / VC 2.0 platform. Any help on this issue will be greatly appreciated.

"Error connecting: VMX connection handshake failed for mks of /vmfs/volumes/UUID/TESTVM/TESTVM.VMX Do you want to try again? Yes/No?"

11 Replies
kjb007
Immortal

Since you were able to power on the VM shortly after, I am thinking that ESX could not get a lock on the file, and was able to when you tried again. You may want to check your vmkernel and vmkwarning files to be sure. How many VMs are you running on that particular VMFS volume? If you have a lot, then you may run into this again.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
weinstein5
Immortal

mks, as I recall, stands for Mouse Keyboard Screen - basically the component that presents the VM console screen. The error means the VM console session was interrupted; attempting the reconnect allowed it to close.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
kjb007
Immortal

Good to know.

Thanks

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
Noman0118
Contributor

Is there something specific I should look for in the vmkernel/vmkwarning files? As of now we have one iSCSI LUN that is being served to two ESX hosts, and the LUN houses 34 VMs. The ESX hosts were built with default settings (from the VMware ISO install).

Noman0118
Contributor

Thanks for shedding light on the issue. I didn't quite understand the second half of your input; can you please clarify? Thanks

kjb007
Immortal

Look for SCSI lock / reservation problems or warnings. Other than that, try not to load more than 10-20 VMs on any one particular VMFS partition.
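
The log check suggested above can be sketched as a quick grep (the log locations /var/log/vmkernel and /var/log/vmkwarning are the standard ESX 3.x paths; the sample entries below are illustrative stand-ins, not captured from a real host):

```shell
# Illustrative sample of vmkernel log entries (message wording is an
# assumption; real ESX 3.x output varies by build)
cat > /tmp/vmkernel.sample <<'EOF'
Jan 10 11:02:31 esx1 vmkernel: 1:02:33:10.123 cpu2:1034)SCSI: vm 1092: Sync CR at 64
Jan 10 11:02:35 esx1 vmkernel: 1:02:33:14.456 cpu2:1034)WARNING: SCSI: Failing I/O due to too many reservation conflicts
EOF

# On a real host, point this at /var/log/vmkernel and /var/log/vmkwarning
grep -iE "reservation|lock" /tmp/vmkernel.sample
```

Frequent reservation-conflict warnings around the time of the failed power operation would support the file-lock theory.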

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
Noman0118
Contributor

OK, I will dig through the logs for it. Am I hitting some kind of limit if I surpass 20 VMs on my iSCSI LUN?

weinstein5
Immortal

It is not a hard limit but a performance limit - with 20+ VMs hitting the same LUN, you could be saturating the storage.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
esantos
Contributor

I had the same problem with the VMX connection... is there a workaround to get this one solved?

sflanders
Commander

I just experienced this same problem on 3.5 Update 4. I looked for a hung process, restarted the services, rebooted the ESX server, and cloned the VM (both automatically and manually), but was unable to get the VM to power on. After diffing the VMX file against a known-good VMX file that did power on, I learned that the bad VM was configured as RHEL5x32 while the good VM was configured as RHEL5x64. After switching the VM to RHEL5x64, it booted successfully. I hope someone finds this helpful.
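
The mismatch described above would show up as a one-line diff of the guestOS key (a minimal sketch; the file names and guestOS values below are assumed stand-ins, not taken from the thread, and everything else in the VMX files is omitted):

```shell
# Minimal stand-ins for the two VMX files (fragments only; the guestOS
# identifiers are assumptions for illustration)
cat > /tmp/bad.vmx <<'EOF'
guestOS = "rhel5"
EOF
cat > /tmp/good.vmx <<'EOF'
guestOS = "rhel5-64"
EOF

# diff exits non-zero when the files differ, so tolerate that in scripts
diff /tmp/bad.vmx /tmp/good.vmx || true
```

Comparing a failing VMX against one from a VM that powers on cleanly is a cheap first diagnostic before resorting to cloning or rebooting the host.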

Hope this helps! === If you find this information useful, please award points for "correct" or "helpful". ===
udaykumar-blr
Contributor
Contributor

Sometimes, to remediate this issue, slightly increase the allocated processor and memory values in the Resources options of the VM. This solves the issue in many cases...

--Uday
