VMware Cloud Community
liquidsparky
Contributor

Error 17332

I am having a serious issue with my VM.

I created a new VM and installed CentOS 7 on it using the CentOS 4/5/6 guest OS template. I start an SSH file transfer task, and after a random period of time, usually between one and seven hours, the VM crashes with error 17332.

I followed the suggestions in this KB article to fix it: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=10213

I changed the NUMA setting to 0, even though I did not see a NUMA file at the path the article refers to.

I also did the following using vCLI (rough commands are sketched after this list):

- unregistered the VM

- modified the VM's .vmx file to add this line at the bottom:

memoverhead=60

- registered the VM
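
To be concrete, this is roughly the sequence I mean. I actually drove it through vCLI, but the equivalent from the ESXi shell with vim-cmd looks like the following; the VM ID and datastore path are placeholders for my setup:

vim-cmd vmsvc/getallvms
# note the numeric VM ID, then unregister the VM (its files stay on the datastore)
vim-cmd vmsvc/unregister 42
# append the overhead line to the .vmx (path is a placeholder)
echo 'memoverhead=60' >> /vmfs/volumes/datastore1/myvm/myvm.vmx
# register the VM again from its .vmx path
vim-cmd solo/registervm /vmfs/volumes/datastore1/myvm/myvm.vmx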

However, in the vSphere Client, the memory overhead setting does not show any value, and I continue to get this crash -- even more frequently than before, it seems.

I tried reloading the VM through vCLI, and the memoverhead value is still not being picked up. The line is still in the .vmx file, but ESXi is not reading it.
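
For reference, the reload I tried is the equivalent of this vim-cmd call from the ESXi shell; the VM ID is a placeholder:

# make ESXi re-read the VM's .vmx configuration
vim-cmd vmsvc/reload 42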

Please advise. I tried contacting my hosting provider, but they do not have a support contract with VMware for my machine, so I'm on my own, which is really not nice.

The VM is running CentOS 7 and is configured with 2 vCPUs of 10 cores each, 20 cores total, and 16 GB of RAM.

I would like to know:

1) Why is VMware ESXi not reading the "memoverhead=60" line from the .vmx file?

2) Why is memoverhead set to 60 and not some other value?

3) Why is this problem from back in 2007, which was supposedly fixed, resurfacing in 2014 with ESXi 5.5 Update 1, seven years later?

4) Does this have anything to do with the fact that VMware ESXi did not come with a specific guest OS profile for CentOS 7? This is the only VM that has this problem, and it seems to happen only when I give it an extended ssh/scp task.

3 Replies
Jayden56
Enthusiast

liquidsparky
Contributor

Why is ESXi not reading this line from my .vmx file?

memoverhead=60

liquidsparky
Contributor

I found out the cause of the problem.

I had misconfigured the memory: each VM was set in ESXi to use the full 16 GB of its slot of RAM, without accounting for the other memory requirements on the server. Bringing those values down to 14 GB and 13 GB seems to have solved the issue while leaving the system a little extra RAM as leeway. The VM was crashing at the kernel level because once it reached the maximum amount of available memory and asked for more, which it expected to have, the allocation was denied; with swapping problems on top of that, it had effectively run out of memory completely and could do nothing about it. I diagnosed this by looking at logs both in the VM and on the ESXi server.
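
In case it helps anyone else, these are the kinds of log checks I mean; the log paths are the usual defaults, and the datastore path is a placeholder for my setup:

# inside the CentOS 7 guest: look for the kernel OOM killer
grep -i "out of memory" /var/log/messages

# on the ESXi host: check the VM's own log and the vmkernel log
grep -i "mem" /vmfs/volumes/datastore1/myvm/vmware.log
tail -n 100 /var/log/vmkernel.log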

I also removed the memoverhead value from the .vmx file and set the NUMA monitoring value back to 1. I think the memoverhead value was being read, but 60 was recognized as far too small, and ESXi appears to have quietly replaced it with a properly calculated value. In any case, specifying it in the .vmx file was unnecessary.
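
Concretely, the cleanup was just the reverse of the earlier edit; roughly the following, with the path and VM ID as placeholders (the NUMA setting I changed back through the client, so no command for that):

# drop the memoverhead line from the .vmx (path is a placeholder)
sed -i '/^memoverhead/d' /vmfs/volumes/datastore1/myvm/myvm.vmx

# have ESXi re-read the configuration
vim-cmd vmsvc/reload 42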
