VMware Cloud Community
kraughl1
Enthusiast

ESXi Error Panic: Unable to allocate memory

We have seen the following error:

Panic: Unable to allocate memory

Followed by what appears to be a host reboot. However, I did not see any VMs actually get restarted via HA; all the VMs appeared to stay running on the host. The error message is from hostd.log. The host disconnected from vCenter for 16 minutes, yet the VMs appeared to stay running.

Has anyone else come across a similar issue?

1 Solution

Accepted Solutions
pwilk
Hot Shot

Hi kraughl1,

If this error appeared on ESXi, you could try following the steps described below:

1. Connect to the affected ESXi host with an SSH session.

2. Run this command to store the group ID of the vpxa process in a variable:

grpID=$(vsish -e set /sched/groupPathNameToID host vim vmvisor vpxa | cut -d' ' -f 1)

3. Run this command to increase the max memory allocation of the vpxa process to 400 MB (the default is 304 MB):

vsish -e set /sched/groups/$grpID/memAllocationInMB max=400 minLimit=unlimited

4. Verify that the max memory allocation of the vpxa process is changed:

vsish -e get /sched/groups/$grpID/memAllocationInMB

For example:

vsish -e get /sched/groups/$grpID/memAllocationInMB
sched-allocation {
   min:0
   max:400
   shares:0
   minLimit:-1
   units:units: 3 -> mb
}
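As a side note, the cut -d' ' -f 1 in step 2 simply keeps the first space-separated field of the vsish reply. On a sample line shaped the way that step implies (the numeric group ID first, then the group path; the sample string below is an assumption, not output captured from a real host), it behaves like this:

```shell
# Sample vsish reply line (assumed shape: numeric group ID followed by the
# group path -- not captured from a real ESXi host)
sample_reply="1234 host/vim/vmvisor/vpxa"

# Keep only the first space-separated field, as step 2 does
grpID=$(printf '%s\n' "$sample_reply" | cut -d' ' -f 1)
echo "$grpID"
```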

Let me know if that helps. By the way, what version of ESXi are you on?

Cheers, Paul Wilk


4 Replies
pwilk
Hot Shot

Your vCenter Server has run out of available RAM, or it was a victim of a bug described here: VMware Knowledge Base.

How many VMs, ESXi hosts, etc. are present in your environment? Maybe you should consider increasing the RAM on your vCenter Server according to the following VMware recommendations: VMware Knowledge Base

Let me know if that helps.

Cheers, Paul Wilk
kraughl1
Enthusiast

Thank you for the help. The error happened on the ESXi host, not on vCenter. Would the same apply to ESXi?

SureshKumarMuth
Commander

Just to reconfirm: was the host rebooted, or did it just disconnect from vCenter for 15 minutes? If the host was not rebooted and only disconnected from vCenter while showing memory allocation errors, then it is hostd that is not getting enough memory to execute its tasks.

Can you post the hostd log file, please? Also, let us know the ESXi version and build number.
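For the version and build, running vmware -vl on the host prints both. If you want just the build number for the reply, it can be pulled out of that output like this (a sketch; the sample string below assumes the usual "build-NNNNNNN" format rather than being captured from a real host):

```shell
# Assumed shape of `vmware -v` output (sample string, not from a real host)
sample="VMware ESXi 6.7.0 build-8169922"

# Strip everything up to and including "build-" to leave just the number
build=${sample##*build-}
echo "$build"
```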

Regards,
Suresh
https://vconnectit.wordpress.com/