VMware Cloud Community
MariusRoma
Expert

Insufficient resources to satisfy configured failover level for vSphere HA

I have a vSphere 5.0 cluster consisting of 2 ESXi 5 hosts.

When I attempt to start a VM I get an error saying that there are "Insufficient resources to satisfy configured failover level for vSphere HA".

If I power down another VM the problem disappears and I can power on my VM.

If I attempt to power on the VM I previously powered off, I get the error again.

It looks obvious that there is a lack of resources or that some parameter is misconfigured, but even after reading forums and manuals I am unable to locate the critical resource or the misconfigured parameter.

Based on my experience, neither the ESXi hosts nor the vCenter Server are overloaded.

I have another similar cluster hosting a larger number of VMs with no similar problem.

What performance counter and what parameter should I check to identify the bottleneck?

Regards

marius

1 Solution

Accepted Solutions
jjkrueger
VMware Employee

To check the reservations for VMs, I would select the Cluster in your vCenter inventory, then select the "Resource Allocation" tab. That will show all the cluster's child objects (VMs, Resource Pools, vApps). There you can look at the resource settings for CPU and Memory. You'll want to look at the "Reservations" column.

To find the slot size and available slots, look at the "Summary" tab of your cluster. There will be a tile labelled "vSphere HA". In that tile, there will be a link for "Advanced Runtime Info" which will pop up a new window with the slot information.

When HA's Admission Control policy is set to "Number of Host failures the cluster will tolerate", HA has a very pessimistic view of your resources, as it has to be able to handle all possibilities.
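To illustrate how pessimistic this policy is, here is a minimal sketch (hypothetical numbers, simplified from the real HA algorithm): with one tolerated host failure in a 2-host cluster, HA assumes the worst case and counts only the slots that would survive losing the larger host.

```python
# Sketch of the slot-based admission check under "Number of host failures
# the cluster will tolerate". Simplified: each powered-on VM occupies one
# slot, and HA discards the largest host's slots to cover the worst-case
# failure. All numbers here are hypothetical.

def admission_check(slots_per_host, powered_on_vms, failures_tolerated=1):
    """Return True if one more VM can be powered on."""
    if failures_tolerated:
        # Drop the largest host(s) from the available capacity (worst case).
        remaining = sorted(slots_per_host)[:-failures_tolerated]
    else:
        remaining = slots_per_host
    available = sum(remaining)
    return powered_on_vms + 1 <= available

# Two hosts providing 3 slots each; after a failure only 3 slots remain:
print(admission_check([3, 3], powered_on_vms=5))  # False: power-on refused
print(admission_check([3, 3], powered_on_vms=2))  # True
```

This is why a cluster can refuse a power-on even though neither host looks busy: the check is done against worst-case slot counts, not actual utilization.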

Another option would be to change the Admission Control policy in the HA Cluster settings to "Percentage of resources reserved for failover". This will reserve a chunk of resources to be used by HA in the event of a failover, rather than trying to calculate out the size of individual VMs. With a 2-node cluster, I would think it a relatively safe bet to set these values to 50%, as your worst case HA scenario would be losing one host out of 2.
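The percentage-based check can be sketched like this (hypothetical numbers; a simplification of the real algorithm): a power-on is admitted as long as total reservations stay within the non-reserved share of cluster capacity.

```python
# Sketch of the "Percentage of cluster resources reserved as failover
# capacity" admission check, for one resource (memory, in MB).
# All numbers are hypothetical.

def percentage_check(total_mb, reserved_mb, new_vm_res_mb, failover_pct=50):
    """Admit the power-on if reservations fit in the non-failover share."""
    usable = total_mb * (100 - failover_pct) / 100.0
    return reserved_mb + new_vm_res_mb <= usable

# 2 hosts x 32768 MB, 50% kept for failover, 20000 MB already reserved:
print(percentage_check(65536, 20000, 8192))   # True:  28192 <= 32768
print(percentage_check(65536, 30000, 8192))   # False: 38192 >  32768
```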


5 Replies
a_p_
Leadership

The reason for this is most likely a reservation on one or more of the VMs, which results in a large HA slot size. If HA Admission Control is configured for "Host failures tolerated", the number of available slots is what determines whether VMs can be powered on.
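A minimal sketch of the slot-size math (hypothetical numbers; defaults follow vSphere 5.0, where the CPU slot falls back to 32 MHz and the memory slot to overhead when no reservations exist): the slot size is taken from the largest reservation across powered-on VMs, so a single VM with a big reservation inflates the slot for everyone.

```python
# Sketch of vSphere HA slot-size derivation under "Host failures tolerated".
# Numbers are hypothetical; the 32 MHz CPU default matches vSphere 5.0.

def slot_size(vms, cpu_default_mhz=32):
    """Slot size = largest CPU / memory reservation among powered-on VMs."""
    cpu = max(max(vm["cpu_res_mhz"] for vm in vms), cpu_default_mhz)
    mem = max(vm["mem_res_mb"] + vm["overhead_mb"] for vm in vms)
    return cpu, mem

def slots_per_host(host_cpu_mhz, host_mem_mb, cpu_slot, mem_slot):
    # A host provides the minimum of its CPU-based and memory-based counts.
    return min(host_cpu_mhz // cpu_slot, host_mem_mb // mem_slot)

# One VM with no reservation, one with an 8192 MB memory reservation:
vms = [
    {"cpu_res_mhz": 0, "mem_res_mb": 0,    "overhead_mb": 100},
    {"cpu_res_mhz": 0, "mem_res_mb": 8192, "overhead_mb": 100},
]
cpu_slot, mem_slot = slot_size(vms)
print(cpu_slot, mem_slot)                                # 32 8292
print(slots_per_host(20000, 32768, cpu_slot, mem_slot))  # 3
```

One 8 GB reservation drops a 32 GB host from hundreds of slots to just 3, which is exactly the symptom described: powering off one VM frees a slot, powering it back on fails again.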

André

MariusRoma
Expert

Thank you for your message, but where can I check whether a reservation is set for a given VM, and where can I see the available slots?

Regards

marius

aravinds3107
Virtuoso

To check the reservation for a VM:

Select the VM -> Edit Settings -> click the Resources tab

[Screenshot: VM Edit Settings dialog, Resources tab]

To learn more about the slot size calculation, I suggest reading the HA Deepdive.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful |Blog: http://aravindsivaraman.com/ | Twitter : ss_aravind

MGanu
Contributor

Hi,

I changed the HA failover percentage to 25% and was then able to power on the vShield Manager appliance on vSphere 5 Update 1.

Thanks,

Madhav
