dandanfireman
Contributor

vCenter HA/DRS insufficient Memory Error

I am a bit new to this, so please be gentle. I have set up a test environment for DRS/HA: a vCenter Server, 2 x ESXi hosts, iSCSI shared storage, and 1 Linux VM. I set up the cluster for DRS first and was able to migrate the VM from one host to another without issue. Then I enabled HA, and now I get errors when attempting to power on the VM. Keep in mind the Linux VM has only 256 MB of RAM and default reservation settings. The error is:

DRS cannot find a host to power on or migrate the virtual machine.

The host does not have sufficient memory resources to satisfy the reservation.

What am I doing wrong here?

6 Replies
admin
Immortal

What numbers do you see for the total and available memory/CPU for the cluster in the Resource Allocation tab of the cluster? Can you try disabling HA admission control and see if the numbers change? HA reserves capacity for failover, but it is strange that you can't power on even one VM with HA admission control enabled.
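Conceptually, the admission-control check works roughly like this. A minimal Python sketch; the simple subtraction model and all the numbers below are illustrative assumptions, not the exact vSphere algorithm:

```python
def can_power_on(host_available_mb, vm_reservation_mb, vm_overhead_mb,
                 failover_reserved_mb=0):
    """Return True if the host has enough unreserved memory for the VM.

    With HA admission control on, failover_reserved_mb is set aside so a
    host failure can be tolerated; that capacity is unavailable to new VMs.
    """
    needed = vm_reservation_mb + vm_overhead_mb
    usable = host_available_mb - failover_reserved_mb
    return usable >= needed

# Values in the ballpark of this thread: ~150 MB usable per host and a
# 256 MB VM with no explicit reservation but ~100 MB of overhead memory.
print(can_power_on(host_available_mb=150, vm_reservation_mb=0,
                   vm_overhead_mb=100))                            # True
print(can_power_on(host_available_mb=150, vm_reservation_mb=0,
                   vm_overhead_mb=100, failover_reserved_mb=75))   # False
```

The point of the sketch: even a VM with a zero reservation still needs its overhead memory, and the failover reserve can push a marginal host below that threshold.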

Elisha

wallakyl
Enthusiast

What kind of hardware have you got?

dandanfireman
Contributor

The ESXi hosts are virtual machines running inside VMware Workstation 7, each assigned 2 GB of RAM. The Resource Allocation tab for the cluster only shows 200-300 MB. Any reason this would be so?

admin
Immortal

Did you try disabling HA admission control?

dandanfireman
Contributor

Still a no-go; I get the same error as with admission control enabled. I think one of the earlier responders was on to something with the low amount of memory showing in the cluster's Resource Allocation tab, but I am not sure how that number is derived or how often it is recalculated.

admin
Immortal

Have you tried powering on the VM with HA disabled completely? What about moving a host out of the cluster and powering on the VM on the standalone host?

The Resource Allocation tab of the cluster shows the aggregate resources across all cluster hosts that are available for VMs, so it seems the vmkernel and management agents are consuming more than expected, leaving less for VMs. If you move the hosts out of the cluster and go to each host's Resource Allocation tab, you'll see the resources available for VMs on that host. It appears to be about 100-150 MB per host, which is probably not enough to start even one VM once its overhead memory requirement is included.
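To make the aggregation concrete, here is a rough sketch of how the cluster number could come out at 200-300 MB. The vmkernel/agent overhead figures are assumptions picked to match what this thread reports, not measured values:

```python
def host_available_for_vms(physical_mb, vmkernel_mb, agents_mb):
    # Memory left for VMs after the hypervisor and management agents.
    return physical_mb - vmkernel_mb - agents_mb

def cluster_available_for_vms(hosts):
    # The cluster Resource Allocation tab aggregates per-host availability.
    return sum(host_available_for_vms(*h) for h in hosts)

# Two nested ESXi hosts with 2048 MB each; overhead values are
# illustrative, chosen to land near the ~100-150 MB per host seen here.
hosts = [(2048, 1600, 300), (2048, 1650, 300)]
print(cluster_available_for_vms(hosts))  # 246
```

On physical hosts with more RAM the fixed vmkernel overhead is a small fraction; on 2 GB nested hosts it eats most of the memory, which is why the cluster total looks so low.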
