GiacomoNasi_IMA
Contributor

vSphere HA failover fails due to insufficient resources, but they are actually available

Hi everyone :)

I installed ESXi 6.7 on a server and I nested 4 other ESXi hosts on top of it.

The hosts are:

  • Host1: with a Windows Server VM on it
  • Host2: used to handle failover
  • Host3: used to handle failover
  • vCSA: the host in which I installed the vCenter Server Appliance

The runtime performance of the hosts is shown in the screenshots below:

[Screenshots: runtime resource usage of Host1, Host2, Host3, and vCSA]

The physical host has 4 CPUs x Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz, and the resource consumption of all the VMs running on it is shown in the screenshot below:

[Screenshot: resource usage of the VMs running on the physical host]

vSphere HA is enabled in the vCSA with Admission Control DISABLED. I didn't set up anything in particular for the shared storage: the hosts are on the same machine, using the same ESXi datastore, so I guess that's enough.
When a failover is triggered for Host1, an error appears on the page of its VM, saying that there are not enough resources to handle it:
"Insufficient resources for vSphere HA to start the VM. Reason: {reason.@enum.fdm.placementFault}"
The same thing happens if I turn Admission Control on and set Host3 as a dedicated failover host. That's very weird, because Host3 is empty, as you can see from the pictures.

In both cases it doesn't make sense, because resources are actually available.

I'd like to know how to fix this. I'll wait for your answers, thank you!! :)


4 Replies
daphnissov
Immortal

vSphere HA is enabled in the vCSA with Admission Control DISABLED. I didn't set up anything in particular for the shared storage: the hosts are on the same machine, using the same ESXi datastore, so I guess that's enough.

No, that's not enough. Each ESXi host, even if nested, has its own local storage. The fact that behind the scenes they are running on top of one physical server that is also running ESXi makes no difference to HA. HA will not work in this case, so admission control, and the error you receive when it can't fail over anything, don't really matter.
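A quick way to confirm this is to list the mounted filesystems on each nested host from an SSH session (a sketch; on each nested host it will only show that host's own local VMFS volume, not anything shared):

    # List all datastores/filesystems this ESXi host has mounted.
    # On each nested host you should only see its own local VMFS volume
    # (plus the small system volumes), never a datastore common to all hosts.
    esxcli storage filesystem list

If each host reports a different local VMFS volume, HA has no common datastore from which to restart the VM, which matches the placement error above.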

GiacomoNasi_IMA
Contributor

Okay, thank you very much!

So I need to share a .vmdk file, right? When I try to create a shared hard disk, I need to set it up as "Thick provisioned, eagerly zeroed", but when I try to load it in another VM, it tells me that it's actually lazily zeroed. Do you know any solution for this?

Thanks :)
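(As a side note on the lazily-zeroed message: an existing lazy-zeroed thick VMDK can be converted to eager-zeroed thick with vmkfstools from the ESXi shell while the VM is powered off; the path below is just an example. As the next reply explains, though, sharing a VMDK is not what gets HA working.)

    # Convert an existing lazy-zeroed thick disk to eager-zeroed thick.
    # Run in the ESXi shell with the owning VM powered off; the path is an example.
    vmkfstools --eagerzero /vmfs/volumes/datastore1/MyVM/MyVM.vmdk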

daphnissov
Immortal

No, you don't just share a VMDK; you would have to present a common datastore to all hosts in the cluster if you want HA to work. This involves some form of external storage, not sharing of a VMDK.
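In a nested lab the simplest form of that is usually a small NFS export (from a NAS, a storage VM, or any NFS server you have) mounted on every nested host. A minimal sketch, assuming a hypothetical NFS server nfs01.lab.local exporting /export/shared-ds; run it on each nested host, or do the equivalent in the vSphere Client:

    # Mount the same NFS export on every nested ESXi host so the cluster
    # has one common datastore (server, export path and name are examples).
    esxcli storage nfs add --host=nfs01.lab.local --share=/export/shared-ds --volume-name=SharedNFS

    # Verify it is mounted.
    esxcli storage nfs list

Once every host in the cluster sees the same datastore and the protected VMs' files live on it, HA has a common place from which to restart them.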

GiacomoNasi_IMA
Contributor

Okay, and do you also know how I can tell my nested hosts to use the same VMFS datastore?
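For a shared VMFS specifically, the nested hosts all need to see the same block device, for example an iSCSI LUN served from outside the nested hosts (a storage VM, a NAS, or the physical box). A rough sketch, assuming a hypothetical iSCSI target at 192.168.10.50 and the software iSCSI adapter showing up as vmhba65 (the adapter name differs per host; check it with esxcli iscsi adapter list). Run on each nested host:

    # Enable the software iSCSI initiator on this nested host.
    esxcli iscsi software set --enabled=true

    # Point it at the iSCSI target (address and adapter name are examples).
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=192.168.10.50

    # Rescan so the shared LUN becomes visible.
    esxcli storage core adapter rescan --all

Then create the VMFS datastore on that LUN from one host (the New Datastore wizard in the vSphere Client is the easiest way) and rescan storage on the others; the same VMFS volume will appear on every host that can see the LUN.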
