A VM being registered on a host does not necessarily mean its (only) data component resides on that host's storage; please validate that this is the case here.
Check and share the 'reason' messages on the far right of both failed power-ons; they might make more sense of this.
That far right column is not 'Reason', but a 'Result' column. It's just a power-on failed message.
I can 100% confirm that the PFTT=0 and SFTT=0 component of this VM resides on this very ESXi host at the P-Site, because this is a 2-node vSAN stretched cluster.
In other words, this is the only ESXi host on the P-Site in this cluster.
As I said, even though the Witness-Site and S-Site ESXi hosts are both powered off, I expected a PFTT=0 and SFTT=0 VM registered on, and with 'affinity' to, the P-Site ESXi host to be able to boot on its own.
I hope I'm understanding this correctly.
If it is configured correctly this should work, as by default (unless changed) the vswp Object's policy has 'Force Provisioning=1', i.e. it will create an FTT=0 vswp if there is only one Fault Domain available for component placement. I tested this just now and it fails exactly as in your case (vswp creation failure) if the Preferred Fault Domain is incorrectly configured: just because a site is named 'Preferred' doesn't automatically make it the Preferred site for placement; the site called 'Secondary' can be selected as the cluster's Preferred site. If this is set to the site that is unavailable, it will fail to create the vswp. Check this via Cluster > Configure > Fault Domains & Stretched Cluster > Fault Domains > select the remaining site and click the Star button above.
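If you want to confirm this from the CLI as well as the UI, something like the following should show (and, if needed, change) the Preferred Fault Domain. Command names are from memory, so verify them against your build; the Fault Domain name is an example:

```
# Show the currently configured Preferred Fault Domain
# (typically run on the Witness, should also work on a data node)
esxcli vsan cluster preferredfaultdomain get

# Point the Preferred Fault Domain at the site that is still available
# (replace "Preferred-Site" with the exact Fault Domain name shown above)
esxcli vsan cluster preferredfaultdomain set -n "Preferred-Site"
```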
Thanks for testing it!
Yep, checked everything.
The Preferred-Site is THE Preferred Fault Domain, and it is also the site whose ESXi host I'm trying to boot the PFTT=0 VM on, with affinity pinned to that same site. (See screenshot.)
I see you have reproduced the same error in your environment.
Have you found a solution too? I'm starting to think this might be a glitch or something.
When I tested this, I first placed the node on the non-preferred site and the Witness into MM, then validated that it did indeed create an FTT=0 vswp on the Preferred site at VM power-on. I then tested changing the Preferred FD to the other site (with that node still in MM) and it failed to create the vswp.
Just a hunch: can you check/share the output of this command run on either data node? There should be only one entry; if there are more, that could be the problem:
# cmmds-tool find -t PREFERRED_FAULT_DOMAIN
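For reference, each entry in the output is a separate CMMDS record; a quick way to count them (assuming the default cmmds-tool output, where each record carries a uuid field) might be:

```
# Count PREFERRED_FAULT_DOMAIN records; there should be exactly 1
cmmds-tool find -t PREFERRED_FAULT_DOMAIN | grep -c "uuid"
```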
We probably started down the wrong path here. The first thing to validate is that you can indeed create FTT=0 data Objects with just 1 Preferred node/FD available (and more specifically, FP=1, FTT=1 vswp Objects force-provisioned at initial placement as FTT=0).
To rule out a number of possible issues, please create a new VM with the same PFTT=0,SFTT=0,Preferred=Site1 Storage Policy you used for the VM above and validate that the components get created and then try to power it on.
If this succeeds, then there is a problem with the web01 VM and/or its vswp (e.g. maybe the vswp was FTT=1, the default and not tied to the VM's Storage Policy, and is still present and unhealthy). If it fails to create the VM Objects at all, or creates the VM Objects but can't create the vswp, that should clarify a bit more which factors may be problematic here.
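While testing this, it may also help to check what vSAN itself thinks of the existing VM's Objects. On reasonably recent builds something like the following, run on a data node, should show object health; the exact sub-namespace varies by version, so treat this as a sketch:

```
# Summarize vSAN object health as seen from this host
esxcli vsan debug object health summary get

# List objects with their policy and component state (output can be long;
# look for the web01 namespace/vswp objects and any "reduced availability" state)
esxcli vsan debug object list --all
```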
I went through the tests you suggested, and strangely everything now behaves correctly.
But that begs the question: why didn't it work the last time?
Anyway, here is what I did and what happened:
1. Created a new Storage Policy with PFTT=0 and Affinity to the Preferred-Site.
2. Created a new VM and assigned it this new Storage Policy, put it on the ESXi at the Preferred-Site.
3. Put the Secondary-Site into Maintenance Mode.
4. Shutdown the Witness-Site.
5. Log on to the Preferred-Site ESXi Embedded Host Client.
6. Boot this new VM. It works. (As it should)
7. Try booting the VM that didn't work (the one from the beginning of the thread). Strangely, it works too. Note: this VM is still using my 'old' Storage Policy, not the new one created in Step 1.
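For what it's worth, steps 5-7 can also be done from the host shell instead of the Embedded Host Client; a rough sketch (the VM ID 42 is a placeholder, yours will differ):

```
# List VMs registered on this host and note their Vmid
vim-cmd vmsvc/getallvms

# Power on the test VM (replace 42 with the Vmid from the listing above)
vim-cmd vmsvc/power.on 42

# Check its power state afterwards
vim-cmd vmsvc/power.getstate 42
```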
So that confirmed my understanding of how PFTT=0 in a VSAN Stretched Cluster works.
But I still need to find out why sometimes this doesn't work. The issue is too much of a risk during actual disaster scenarios.
If this is not a test cluster (e.g. nested ESXi/random whitebox hardware), I would advise attempting to replicate the initial issue you had and opening a support request with vSAN GSS; the root cause of such issues is not determinable from just an error message and would require more in-depth analysis. In your case, basic initial measures such as running vsan.check_state -r via RVC (provided vCenter is available) may have resolved this, but we can't know that after the fact.
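For reference, the RVC invocation would look roughly like this, run from the vCenter appliance shell after connecting RVC to vCenter; the cluster path is an example and yours will differ:

```
# From the RVC prompt: re-check vSAN state and attempt to refresh/fix
# stale object state where possible (-r = refresh state)
vsan.check_state -r ~/computers/<your-cluster>
```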
It's more or less a test environment but with... certain uptime obligations.
Thanks for all your help. The next time there is a downtime window I'll try to replicate the issue and look into the RVC command.