Can you post a screenshot of what you are seeing?
Also, moving this post to the vSAN section for broader visibility.
By the way, the KB talks specifically about the Distributed Switch: what it calls out is that not all hosts are connected to the same distributed switch. So that is the first thing I would recommend validating in the environment. Go to your network section, check the Distributed Switch, and verify whether the 4th host is using the same switch or not.
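A quick way to validate this outside the UI is to compare DVS membership per host. Here is a minimal sketch of that check in Python; the host-to-switch mapping is hypothetical sample data — in practice you would pull it from vCenter (e.g. with pyVmomi or PowerCLI's Get-VDSwitch -VMHost):

```python
# Sketch: given each host's attached distributed switches, report hosts
# that are not on the expected vSAN DVS. The inventory below is
# hypothetical sample data, not pulled from a live vCenter.

def hosts_missing_dvs(host_to_switches, expected_dvs):
    """Return the hosts whose switch list does not include expected_dvs."""
    return sorted(host for host, switches in host_to_switches.items()
                  if expected_dvs not in switches)

if __name__ == "__main__":
    inventory = {
        "esx01": ["dvs-vSAN", "dvs-vMotion"],
        "esx02": ["dvs-vSAN", "dvs-vMotion"],
        "esx03": ["dvs-vSAN"],
        "esx04": ["dvs-vMotion"],  # the 4th host is on the wrong switch
    }
    print(hosts_missing_dvs(inventory, "dvs-vSAN"))  # ['esx04']
```

Any host returned here is the one the health check is complaining about.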
Thanks for the change of section.
Here is the "current" QS window.
I had the DVS check disabled because I moved the vSAN PG from one DVS to another, and the QS "memory" did not like the change.
But all (now 6) hosts are on the same DVS using the same PG and same (only) uplink. (This is a VSAN DM 6.7 lab BTW)
Also the same (none) NTP reference and (no) lockdown.
The moving from one DVS to another seems to be the initial problem:
Is VDS Alive        Distributed port groups    Recommendation
common.compliant    common.notcompliant        Distributed Port Group on Distributed Switch dvs-vMotion is missing.
But the only way I see to solve this, the host "configure" action, yields a "you have no permission" dialog.
(attachment: hciqs.jpg, 96.7 K)
Seems I have the same issue: I set up with Quickstart and all was fine. I then had an issue with the dedicated vSAN vDS, so I created a new vDS and migrated the vSAN VMkernels over, and now Quickstart is showing the below for the "Host Compliance check for Hyper Converged Cluster".
All 5 hosts have been migrated to the new vDS along with all VMkernels, etc. So this Quickstart doesn't seem to be able to adapt to any changes? Is there a config file I can edit so this common.compliant doesn't appear?
It's likely that the networking configuration changes were applied manually outside of Quickstart, and that is what triggers this alert, i.e. the settings are not compliant with Quickstart because Quickstart did not apply them.
Probably going to have to just silence the Health check if it is irritating:
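If you do go the silencing route, the vSAN health API keeps a per-cluster list of silenced check IDs. Below is a sketch of the bookkeeping only; the check ID shown is hypothetical, and the commented-out live call assumes the VsanHealthSetVsanClusterSilentChecks method from the vSAN Management SDK (verify the name against your SDK version before relying on it):

```python
# Sketch: maintain the list of silenced vSAN health check IDs for a
# cluster. The check ID "vdsalive" is a hypothetical example; query the
# cluster's current silent checks via the vSAN Management SDK for real IDs.

def add_silent_checks(current, to_silence):
    """Return the silent-check list with to_silence appended, no duplicates."""
    merged = list(current)
    for check in to_silence:
        if check not in merged:
            merged.append(check)
    return merged

if __name__ == "__main__":
    silenced = add_silent_checks([], ["vdsalive"])
    print(silenced)
    # Assumed live call, per VMware's vSAN Management SDK samples:
    # healthSystem.VsanHealthSetVsanClusterSilentChecks(
    #     cluster=clusterRef, addSilentChecks=silenced)
```

RVC also exposes this via its vsan.health silent-check commands, if you prefer the console over the SDK.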
Thanks for the reply.
Unfortunately, Quickstart doesn't feature a "Re-Run" option whereby, if you want to make a change, you can do it via Quickstart. Therefore, the only way to change things is to do it manually, as was always the case before Quickstart, but then that triggers the alerts. You could silence them; however, I want this to be all green and valid, so the only way around it was to evacuate the cluster, build a new cluster, and start again. A lot of work, but now I know that this needs to be set up correctly on day 1 and no changes can be made after that.
I have a case open with GSS and talked at length with them about this, and they were very keen for me to provide an email so they can review this feature and improve it.
Another issue with this is that I already had VMkernel ports configured on each host prior to running Quickstart; however, when you select Finish in Quickstart, it deletes all VMkernel ports on the hosts apart from vmk0, which is Management, and then assigns vSAN to vmk1. There are no warnings about this, and no variables in the script to assign other VMkernels different numbers; it seems the script is hardcoded to use vmk1 for vSAN and delete whatever else is there. So beware of this if you have any other ports set up before running it.
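Given that behaviour, it's worth recording the existing VMkernel ports on each host before clicking Finish. A minimal sketch of that pre-flight check; the inventory dict is hypothetical sample data — in practice you'd gather it with esxcli network ip interface list or from vCenter:

```python
# Sketch: flag the VMkernel ports Quickstart would put at risk, i.e.
# everything other than vmk0 (Management), based on the behaviour
# described above. The inventory is hypothetical sample data.

def at_risk_vmks(host_to_vmks):
    """Map each host to its VMkernel ports other than vmk0."""
    return {host: [v for v in vmks if v != "vmk0"]
            for host, vmks in host_to_vmks.items()}

if __name__ == "__main__":
    before = {
        "esx01": ["vmk0", "vmk1", "vmk2"],  # vmk2 e.g. an existing vMotion port
        "esx02": ["vmk0", "vmk1"],
    }
    print(at_risk_vmks(before))
```

Anything this reports per host is configuration you would have to recreate by hand after Quickstart finishes.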
I made an internal query about this:
Is there a document that notes where and how the configuration for HCI
is stored and or used ? Because I encountered a couple of issues and
would love to know how to untangle this:
- I mistakenly created the vSAN PG on the vMotion DVS, and then moved it
to the vSAN DVS. Somewhere quickstart/health check remembers it was on
another DVS and is complaining. I have been unable to clear the fault.
- Also, when adding the new hosts to the cluster, quickstart knows those
are late adds and there is no config for them, but trying "configure"
results in a no-permission error.
How is this supposed to be fixed?
which resulted in the following, partly edited response:
The Quickstart utility is, in my opinion, a work in progress. It is missing things; that is why you have that workaround for the network.
All I know is that the inventory of vSAN is controlled by CMMDS, but that is NOT running before you build the cluster. I have NOT seen any document that answers your question specifically.
So I guess we just have to wait.