jrhaakenson's Accepted Solutions

My issue is resolved now.  It seems that a VM I had migrated from another environment to the vSAN datastore on this one had somehow become corrupted.  It may have been corrupted by Veeam backup software running backup tasks overnight and disturbing the VM file structure on the vSAN datastore, but I'm not completely sure of that.  The migrated VM on the datastore sat in a directory structure of VM_Name->VM_Name->VM files rather than just VM_Name->VM files.  There was an extra VM_Name directory on the vSAN datastore (possibly created by Veeam).

So I completely deleted the troublesome VM directory on the vSAN datastore through vCenter (luckily this VM was unimportant) and rebooted the ESXi host.  Upon reboot, the host services all started and the host reconnected to vCenter and the vSAN network again.  I don't really understand how a single powered-off VM on the vSAN datastore could cause the ESXi host management services to fail to start (even after multiple restarts), but that seems to be what was causing the issue.  All resolved now.
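For anyone hunting the same symptom, here is a quick sketch of how one might spot the nested VM_Name->VM_Name layout from a shell with the datastore path mounted.  The paths and VM names below are placeholders for illustration, not from my environment:

```shell
# Demo setup: one healthy VM directory and one with the nested layout
mkdir -p /tmp/vsan-demo/GoodVM /tmp/vsan-demo/BadVM/BadVM

# Flag any VM directory that contains a subdirectory with the same name
for dir in /tmp/vsan-demo/*/; do
  name=$(basename "$dir")
  if [ -d "$dir$name" ]; then
    echo "nested directory found: $dir$name"
  fi
done
```

On a real datastore you would point the loop at something like /vmfs/volumes/<datastore>/ instead of the demo path.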
Good news.  After 2.5 months, I have a working solution for this.  The binary download repository for vRLCM is https://download2.vmware.com.  (There are a large number of other VMware download repositories that constantly change IP addresses, but we won't get into that here.)  First make sure your DNS server can resolve VMware's plethora of download repositories.  My vRLCM's DNS could, so this was not the issue.

What pointed me in the right direction was running the command: curl -v https://download2.vmware.com.  Its output showed the TLS 1.2 connection being refused.  I had fixed a similar issue on my vIDM appliance by modifying line 117 of the /etc/ssh/sshd_config file to remove some troublesome ciphers.  So I opened my vRLCM's /etc/ssh/sshd_config file and scrolled down to line 117.  The first two ciphers listed are aes256-gcm@openssh.com and aes128-gcm@openssh.com.  By removing these two ciphers and saving sshd_config, I was finally able to open a TLS session with https://download2.vmware.com and download the binary files I needed to upgrade my managed vRealize appliance from vRLCM.  The cipher list on line 117 of my sshd_config now contains only aes256-ctr,aes192-ctr,aes128-ctr, and this seems to work.

I'm not sure what the issue with the first two ciphers was, or why they were allowed to certain VMware update repositories (such as vrealize-update.vmware.com) but not download2.vmware.com.  Furthermore, I'm not sure why these two ciphers by default do not affect other users but affected my appliance.  I requested answers to these questions through my open VMware support ticket, but they have not been able to provide them at this time.  Still, I hope this information is useful to anyone who experiences update issues with vRealize appliances.  I have used it to fix update issues on both vRLCM and vIDM in my environment.
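The edit itself can be scripted.  Below is a minimal sketch against a throwaway copy of the Ciphers line rather than the live file (the real file is /etc/ssh/sshd_config; keep the .bak backup that sed creates before touching it):

```shell
# Work on a sample copy of the Ciphers line, not the live sshd_config
printf 'Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr\n' > /tmp/sshd_config.sample

# Remove the two GCM ciphers (and their trailing commas), keeping a .bak backup
sed -i.bak \
  -e 's/aes256-gcm@openssh\.com,//' \
  -e 's/aes128-gcm@openssh\.com,//' \
  /tmp/sshd_config.sample

cat /tmp/sshd_config.sample   # Ciphers aes256-ctr,aes192-ctr,aes128-ctr
```

After editing the real file, restart sshd for the change to take effect.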
I found the solution.  In vSphere under Configure->Advanced Settings, the Advanced vCenter Server Setting vpxd.certmgmt.mode was configured as custom.  I changed it to thumbprint and it let me add the ESXi hosts.  I believe our intent is to manage our own certificates on the ESXi hosts, but I'll need to check with my certificate admin to see how we are doing it.  If this value is set to custom, does that mean that a custom certificate must be installed on the ESXi host for it to be managed by vSphere?  Likewise, if it is set to thumbprint, will vSphere add the SSL thumbprint and manage the host that way?
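For reference, here is my understanding of the three vpxd.certmgmt.mode values, paraphrased rather than official wording, so worth verifying against VMware's documentation:

```shell
# vpxd.certmgmt.mode (vCenter Server Advanced Setting):
#   vmca       - VMCA issues and manages ESXi host certificates (the default)
#   custom     - host certificates come from your own CA; vCenter must trust
#                that CA chain before hosts can be added
#   thumbprint - vCenter trusts each host by its SSL thumbprint only (legacy
#                mode, no certificate validation)
```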
Yes. I was able to fix my AD joining issue by synchronizing the time correctly across the board.  Since I don't have a valid NTP server to use for the ESXi host, I had the ESXi host using the Domain Controllers as an NTP source.  It is generally not best practice to sync a host's time with a VM running on that host.  As a result, the ESXi host's time was wrong, and VMs were subsequently pulling time from the host rather than the Domain Controllers.  This included my VCSA, which was pulling the wrong time from the host.  It's not an issue for the Windows VMs, because they sync time correctly with the Domain Controllers via their Group Policy settings.  But the VCSA wasn't set to synchronize with the Domain Controllers, so it was pulling its incorrect time from the ESXi host.

After changing the host to manual time, I set the VCSA to synchronize with the Domain Controllers.  Once the VCSA had its time synchronized with the Domain Controllers, I was able to join AD, restart, and log in with my AD accounts once again.

I think what sent me in the wrong direction in the first place was that my VCSA time was close (maybe 15-30 minutes off), not 10 hours off as you experienced, so I didn't suspect time issues initially.  At any rate, time synchronization between the Domain Controllers and the VCSA was the cause of this issue.  Thanks for your contribution.
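To illustrate why even 15-30 minutes matters: Kerberos rejects authentication once clocks drift beyond its tolerance, which is 5 minutes by default.  A toy check, with placeholder epoch timestamps standing in for `date +%s` output from the VCSA and a Domain Controller:

```shell
vcsa_time=1700000000   # placeholder: epoch seconds reported by the VCSA
dc_time=1700001200     # placeholder: epoch seconds reported by a DC (20 min ahead)

# Absolute skew between the two clocks, in seconds
if [ "$dc_time" -gt "$vcsa_time" ]; then
  skew=$((dc_time - vcsa_time))
else
  skew=$((vcsa_time - dc_time))
fi

if [ "$skew" -gt 300 ]; then
  echo "skew ${skew}s exceeds the default Kerberos tolerance; AD join will likely fail"
else
  echo "skew ${skew}s is within tolerance"
fi
```

With the 20-minute skew above, the check reports a failure, which matches the AD join behavior I saw.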
Thanks for the suggestion.  I was able to verify that I did have my Domain\SA admin accounts set on the ESXi hosts themselves.  Since I am running ESXi version 6.7.0, I had to find the host permissions by right-clicking on Host in the web interface and selecting Permissions there.  Yes, the Domain\Group or user accounts do need to be applied in this location.

However, I determined the cause of my specific issue: the host names of my ESXi hosts were too long.  They were more than 15 characters long, so once they joined my Windows Server 2016 domain, the domain automatically gave them a truncated pre-Windows 2000 name of no more than 15 characters.  It appears that ESXi follows the pre-Windows 2000 rules when it comes to host names on a Windows domain.  So my ESXi host names technically didn't match their domain names, and thus I was not able to authenticate a domain account with the ESXi host.  After shortening my ESXi host names to 15 characters or fewer, re-joining them to the domain, and once again verifying all the information in my OP (along with verifying that my Domain\Group account was added to the ESXi host permissions), I am able to authenticate to the ESXi hosts with my domain credentials.
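A trivial way to pre-check candidate host names against the 15-character pre-Windows 2000 (NetBIOS) limit before joining the domain.  The host names below are made up for illustration:

```shell
# Flag names that Windows would truncate to a pre-Windows 2000 name
check_hostname() {
  name=$1
  len=${#name}
  if [ "$len" -gt 15 ]; then
    echo "$name: $len chars - exceeds 15-char NetBIOS limit"
  else
    echo "$name: $len chars - OK"
  fi
}

check_hostname esxi-datacenter-host01   # 22 chars: would be truncated
check_hostname esxi-dc-host01           # 14 chars: matches its domain name
```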
After troubleshooting for most of the day, my ESXi host reconnected with the vCenter server.  I'd call this PFM because I'm not exactly sure what fixed it, but it started working after I ran the Restart Management Agents option from the ESXi KVM GUI, located under F2 -> Troubleshooting Mode Options -> Restart Management Agents.

The odd thing is I had spent a large portion of the day in the ESXi host shell trying to restart all services using the services.sh restart command.  That method would always hang on certain services, most notably the lsass service, an Active Directory service, which again points to the unintentional removal of this host's DNS entry as the likely culprit that started all of this.  I would have assumed that Restart Management Agents and services.sh restart do the same thing.  Maybe not?

At this time, it still looks like Restart Management Agents is running in the KVM GUI.  The GUI still seems very slow, but the host reconnected to vCenter and I am able to log in as root using the vSphere/web console once again.  My guess is that restarting the management agents successfully restarted something that allowed the server to resolve correctly from the DNS server again.  I need to get smarter on this, including High Availability, which seemed to "work" for the VMs on this host during this whole ordeal but also reported errors.
Yes, this Group Policy setting was the culprit for my environment as well, but the other information in this thread was very useful too.  To summarize the fix: the Network Security: Configure encryption types allowed for Kerberos setting in Group Policy needs the RC4_HMAC_MD5 checkbox enabled.  The policy setting is located at Computer Configuration > Windows Settings > Security Settings > Local Policies > Security Options > Network Security: Configure encryption types allowed for Kerberos.  This should allow a Windows 10 machine to use the vCenter Windows session authentication checkbox during login to the vSphere Web Client.

The other fix actions, to get the checkbox un-greyed and to get the Enhanced Authentication Plug-in working in IE, involved adding the vCenter login screen URL to the browser's Intranet Sites list.  This may also need to be done in Group Policy under Site to Zone Assignment List with a value of 1 for Intranet.  Getting the Enhanced Authentication Plug-in to work in Firefox involved browsing to https://vmware-plugin:8094 and permanently storing this exception in the browser.  I'm still not able to get the Enhanced Authentication Plug-in working in Edge at this time.

I am also working through untrusted certificates from the VCSA, for which I have been working in the VCSA certificate manager: regenerating/reissuing certificates, downloading them, and importing them into the proper certificate stores for Windows and browsers, but no luck here yet.  My certificate issue seems to involve the VCSA certificate's CN=<IP Address>, whereas my generated certificates use CN=<hostname>.
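Behind that checkbox list is the msDS-SupportedEncryptionTypes bitmask (bit values per Microsoft's documentation: 0x1 DES-CBC-CRC, 0x2 DES-CBC-MD5, 0x4 RC4-HMAC, 0x8 AES128, 0x10 AES256).  A small sketch decoding the mask, to sanity-check what a given value actually enables:

```shell
# Decode the Kerberos-relevant bits of an msDS-SupportedEncryptionTypes value
decode_enctypes() {
  v=$1
  [ $((v & 4))  -ne 0 ] && echo RC4_HMAC_MD5
  [ $((v & 8))  -ne 0 ] && echo AES128_CTS_HMAC_SHA1_96
  [ $((v & 16)) -ne 0 ] && echo AES256_CTS_HMAC_SHA1_96
  return 0
}

# 0x1C (28) = RC4 + AES128 + AES256: RC4 re-enabled alongside AES, which is
# the combination this thread's fix produces
decode_enctypes 28
```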