The server had a drive go bad. I replaced it, but the new drive wouldn't join the array, so I had to power down all the hosts and put the server into maintenance mode.
I powered it off, set the new drive as a global hot spare, and the controller started rebuilding onto it. When the server powered back up, it showed the error I have attached. I rebooted again, chose the recovery option, and selected the last restore point. The server then booted up to the yellow DCUI screen and all looked well. I then tried to log in so I could power the hosts on the server back up, but the password that was set up before all this no longer works. I tried every password I could think of, plus a blank one, and still nothing. I don't have any monitoring on the server showing that the account was disabled, and the username I am using is root.
Hello.
Can you give more details?
Are the ESXi host's disks internal (inside the server)?
Or are the ESXi host's disks external on some storage array? If so, which brand?
Do you have RAID configured, and of what type (RAID 5, RAID 6, or RAID 1)?
Do you have vSAN configured?
Do you have a vCenter to manage the ESXi host?
Can you give more details?
Are the ESXi host's disks internal (inside the server)? - Yes, they are internal.
Or are the ESXi host's disks external on some storage array? If so, which brand? - No.
Do you have RAID configured, and of what type (RAID 5, RAID 6, or RAID 1)? - RAID 5.
Do you have vSAN configured? - No.
Do you have a vCenter to manage the ESXi host? - Yes, but I had it set up so I could also log in locally. I can't log in locally, over SSH, or through the web UI.
Some of the hosts had been installed on the same storage as ESXi; not sure if that's an issue too. This is the array that had the drive go bad.
Hi
For visitors who use the normal VMware slang, your post does not make a lot of sense.
> Some of the hosts had been installed on the same storage as ESXi; not sure if that's an issue too.
In normal VMware slang, hosts are ESXi servers - you are using "hosts" to talk about VMs instead.
If you want answers, I recommend avoiding that confusion and just adapting to the slang we all use here.
Your screenshot looks like the FAT filesystem of the bootbank partition is corrupt.
The corruption may affect only the state.tgz (which stores your current config, passwords, and so on),
but that is quite unlikely.
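If you want to test that theory before doing anything drastic, state.tgz unpacks with ordinary tools. A minimal sketch, assuming you can mount the bootbank FAT partition from a rescue system (the mount point /mnt/bootbank is just a placeholder):

cp /mnt/bootbank/state.tgz /tmp/ && cd /tmp
tar -xzf state.tgz     # should yield local.tgz
tar -xzf local.tgz     # should yield an etc/ tree with the saved config
ls etc/                # passwd, shadow, vmware/ and friends live here

If tar already fails on state.tgz itself, the file is damaged and the corruption theory looks much more likely.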
RAID 5 rebuilds tend to corrupt VMFS volumes, so you may have a more serious problem than a harmless "can't log in" issue.
Suggestion: install a fresh ESXi to a USB boot stick - make sure not to overwrite any existing datastores.
Then check whether the new ESXi can still use the original datastore and whether the VMs (or what you call hosts) are still readable.
Be careful when you try this - it is super easy to misconfigure the new install and do a lot of harm.
I would suggest booting into your favorite Linux LiveCD first and checking the VMFS datastore before you try any more dangerous options.
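For that read-only check from a LiveCD, the open-source vmfs-tools package (vmfs6-tools for VMFS 6) can mount a VMFS volume via FUSE without writing to it. A rough sketch, assuming the datastore partition turns out to be /dev/sdb1 on your box (verify with lsblk first):

apt-get install vmfs-tools      # or vmfs6-tools for VMFS 6 datastores
mkdir /mnt/vmfs
vmfs-fuse /dev/sdb1 /mnt/vmfs   # read-only FUSE mount
ls /mnt/vmfs                    # the VM folders should show up here

If the mount fails or the folders are missing, do not write anything to the disk and look into proper VMFS recovery before continuing.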
Ulli
ESXi is the host; it is hosting your VMs regardless of what software they have installed inside them.
Your VMs are guests, and the OS each one runs is the guest OS.
I was able to get it back up and running. The password was lost to me and the whole dev team, but I was able to reset it. The server booted up and I can see the storage, but it is not showing as a datastore. One of the two datastores is missing: I can see both sets of storage, but only one datastore shows up. Looking at it, the volume seems to be partitioned already but is not showing in VMware under the host.
Try
vmkfstools -V
wait a while - then
ls /vmfs/volumes/
Do the datastores show up now?
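If the volume shows under /vmfs/volumes/ but the datastore still does not mount, one possibility (just a guess from here) is that ESXi flagged it as a snapshot after the rebuild because the volume signature no longer matches. The esxcli commands for that case:

esxcli storage vmfs snapshot list
# if the missing datastore is listed, mount it keeping its signature:
esxcli storage vmfs snapshot mount -l <datastore-label>
# or write a new signature instead (the datastore gets a snap-xxx name):
esxcli storage vmfs snapshot resignature -l <datastore-label>

<datastore-label> here is whatever label the list command prints for your volume.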
Ulli
Good afternoon!
We are having this problem,