Some vSAN weirdness: all was well with the beta, and all was well for a few weeks on another test lab.
So I rebuilt the servers and set up vSAN from new... and it all went pear-shaped. The disk group was missing from one of the servers in the GUI, and only 2 disks showed as eligible. Thinking this was wrong, I removed all disk groups, removed vSAN, and removed all hosts from the cluster...
Started again: a little better, all disk groups correct across hosts.
3 x hosts
1 x SSD (real, not fake), 1 x HDD (magnetic), as per the minimums
(I have many more hosts, up to 64, and I know the max is 32, plus more SSDs and HDDs to play with, but I wanted to test scaling... on the fly...)
But whenever I deploy a VM, I get an error...
and I'm puzzled!
Thanks, I would like to see a couple of other things:
policy....
just 1
I'll get the other info now...
nothing strange there either... hmmm
Can you run "esxcli vsan storage list" and check the "In CMMDS" value for each disk? It should be set to true.
Alternatively, from RVC, run "vsan.disks_stats" and check that each host is indeed contributing an SSD & HDD.
Maybe the UI isn't reporting correctly. The error certainly suggests a lack of storage somewhere.
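For reference, a quick way to spot disks that haven't joined CMMDS is to filter the command's output. The device names and abridged fields below are made-up sample data, not taken from this thread:

```shell
# Abridged sample output from `esxcli vsan storage list` (hypothetical
# device names; the real command prints many more fields per device).
sample='naa.5000c5006afe1234
   Is SSD: true
   Used by this host: true
   In CMMDS: false
naa.5000c5006afe5678
   Is SSD: false
   Used by this host: true
   In CMMDS: false'

# Count devices that are not in CMMDS; anything above 0 means the host
# is not actually contributing that storage to the cluster.
printf '%s\n' "$sample" | grep -c 'In CMMDS: false'
```

On a live host, the equivalent one-liner would be `esxcli vsan storage list | grep 'In CMMDS'`.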
Okay, that could be the issue. On esxdev001, using "esxcli vsan storage list":
Used by this host: true
In CMMDS: false
for both the SSD and the magnetic HDD.
I assume I just remove the disk group and re-create it?
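For what it's worth, removing and re-creating a disk group from the CLI would look roughly like this. The device IDs are placeholders (take the real ones from `esxcli vsan storage list`), and the commands are only echoed here; drop the `run` wrapper to execute them on the host, and double-check the flag names against your 5.5 build:

```shell
# Placeholder device IDs, not real ones; substitute your own.
SSD="naa.5000c5006afe1234"   # the disk group's SSD
HDD="naa.5000c5006afe5678"   # its magnetic disk

# Safety wrapper: print each command instead of running it.
run() { echo "+ $*"; }

# Removing the SSD tears down the entire disk group, HDDs included.
run esxcli vsan storage remove --ssd="$SSD"

# Re-create the group by pairing the SSD with its magnetic disk(s).
run esxcli vsan storage add --ssd="$SSD" --disk="$HDD"
```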
Okay, fixed... This has been a weird day, because I've had three different clusters on 5.5 U1 do the same thing.

I also noticed that esxdev001 was in a weird state: it would not accept vMotion migrations, complaining that the "host could not do this in this state", although it could power on VMs. It would also not go into maintenance mode and move VMs off.

So, a quick reboot/restart, and checking again:

Used by this host: true
In CMMDS: true

I think it's going to be a while before I get a nice warm feeling about vSAN, before production!

Thanks guys, Duncan & CHogan!

PS I should have done what I tell users: "just turn it off and on again". Is it fixed!
Hmmm... I wonder how this could have happened in the first place. Did you perhaps accidentally "clone" some ESXi hosts?
Best regards,
Joerg