VMware Cloud Community
beeguar
Enthusiast

Disk groups are not visible from the cluster. The vSAN datastore exists, but 2 of the 8 hosts in the cluster don't see it; their storage shows as unrecognized.

http://i.imgur.com/pqAXtFl.png

http://i.imgur.com/BnztaDD.png

Not sure how to even tear it down and rebuild it if the disk groups aren't visible. The disks report good health on each host's storage adapters.

Currently running the latest build of vCenter 5.5. The hosts are running ESXi 5.5 build 2068190.

I just built it and am happy to tear it down and rebuild. I'm just not sure why the datastore isn't visible on two hosts, or why the disk groups are recognized on only 3 hosts when more are contributing. It's also odd that I can't get the disk groups to populate in vCenter; I tried two different browsers (Chrome and IE).


Accepted Solutions
beeguar
Enthusiast

I've got it working now.

All ESXi hosts are on identical 5.5 builds, and all hosts are homogeneous in CPU, total RAM, storage controller, and installed disks.

To fix it, I had to manually destroy all traces of vSAN on every single host node:

1) Put the hosts into maintenance mode and remove them from the cluster. I was unable to disable vSAN on the cluster at all, so I disabled it on each host node manually via the CLI commands below, then logged out of the vCenter web client and back in, after which the option to disable it on the cluster finally refreshed.

esxcli vsan cluster get - check the vSAN cluster status of the host.

esxcli vsan cluster leave - drop the host from the vSAN cluster.

esxcli vsan storage list - list the disks in the host's disk group.

esxcli vsan storage remove -d naa.id_of_magnetic_disk_here - remove magnetic disks from the disk group one at a time. (You can skip this by using the next command to remove only the SSD, which drops every disk in that host's disk group.)

esxcli vsan storage remove -s naa.id_of_solid_state_disk_here - remove the SSD, which also drops all magnetic disks associated with it in that disk group.

After this, I was able to add the hosts back to the cluster, exit maintenance mode, and configure the disk groups. The aggregate capacity of the vSAN datastore is now correct and everything is functional.
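The per-host cleanup sequence above can be sketched as a small script. This is a hedged sketch, not an official tool: it defaults to a dry run that only prints the commands, and the naa ID is a placeholder you must replace with the SSD ID reported by `esxcli vsan storage list`. Set DRY_RUN=0 in an ESXi shell to actually execute it.

```shell
#!/bin/sh
# Dry-run sketch of the per-host vSAN teardown from this thread.
# DRY_RUN=1 (default) only prints each command; DRY_RUN=0 executes it.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "+ $*"        # print the command instead of running it
    else
        "$@"
    fi
}

# 1) Check this host's vSAN cluster membership/status.
run esxcli vsan cluster get

# 2) List the disks in this host's disk group; note the SSD's naa ID.
run esxcli vsan storage list

# 3) Removing the SSD drops the whole disk group (SSD + magnetic disks).
#    naa.SSD_ID_PLACEHOLDER is a placeholder, not a real device ID.
run esxcli vsan storage remove -s naa.SSD_ID_PLACEHOLDER

# 4) Drop the host from the vSAN cluster.
run esxcli vsan cluster leave
```

Removing the SSD first (step 3) mirrors the shortcut described above: it tears down the entire disk group in one command instead of removing each magnetic disk with `-d`.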

One more question for those of you still reading... how do I set the storage policy such that any VM which migrates to (or is built on) the vSAN datastore will instantly pick up the default storage policy I built for vSAN?

Thanks to anyone who's been following.


8 Replies
zdickinson
Expert

Is every host in the cluster contributing storage to the vSAN cluster? Thank you, Zach.

beeguar
Enthusiast

No, 5 of the 8 currently do, with plans for all 8 to contribute once the solution is stable and data can be migrated to it.

zdickinson
Expert

I guess I would expect 5 hosts to "see" disk groups and the other 3 not to, with all 8 being able to "see"/access the vSAN datastore. Is this the case? Thank you, Zach.

beeguar
Enthusiast

Not the case. Very odd. All hosts can talk to each other over the vSAN network. I'm really not sure where I'm going wrong here, but I'm tearing down the entire vSAN cluster and rebuilding it right now. There was no data on it. I had to rip it apart host by host via the CLI using the esxcli vsan commands.

Just very odd to me that the disk groups were not visible under the vSAN settings. Hopefully when I rebuild it, they will be.

ramakrishnak
VMware Employee

Can you check whether you have a valid license on all of these hosts? Also, are all of them on the same ESXi build version?

Thanks,

zdickinson
Expert

"One more question for those of you still reading... how do I set the storage policy such that any VM which migrates to (or is built on) the vSAN datastore will instantly pick up the default storage policy I built for vSAN?"

I believe in 5.5 you can't change what the default policy is. Any VM that is migrated or created without a policy will get the default. That changed in 6.0; you can now change what the default policy is. Thank you, Zach.
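As a hedged aside: separately from the SPBM policies managed in the web client, ESXi also keeps host-level vSAN policy defaults reachable from the `esxcli vsan policy` namespace. The sketch below only echoes the commands rather than running them (esxcli is only available in an ESXi shell, and you should verify the namespace and its options against your own build before relying on it); the hostFailuresToTolerate=1 value is an illustrative example, not a recommendation.

```shell
#!/bin/sh
# Dry-run sketch: inspect/set the host-level vSAN default policy.
# Commands are stored as strings and echoed; drop the echo and run
# them directly in an ESXi shell after verifying against your build.
GETDEFAULT="esxcli vsan policy getdefault"
# Example value: tolerate one host failure for vdisk objects created
# without an explicit SPBM policy (illustrative only).
SETDEFAULT='esxcli vsan policy setdefault -c vdisk -p (("hostFailuresToTolerate" i1))'

echo "$GETDEFAULT"
echo "$SETDEFAULT"
```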

beeguar
Enthusiast

Thanks, I'm upgrading to 6 now so I can do that. It's a pain in the butt having to set it per VM!
