CAMARAMAMY
Contributor

vSAN Cluster

The migration to a new DNS name and IP address was disrupted by an issue. There were originally four nodes in the vSAN cluster, with the vCenter Server hosted on ESXi host 4. After host 4 was rebooted, it became isolated from the rest of the cluster: the remaining three nodes formed a vSAN cluster with ESXi host 2 as master, while host 4 sadly became the master of its own single-node vSAN cluster. As a result, I have lost control of the vCenter Server.


I really need help.

1 Reply
TheBobkin
VMware Employee

@CAMARAMAMY, your data is likely inaccessible because the cluster is network-partitioned and at least one node is isolated. Ignoring corner-cases for now, there are basically two aspects that govern whether vSAN nodes can communicate with one another:

 

1. They have a vSAN-tagged vmkernel (vmk) interface that can reach the other nodes' vSAN-tagged vmk interfaces at the MTU those vmk interfaces have set (assuming the rest of the network stack supports this end-to-end), e.g. they are in the same VLAN and subnet and no required ports are blocked (the most important being UDP 12321 and TCP 2233).
2. All nodes have unicastagent entries informing them of the correct and current IPs and UUIDs of the other nodes.
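
Since two masters are being reported here (host 2 and host 4), a quick way to confirm a partition is to compare the cluster state each node reports. A hedged example, assuming SSH access to each host:

# esxcli vsan cluster get

If the Sub-Cluster Master UUID and Sub-Cluster Member Count differ between host 4 and the other three nodes, the cluster is partitioned.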

 

All of the information required to troubleshoot and test the above (without vCenter available) can be retrieved via the following:
# esxcfg-nics -l                          (physical NICs, link state and speed)
# esxcfg-vmknic -l                        (vmkernel interfaces, their IPs and MTU)
# esxcfg-vswitch -l                       (vSwitch and portgroup layout)
# esxcli vsan network list                (which vmk is vSAN-tagged)
# esxcli vsan cluster unicastagent list   (unicastagent entries on this node)
# cmmds-tool find -t HOSTNAME             (node hostnames and UUIDs)

 

If unicastagent entries on any node are incomplete or incorrect, these can be rectified using the above information (from each node) via the steps here:
https://kb.vmware.com/s/article/2150303
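
For reference, the KB walks through removing and re-adding entries; a hedged sketch of the add syntax, where the UUID and IP are placeholders that must come from the other nodes' actual values (as gathered above):

# esxcli vsan cluster unicastagent add -t node -u <Host_UUID> -U true -a <Host_vSAN_IP> -p 12321

Each node needs an entry for every other node except itself.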

 

But before doing so, do of course validate that there isn't some network communication issue between the configured IPs (e.g. no vmkping response, an MTU mismatch, no correctly tagged vmk, etc.).
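
For the vmkping check, a hedged example assuming vmk1 is the vSAN-tagged interface and 172.16.10.12 is a peer node's vSAN IP (substitute your own values from the commands above):

# vmkping -I vmk1 172.16.10.12
# vmkping -I vmk1 -d -s 8972 172.16.10.12

The second command sets don't-fragment with a payload sized for MTU 9000; if the plain ping works but this one fails, suspect an MTU mismatch somewhere in the path. For MTU 1500, use -s 1472 instead.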
