VMware Cloud Community
kmcd03
Contributor

Is having mgmt and vSAN traffic configured on vmk0 of the witness a supported configuration for a vSphere 6.6 Stretched Cluster?

I've been trying to deploy the witness for a 12-node Stretched vSAN 6.6 cluster and am having some L3 problems getting the vSAN vmk interfaces used by the data nodes to connect to the witness host's vSAN vmk interface. The data nodes are at two different data centers connected by stretched L2, and the witness is at a third/separate location, using the VM appliance as the witness host.

I was able to get the witness to work by having vmk0 on the witness ESXi host carry both Management and vSAN traffic (after I unchecked the vSAN box on vmk1 of the witness ESXi host).
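For anyone doing the same from the command line, the esxcli equivalent of that UI change on the witness host would be roughly the following (a sketch only; vmk names assume the default witness appliance layout):

# On the witness ESXi host (vmk0 = Management, vmk1 = witnessPg in the default appliance)
esxcli vsan network remove -i vmk1        # stop tagging vSAN traffic on vmk1
esxcli vsan network ip add -i vmk0 -T=vsan   # tag vSAN traffic (the default traffic type) on vmk0
esxcli vsan network list                  # confirm which vmk(s) now carry vSAN traffic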

I added a static route on my data nodes so their vSAN vmk interfaces (vmk1) can ping the witness IP used for Management and now vSAN, and from the witness I can ping the IPs of the data nodes' vSAN vmk interfaces.
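A rough sketch of those routes and the reachability tests (the subnets, gateway, and IPs below are placeholders for illustration, not the actual addressing):

# On each data node: route the witness site's network via the vSAN gateway
esxcli network ip route ipv4 add --network=192.168.110.0/24 --gateway=172.16.10.1
esxcli network ip route ipv4 list         # verify the route is in place

# Test from a data node's vSAN vmk to the witness
vmkping -I vmk1 192.168.110.20

# And from the witness back to a data node's vSAN vmk
vmkping -I vmk0 172.16.10.11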

I was able to successfully configure the stretched cluster and add the witness.  The healthchecks are green.

My question is whether having both Management *and* vSAN traffic on the witness vmk0 is a supported configuration.  There is a page on Storagehub for Witness Traffic Separation (WTS) that describes this configuration, but it isn't clear to me whether this is supported for a Stretched Cluster or only for a 2 Node Direct Connect cluster.

Thanks!

4 Replies
TheBobkin
Champion

Hello kmcd03​,

Welcome to Communities.

Data traffic to the Witness is fairly minimal and shouldn't interfere/contend with Management traffic, so yes, this is supported in vSAN 6.7.

"If both the Management (vmk0) and witnessPg (vmk1) interfaces have to be on the same IP segment, the witnessPg VMkernel interface must have “vSAN Traffic” untagged and the Management VMkernel (vmk0) must have “vSAN Traffic” tagged. Tagging vmk0 (only) for “vSAN Traffic” in this situation is fully supported."

https://blogs.vmware.com/virtualblocks/2018/05/16/witness-host-traffic-tagging/
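A quick way to sanity-check the tagging on the witness host (a generic esxcli check, not from the blog post):

esxcli vsan network list   # should show vmk0 tagged for vSAN traffic and vmk1 no longer listed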

Bob

Edit: Versions

kmcd03
Contributor

Thanks for the reply.  For me, some of the confusion is that the documentation, like Storagehub or the config guide, isn't always clear when it changes context between 2 Node and Stretched Cluster.

TheBobkin
Champion

Hello kmcd03​,

Sincere apologies, but it would appear I didn't fully read your question - while Management + Witness traffic on the same vmk is supported, WTS itself is only officially supported in Stretched Clusters (e.g. 2+2+1 or bigger) in vSAN 6.7.

Can you provide more details of the issues you are having with L3 traffic to Witness?

L3 is preferable for this, which is outlined in detail on page 9 here:

https://storagehub.vmware.com/export_to_pdf/vmware-r-vsan-tm-network-design
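When checking the L3 path to the Witness, it's also worth confirming routing and MTU end-to-end from the data nodes' vSAN vmk; something along these lines (the witness IP below is a placeholder):

# From a data node, ping the witness out the vSAN vmk with don't-fragment set
# to validate both routing and MTU along the L3 path
vmkping -I vmk1 -d -s 1472 192.168.110.20   # 1472 = 1500-byte MTU minus ICMP/IP headers
esxcli network ip route ipv4 list           # confirm which route/gateway the traffic will take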

Bob

kmcd03
Contributor

We have a pair of switches for data node/ESXi host management and VM traffic.  A second pair of switches was added for vSAN traffic to the ESXi hosts, and a 10 Gb circuit between the data centers is also on this second stack.  The data nodes are using L2 for vSAN.  We're using a /23 for the vSAN vmk interfaces, with the bottom /24 at one DC and the top /24 at the other.

A route was configured for a switch at the primary data center.  Hosts at one site could connect to the witness, but the witness couldn't connect to the hosts.  We had to create a route for the other data center and also add static host routes (/32) for each host before traffic would work.

I opened a ticket with GSS to confirm whether the Witness host could consolidate Management and Witness traffic onto vmk0.  The Witness added with no errors and also passed the health checks.  However, we discovered an unintended side effect: access to the KMS (coincidentally at the same site and on the same VLAN/IP subnet as the Witness) broke.  It looks like the traffic for the encryption KMS is traversing the vSAN vmk of the data hosts.
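To see which vmk the KMS traffic is actually leaving on, something like this on a data node can help (the KMS IP below is a placeholder, and 5696 assumes the default KMIP port):

# Which route (and therefore which source vmk) will the host use to reach the KMS subnet?
esxcli network ip route ipv4 list

# Any established KMIP connections and their local (source) address
esxcli network ip connection list | grep 5696

# Compare reachability out the management vmk vs. the vSAN vmk
vmkping -I vmk0 192.168.110.50
vmkping -I vmk1 192.168.110.50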
