StevenU
Contributor

New vSAN Network Setup

I've been racking my brain trying to get this to work.

I have three hosts. Below are the details per host.

Host Info

ESXi 6.7U3

One 1 Gb Ethernet uplink

One 10 Gb fiber uplink

Physical Network Switch

VLAN 1: Management (using the 1 Gb NIC)

VLAN 2: vSAN (using the 10 Gb NIC)

Jumbo frames have been enabled on the ports connected to the 10 Gb NICs.

I've tagged both VLANs on all of the ports.

Install Issues

I'm installing vCSA and creating the vSAN cluster through the vCSA installer.  Only the 10 Gb NICs were added.  I left the 1 Gb NICs that are on the management VLAN on the ESXi hosts' standard switches.  I've set the vDS to the vSAN VLAN.  After the install completed, I started getting several warnings.

vSAN Cluster Partition (I tried removing and manually adding each host to the same cluster but that did not work.)

vSAN: Basic (unicast) connectivity check (Details of this error show failed pings from the management network IPs and not the vSAN IPs.)

vSAN: MTU check (ping with large packet size) (Details of this error show failed pings from the management network IPs and not the vSAN IPs.)

vSAN object health

As I'm researching these, they all seem to point back to a networking issue that I haven't been able to figure out.  I've gone back into the vDS and confirmed that the MTU for every vmkernel adapter on each host is set to 9000.
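One way to confirm the 9000 MTU actually works end to end is a jumbo-frame vmkping between the vSAN vmkernel adapters from the ESXi shell. A minimal sketch, assuming vmk1 is the vSAN vmkernel and 192.168.2.12 is another host's vSAN IP (substitute your own interface names and addresses):

```shell
# Confirm the vmkernel adapter MTU actually took effect on this host
esxcli network ip interface list

# Jumbo-frame ping to a peer's vSAN IP: -d sets "don't fragment",
# and -s 8972 is the largest ICMP payload that fits in a 9000-byte
# MTU (9000 - 20 bytes IP header - 8 bytes ICMP header)
vmkping -I vmk1 -d -s 8972 192.168.2.12
```

If this ping fails while a plain `vmkping -I vmk1 192.168.2.12` succeeds, the MTU mismatch is somewhere in the physical path rather than on the vmkernel adapters.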

All of the standard switches on the ESXi hosts, which still have the 1 Gb NIC uplink, have connectivity.  The vCSA VM is using the ESXi host standard switch, so I'm able to connect to it.

Any thoughts on where I went wrong?

Thanks in advance.

2 Replies
seamusobr1
Enthusiast

Why only one NIC for each?

lucasbernadsky
Hot Shot

Hi, first of all, why not use a DVS for management, vMotion, and vSAN? It's much easier to manage. I would assign a separate VMkernel adapter for each service, and in each dvPortgroup's teaming policy, set the correct vmnic as active and the other one as standby (Configure NIC Teaming, Failover, and Load Balancing on a Distributed Port Group or Distributed Port).  Also, use NIOC to prioritize vSAN traffic over management.

Check the DVS MTU configuration. Try vmkping from every vSAN vmkernel to every other vSAN vmkernel with 9000-byte packets. Try disabling TSO and then sending 9124-byte packets; some switches, like the Dell PowerConnect, need that MTU overhead (Enable or Disable TSO on an ESXi Host). Also check the physical switch ports for dropped packets or other problems.
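As a sketch of the TSO part, hardware TSO can be toggled through the advanced settings from the ESXi shell (the vmnic name below is an example; use whichever NIC carries your vSAN traffic):

```shell
# Check the current hardware TSO setting
esxcli system settings advanced list -o /Net/UseHwTSO

# Disable hardware TSO (set the value back to 1 to re-enable)
esxcli system settings advanced set -o /Net/UseHwTSO -i 0

# While you're at it, look for drops/errors on the physical uplink
esxcli network nic stats get -n vmnic1
```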

Check that the vSAN VMkernel has only the vSAN service enabled, and make sure the vMotion and management vmkernels likewise have only their own services enabled.
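You can verify the service tagging per host from the ESXi shell; a quick sketch, assuming vmk1 is the intended vSAN adapter:

```shell
# Show which vmkernel adapter is tagged for vSAN traffic on this host
esxcli vsan network list

# Show all service tags (Management, VMotion, vsan, ...) on a vmkernel
esxcli network ip interface tag get -i vmk1
```

If `esxcli vsan network list` returns a vmk on the management VLAN instead of the 10 Gb one, that would also explain the health checks pinging the management IPs.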

A helpful test is to run the vSAN Network Proactive Test (Proactive Tests).

For the vSAN Cluster Partition alert, please take a look at these articles:

How to Fix VSAN Cluster Partition in Nested VSAN LAB

Fix wrong vSAN Cluster partitions > ProVirtualzone - Virtual Infrastructures
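To see the partition from the CLI before diving into those articles, compare the cluster view from each host; a sketch:

```shell
# On each host: a partition shows up as hosts reporting different
# Sub-Cluster UUIDs or different member counts
esxcli vsan cluster get

# Verify the unicast agent entries point at the other hosts' vSAN
# vmkernel IPs, not their management IPs
esxcli vsan cluster unicastagent list
```

Given that your health checks are pinging the management IPs, I'd pay particular attention to the addresses in the unicast agent list.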

Please let me know! Regards
