Hi all,

I've recently adopted a 2-node vSAN cluster that has an issue whereby, when a vMotion is initiated from one host to another, the host sends a burst of traffic via the management NIC. This forces the switch to shut the port down... and I obviously then lose the host in vCenter.

I've created a standard switch for the management traffic and a distributed switch for the vMotion/vSAN traffic, and I've configured the traffic types/services for each VMkernel adapter correctly. My dvUplinks are configured correctly for my distributed switch (as per the physical ports on my hosts), and the vmnics for my standard switch are likewise configured correctly to match the physical NICs on the hosts.

I'm thinking the gateway address on my vSAN/vMotion VMkernel adapters is possibly the issue. I've tried to set them to have no gateway (these are directly connected hosts and only need to reach each other for vSAN/vMotion traffic), but it won't allow me to do that.

Does anyone have any ideas, or can at least point me in a likely direction if it's not what I'm thinking above? I'm only using the default TCP/IP stack as it stands. I'm guessing best practice is to separate out the vMotion and vSAN traffic using separate TCP/IP stacks - I guess that's one way to specify the gateway addresses for the vMotion and vSAN traffic, but would I just leave them blank?

I was also wondering if this article is perhaps describing my issue? Any thoughts? https://www.yellow-bricks.com/2017/11/22/isolation-address-2-node-direct-connect-vsan-environment/

Cheers in advance,
Pete
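
P.S. In case it's useful for diagnosis, I believe the commands below would show which interface the traffic actually leaves on (the vmk numbers and IPs are just examples, substitute your own):

    # List the VMkernel adapters and the TCP/IP stack each one belongs to
    esxcli network ip interface list

    # Show the routing table for the default stack - if the other host's
    # vSAN/vMotion subnet isn't seen as directly connected, traffic follows
    # the default gateway, which sits on the management network
    esxcli network ip route ipv4 list

    # Force a ping out of a specific VMkernel adapter to prove the path
    # (vmk1 as the vSAN/vMotion adapter and 172.16.10.2 as the other
    # host's address are both examples)
    vmkping -I vmk1 172.16.10.2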
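
P.P.S. If that article is relevant, the knobs involved look to be the vSphere HA advanced options below (the value for the second one is a placeholder example, not a recommendation):

    das.usedefaultisolationaddress = false
    das.isolationaddress0 = 172.16.10.1    # placeholder - where this should point in a direct connect setup is exactly my question

Is that the right pair of options to be looking at for a 2-node direct connect setup?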