VMware Cloud Community
jveerd
Contributor

vSAN 2-node direct connect dvswitch design

Can someone elaborate on the dvSwitch design for vSAN data traffic in a 2-node direct connect scenario? The goal is to make the dvSwitch design as highly available as possible using two direct-connected cables between the hosts. For example, which load balancing policy should one use on the distributed portgroup? Should the uplinks be configured as Active and Standby?

Any help is highly appreciated.

virtualDD
Enthusiast

Hi. I've recently done this at a customer's site. The goal was to replace an existing ROBO 6.2 environment, upgrade it to 6.5, and change the networking to switchless.

What we've done:

We used standard switches because the customer was not comfortable with dvSwitches, so load balancing was set to route based on originating virtual port ID.

We had two direct 10 Gb/s connections from two different NICs. We created two VMkernel interfaces: one for vSAN and one for vMotion. vSAN uses one uplink as active and keeps the other on standby, and vice versa for vMotion.
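For reference, the mirrored active/standby failover order can also be set per portgroup from the ESXi shell, roughly like this (the portgroup and vmnic names are placeholders for whatever your environment uses):

    # vSAN portgroup: one uplink active, the other standby (names are assumptions)
    esxcli network vswitch standard portgroup policy failover set -p "vSAN" -a vmnic2 -s vmnic3
    # vMotion portgroup: the mirror image of the vSAN portgroup
    esxcli network vswitch standard portgroup policy failover set -p "vMotion" -a vmnic3 -s vmnic2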

Witness traffic goes through the management VMkernel interface. You'll have to tag it as vSAN witness traffic (only possible on the CLI as far as I know).
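The tagging looks something like this, assuming vmk0 is the management VMkernel interface:

    # Tag the management VMkernel interface for vSAN witness traffic
    esxcli vsan network ip add -i vmk0 -T=witness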

VM networking was on a separate switch, and management was on its own switch as well.

If you go with a dvSwitch, I'd say the load balancing policy doesn't matter much. You still want only one adapter active for vSAN and the other for vMotion, so there isn't really any load balancing going on.

jveerd
Contributor

Thanks for the confirmation.
