VMware Networking Community
Shamyy
Enthusiast

Connectivity between two clusters not on the same VDS

Hello community,

Please, I need your kind support with this question:

If I have two clusters, one for compute and one for control, and each cluster has its own distributed switch (compute VDS and control VDS),

should I be able to reach the VTEPs of the control cluster from the compute cluster when I run this command:

compute $ ping ++netstack=vxlan 192.168.4.19 -s 1492 -d

where 192.168.4.19 is the VTEP of the control cluster?

And should the VMs on the two clusters communicate normally?

Thanks,

Shamy

8 Replies
lhoffer
VMware Employee (Accepted Solution)

Yes, you can span multiple VDS instances with a single logical switch, as long as the clusters are in the same transport zone, and you should be able to ping all VTEPs. This is described in the transport zone section on page 24 of the VMware® NSX for vSphere Network Virtualization Design Guide ver 3.0.
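
For reference, here's a quick way to verify VTEP reachability from the ESXi shell (the interface numbering will vary, and the 1572-byte payload assumes the usual 1600-byte MTU on the VXLAN transport network, so treat this as a sketch to adapt):

compute $ esxcli network ip interface list --netstack=vxlan    # list the VTEP vmkernel interface(s) on this host
compute $ ping ++netstack=vxlan -d -s 1572 192.168.4.19        # -d sets don't-fragment; 1572 + 28 bytes of headers = 1600

If the large ping fails but a plain ping ++netstack=vxlan 192.168.4.19 succeeds, the underlay MTU is the likely culprit.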

Shamyy
Enthusiast

So why do I need to separate the compute and control clusters onto two separate VDSs?

Or why is it recommended to separate the two clusters onto two VDSs?

Thanks,

Shamy

lhoffer
VMware Employee

You don't explicitly have to, although it's pretty common to see management clusters, where the NSX Manager and Controllers may reside, that are part of a separate vCenter from the compute clusters. Using a separate VDS for that also provides more flexibility to separate administrative domains and additional scale in environments where that's needed. On the other hand, in a small deployment with a collapsed cluster, there's nothing necessarily wrong with using a single VDS for everything.

Shamyy
Enthusiast

Thanks so much, lhoffer, but how can I determine whether my environment is a small deployment or a large one?

lhoffer
VMware Employee

In general, an environment with 10 hosts or fewer is considered "small" in this context, but I'd also encourage you to read the "Cluster Configurations & Sizing" section starting on page 143 of the previously mentioned design guide, as it has a few pages' worth of info on items that may affect the standard guidance.

Shamyy
Enthusiast

Thanks a lot, lhoffer.

When we create a bridge to connect to a physical server, on which VDS should I create the distributed port group that will be used for bridging: the compute VDS or the control VDS?

Thanks,

Shamy

iforbes
Hot Shot

One thing I came across: if you decide to use a single vDS across the compute and management clusters, you'd better ensure that all of the physical uplinks in that vDS carry the VXLAN (VTEP) VLAN. You cannot specifically override which physical uplinks get VXLAN; VXLAN will arbitrarily use any physical uplink on the vDS. So if, for example, you have dedicated uplinks for hypervisor management, iSCSI connectivity, and vMotion, you need to ensure the VXLAN VLAN exists on those uplinks as well.

That's easy enough with VLAN trunking (802.1Q), but if you want physical separation, use a second vDS with two (or more) uplinks dedicated to VXLAN (VTEP).
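
To check this from an NSX-prepared host, something like the following should show the VLAN the VTEPs use and the uplinks backing each vDS (exact output fields vary by version, so this is just a sketch):

compute $ esxcli network vswitch dvs vmware vxlan list    # the VDS used for VXLAN, with its VLAN ID and MTU
compute $ esxcli network vswitch dvs vmware list          # each vDS on the host, with its active vmnic uplinks

Every vmnic uplink on the VXLAN vDS then needs that VLAN trunked to it on the corresponding physical switch port.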

lhoffer
VMware Employee

The main concern there is ensuring that the VDS you're creating the port group on exists on the host where the DLR control VM for your bridge instance will live (since that host's kernel is where the bridging actually happens). If both VDSs are available on that host, then compute is probably the right choice, assuming you don't want your physical server in your management VLAN (from an NSX perspective, though, it doesn't really matter).
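
As a quick sanity check on the host where the DLR control VM runs, you can list which VDSs that host participates in:

compute $ esxcli network vswitch dvs vmware list    # lists every distributed switch this host is a member of

If the compute VDS appears in that list on that host, the bridging port group can be created there.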
