VMware Cloud Community
greenwulf
Contributor

VSwitch configuration for Hypervisor with 6 NICs

Hello,

First of all, apologies for my non-fluent English :-s

I would like to implement software iSCSI, VMNetwork, and vMotion in my personal lab. I have two physical OpenMediaVault servers and two ESXi hypervisors. Each server has 6 NICs.

For the moment I'm undecided about the best way to configure my vSwitches on ESXi. I'm considering two configurations:

          1) A single vSwitch for iSCSI, VMNetwork, and vMotion traffic, with separate VLANs, using all the network cards

          2) One vSwitch for iSCSI, another for VMNetwork, and a last one for vMotion, using 2 NICs per virtual switch.

Each solution seems functional. For HA reasons I think the first solution would be better, but I'm not sure that using only one vSwitch for all network types is the best approach.
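
For example, option 1 would look roughly like this with esxcli (the VLAN IDs and port group names are only placeholders from my lab, not a recommendation):

     # Option 1: one vSwitch, traffic types separated by VLAN-tagged port groups
     esxcli network vswitch standard portgroup add -v vSwitch0 -p VMNetwork
     esxcli network vswitch standard portgroup set -p VMNetwork --vlan-id 10
     esxcli network vswitch standard portgroup add -v vSwitch0 -p vMotion
     esxcli network vswitch standard portgroup set -p vMotion --vlan-id 20
     esxcli network vswitch standard portgroup add -v vSwitch0 -p iSCSI
     esxcli network vswitch standard portgroup set -p iSCSI --vlan-id 30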

What do you think about this?

4 Replies
RickPohl
Contributor

Wondering pretty much the same. My setup is:

2 SFP ports - dedicated iSCSI

2 onboard gigabit RJ45 ports - DMZ

2 four-port RJ45 line cards, as follows:

     2 on each card (4 total) for the data VLAN

     1 on each card for vMotion

     1 on each card for the management VLAN

Having 8 ports on one vSwitch would make more of the bandwidth available across more networks. QoS could be implemented on the physical switch if needed.

The 8 assumes that you need to keep 2 dedicated to vMotion. I believe that is a VMware recommendation.

I am fairly certain you need a dedicated iSCSI vSwitch as well.

daphnissov
Immortal

Having 8 ports on one vSwitch would make more of the bandwidth available across more networks.

Doing so given your configuration would provide no benefit and would actually make things more complex.

2 on each card (4 total) for the data VLAN

Not sure what "data vlan" means in this context.

Generally speaking, the two "SFP ports" (assuming 10 GbE?) for iSCSI would go on either one or two different switches. Two ports for DMZ (assuming a VM network here?) would go on their own switch. Two ports for vMotion on their own switch. Two ports for Management on their own switch. And then the "data vlan" would probably be on one or two switches.
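
In esxcli terms that layout would be roughly as follows (the vmnic numbering here is an assumption; map it to your actual uplinks):

     # One standard vSwitch per traffic type, two uplinks each
     esxcli network vswitch standard add -v vSwitch-iSCSI
     esxcli network vswitch standard uplink add -v vSwitch-iSCSI -u vmnic6   # SFP port 1
     esxcli network vswitch standard uplink add -v vSwitch-iSCSI -u vmnic7   # SFP port 2
     esxcli network vswitch standard add -v vSwitch-vMotion
     esxcli network vswitch standard uplink add -v vSwitch-vMotion -u vmnic2
     esxcli network vswitch standard uplink add -v vSwitch-vMotion -u vmnic3
     # ...and likewise for the DMZ, Management, and data VLAN switches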

RickPohl
Contributor

All my connections described above are across 4 slots of a chassis switch.

DMZ and Data are both guest networks that VMs attach to.

Just curious, why would 1 vSwitch vs. 2 for guest connections make things more complicated?

I am OK keeping them separate; just thinking, I have about 8 or 10 guests in the DMZ and almost 100 in the server network. I figured 2 full NICs per host for the DMZ would be "wasted" bandwidth the other guests could use.

RAJ_RAJ
Expert

Hi,

You can configure vMotion, management, and VM traffic through 4 NICs on a single vSwitch.

On the physical switch side, the ports should be configured in trunk mode with VLAN tagging, so you can configure one VMkernel for both management and vMotion, plus multiple port groups with VLAN tags.
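
A minimal sketch of that VMkernel setup (the port group name and vmk number are assumptions; vmk0 usually already carries management):

     # One VMkernel interface tagged for both management and vMotion
     esxcli network ip interface add -i vmk1 -p Mgmt-vMotion
     esxcli network ip interface tag add -i vmk1 -t Management
     esxcli network ip interface tag add -i vmk1 -t VMotion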

Use 2 separate vSwitches for iSCSI traffic, so you will have two active paths for storage (two VMkernels).
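
For example (the vmnic/vmk numbers and the vmhba33 adapter name are assumptions; check yours with esxcli iscsi adapter list):

     # One vSwitch + one uplink + one VMkernel per iSCSI path
     esxcli network vswitch standard add -v vSwitch-iSCSI-A
     esxcli network vswitch standard uplink add -v vSwitch-iSCSI-A -u vmnic4
     esxcli network vswitch standard portgroup add -v vSwitch-iSCSI-A -p iSCSI-A
     esxcli network ip interface add -i vmk2 -p iSCSI-A
     # ...repeat for vSwitch-iSCSI-B / vmnic5 / vmk3, then enable and bind:
     esxcli iscsi software set --enabled=true
     esxcli iscsi networkportal add -A vmhba33 -n vmk2
     esxcli iscsi networkportal add -A vmhba33 -n vmk3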
