VMware Cloud Community
phsteele
Contributor

Best use of 10GbE with vSphere Enterprise

We are about to deploy a new SAN connected over 10GbE (both iSCSI and NFS). The servers (5 in total, running about 50 VMs) each have 2x 10GbE and 2x 1GbE ports. Everything will be connected through a pair of Nexus 3000 10GbE switches. We've been reading a lot of information on the best way to utilize the 10GbE connections. There are plenty of conflicting opinions on how to configure the networking, but most of the documentation recommends setting up Distributed vSwitches. Unfortunately we're not running Enterprise Plus, so that is not an option. We *could* upgrade, but at this point I want to avoid that route.

With that assumption in place, what is the best way to make use of 10GbE with standard vSwitches? I'm fine with using the 1GbE ports if necessary, but it would be great if we could put everything on just the 10GbE ports.

Recommendations/suggestions greatly appreciated!

1 Solution

Accepted Solutions
sigreaves
Contributor

As you have probably discovered already, your best bet is Enterprise Plus licensing, so that you can take advantage of Network I/O Control to carve up the 10GbE NICs and share them among iSCSI, NFS, vMotion, virtual machine and management traffic.

Given your other options, I would create two switches. One is for management and vMotion with the 2x 1GbE uplinks: configure it active/standby, with management active on one NIC and standby on the other, and the opposite for vMotion. That way each 1GbE NIC is dedicated to a single task but can still fail over should it be required.
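As a rough sketch, the management/vMotion switch above could be built from the ESXi shell like this. The vSwitch name, uplink names (vmnic2/vmnic3) and port group names are assumptions — substitute whatever your hosts actually use:

```shell
# Assumed names: vSwitch0 for mgmt/vMotion, vmnic2 and vmnic3 are the 1GbE NICs
esxcli network vswitch standard add --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic3

# Management port group: vmnic2 active, vmnic3 standby
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=Management
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=Management --active-uplinks=vmnic2 --standby-uplinks=vmnic3

# vMotion port group: the opposite order, so each NIC has one dedicated job
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=vMotion
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=vMotion --active-uplinks=vmnic3 --standby-uplinks=vmnic2
```

The same settings can of course be made in the vSphere Client under the port group's NIC teaming/failover policy.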

On the second switch, add both 10GbE uplinks and create two iSCSI VMkernel ports. Set each uplink active on one port group and unused on the other, then enable iSCSI port binding. See the pic below for an example.

[Image: iscsi portgroups.PNG — example iSCSI port group failover settings]
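The iSCSI side could be sketched like this. The switch/uplink names (vSwitch1, vmnic0/vmnic1), VMkernel names, IP addresses and the software iSCSI adapter name (vmhba33) are all placeholders — check `esxcli iscsi adapter list` for the real adapter name on your hosts:

```shell
# Assumed names: vSwitch1 with both 10GbE uplinks vmnic0 and vmnic1
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1

# Two iSCSI port groups, each pinned to a single active uplink
# (an uplink listed in neither active nor standby becomes unused)
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-A
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI-A --active-uplinks=vmnic0
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-B
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI-B --active-uplinks=vmnic1

# One VMkernel interface per port group (addresses are placeholders)
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-B
esxcli network ip interface ipv4 set --interface-name=vmk2 \
    --ipv4=192.168.10.12 --netmask=255.255.255.0 --type=static

# Bind both VMkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
```

With both ports bound you get two independent paths to the array, and the storage path selection policy (round robin, for example) handles multipathing rather than NIC teaming.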

Now create additional port groups for NFS and virtual machine traffic and assign them to this same switch. Set the NFS uplinks active/standby and the virtual machine port group to route based on originating virtual port ID. Make sure to give each traffic type a different VLAN ID.
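Continuing the same sketch, the NFS and VM port groups might look like this — the VLAN IDs, port group names and uplink names are assumptions for illustration:

```shell
# NFS port group on an assumed VLAN 20, active/standby across the 10GbE uplinks
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=NFS
esxcli network vswitch standard portgroup set --portgroup-name=NFS --vlan-id=20
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=NFS --active-uplinks=vmnic0 --standby-uplinks=vmnic1

# VM port group on an assumed VLAN 30; "route based on originating virtual
# port ID" is the standard vSwitch default, but it can be set explicitly
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=VM-Network
esxcli network vswitch standard portgroup set --portgroup-name=VM-Network --vlan-id=30
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=VM-Network --load-balancing=portid
```

Keeping each traffic type on its own VLAN also makes the Nexus-side QoS classification mentioned below much simpler.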

If you are familiar with QoS on the Cisco switches, you can prioritise iSCSI, NFS and virtual machine traffic accordingly to make sure that iSCSI traffic always has the bandwidth it needs. You can take this a step further by setting transmit rate limits on the virtual machine port group so that it doesn't consume the full 10GbE uplink.

[Image: vm portgroup.PNG — example virtual machine port group traffic shaping settings]

I suggest you take a read of this document if you are not familiar with iSCSI configuration.

https://www.vmware.com/files/pdf/iSCSI_design_deploy.pdf

Simon Greaves http://www.simongreaves.co.uk

2 Replies
phsteele
Contributor

Thanks for the suggestion. I figured I would end up with something like this. I still have some testing to do but this will help a lot.
