Replying to: MKguy (Virtuoso)

A config with 2 physical 10G NICs like that is fully supported, although not optimal in some cases.

As a general best practice (which you are probably aware of already), separate iSCSI and vMotion traffic onto isolated, non-routed VLANs, and put ESX(i) management interfaces and VMs on their own respective VLANs as well.
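On a standard vSwitch, that VLAN separation could be set up from the host CLI roughly like this (port group names and VLAN IDs here are made-up examples, not anything from your environment):

```shell
# Illustrative only: add dedicated port groups to an existing vSwitch
# and tag each with its own isolated VLAN.
esxcfg-vswitch -A "iSCSI" vSwitch0                # add iSCSI port group
esxcfg-vswitch -v 20 -p "iSCSI" vSwitch0          # tag it with VLAN 20
esxcfg-vswitch -A "vMotion" vSwitch0              # add vMotion port group
esxcfg-vswitch -v 30 -p "vMotion" vSwitch0        # tag it with VLAN 30
esxcfg-vswitch -A "Management" vSwitch0           # add management port group
esxcfg-vswitch -v 10 -p "Management" vSwitch0     # tag it with VLAN 10
```

Make sure the same VLANs are trunked to both physical uplinks on the switch side, and keep the iSCSI and vMotion VLANs non-routed.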

If you have something like HP blades with HP Virtual Connect Flex-10, you can split the physical NICs into 2x4 "sub-NICs" with their own custom speeds too. This would allow you to handle ESX-side networking as if you had two quad-port NICs.

If you have Enterprise Plus licensing with dvSwitches, you can also use the new 4.1 feature Network I/O Control (NetIOC) to soft-control the bandwidth of specific traffic types like iSCSI, vMotion, etc.:

http://www.vmware.com/files/pdf/techpaper/VMW_Netioc_BestPractices.pdf

http://geeksilver.wordpress.com/2010/07/27/vmware-vsphere-4-1-network-io-control-netioc-understandin...

If you can do neither of those and use standard vSwitches, you could also consider setting a preferred active uplink on the iSCSI vmkernel port group, with the other uplink as standby only. Then do the same, vice versa, for the other port groups.

This way, unless an uplink fails, you will always have a guaranteed, dedicated 10G connection for iSCSI, regardless of any concurrent vMotion and/or VM traffic. You can't do proper ESX-side iSCSI multipathing with this configuration though, so you are limited to 10G for iSCSI at any time (which should suffice in most cases).
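On newer ESXi releases (the relevant esxcli namespace appeared in 5.0; on 4.1 you'd set the same failover order in the vSphere Client under the port group's NIC Teaming tab), the active/standby split could look like this. Port group names and vmnic numbers are only examples:

```shell
# Illustrative only: pin iSCSI to vmnic0 with vmnic1 as standby,
# and mirror the failover order for the remaining port groups.
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name "iSCSI" --active-uplinks vmnic0 --standby-uplinks vmnic1

esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name "vMotion" --active-uplinks vmnic1 --standby-uplinks vmnic0

esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name "VM Network" --active-uplinks vmnic1 --standby-uplinks vmnic0
```

With this mirrored layout, iSCSI owns one 10G uplink and everything else shares the other, and either side falls back to the surviving NIC on failure.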

-- http://alpacapowered.wordpress.com