GreyhoundHH
Enthusiast

vSphere / ESX 4.1 and 10Gbit NIC

Hi,

one of our customers is planning a new environment with 10Gbit Ethernet. A vSphere cluster based on HP hardware will be set up and managed by us. Due to costs, it's planned to reduce the number of 10Gbit Ethernet ports per host.

My question is whether it's a good idea to run iSCSI, vMotion and the customer LAN on a trunk of two 10Gbit ports per host. Is this supported, or is it recommended to use more NICs?

Any help appreciated.

Kind regards

MKguy
Virtuoso

A config with 2 physical 10G NICs like that is fully supported, although not optimal in some cases.

As a general best practice, which you are probably aware of already: separate iSCSI and vMotion traffic onto isolated, non-routed VLANs, and put the ESX(i) management interfaces and the VMs on their own respective VLANs too.
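
If you script the host setup with PowerCLI, the port group/VLAN separation could be laid out roughly like this. Just a sketch: the server, host and vSwitch names as well as the VLAN IDs are placeholders for your environment, and the vmkernel interfaces for iSCSI/vMotion would still need to be added on top.

# Connect and pick one host (placeholder names)
Connect-VIServer -Server "vcenter.example.local"
$vmhost = Get-VMHost -Name "esx01.example.local"
$vsw = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch0"

# One port group per traffic type, each on its own VLAN
# (keep the iSCSI and vMotion VLANs isolated/non-routed)
New-VirtualPortGroup -VirtualSwitch $vsw -Name "iSCSI" -VLanId 100
New-VirtualPortGroup -VirtualSwitch $vsw -Name "vMotion" -VLanId 101
New-VirtualPortGroup -VirtualSwitch $vsw -Name "Customer-LAN" -VLanId 102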

If you have something like HP blades with HP Virtual Connect Flex-10, you can split the physical NICs into 2x4 "sub-NICs" with their own custom speeds too. This would allow you to handle the ESX-side networking as if you had 2 quad-port NICs.

If you have Enterprise Plus licensing with dvSwitches, you can also use the new 4.1 feature Network I/O Control to soft-control the bandwidth of specific traffic types like iSCSI, vMotion etc.:

http://www.vmware.com/files/pdf/techpaper/VMW_Netioc_BestPractices.pdf

http://geeksilver.wordpress.com/2010/07/27/vmware-vsphere-4-1-network-io-control-netioc-understandin...

If you can do neither of those and use standard vSwitches, you could also consider setting a preferred active uplink on the iSCSI vmkernel port group, with the other uplink as standby only. Do the same, vice versa, for the other port groups.

This way, unless an uplink fails, you will always have a guaranteed, dedicated 10G connection for iSCSI regardless of any ongoing vMotion and/or VM traffic. You can't do proper ESX-side iSCSI multipathing with this configuration though, so you are limited to 10G for iSCSI at any time (which should suffice in most cases).
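
In PowerCLI, that per-port-group override would look roughly like this. Again only a sketch: the port group name and the vmnic numbers are examples, and it assumes an existing Connect-VIServer session.

# Pin the iSCSI port group to vmnic0, keep vmnic1 as standby only
$vmhost = Get-VMHost -Name "esx01.example.local"
Get-VirtualPortGroup -VMHost $vmhost -Name "iSCSI" |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -MakeNicActive "vmnic0" -MakeNicStandby "vmnic1"
# ...and the other way around (active vmnic1, standby vmnic0) for the other port groups.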

-- http://alpacapowered.wordpress.com
MauroBonder
VMware Employee

If you have that kind of adapter, NIC teaming with two adapters is a good solution and provides redundancy. But as said, iSCSI on a dedicated adapter is the best practice.

*Please, don't forget to award points for "helpful" and/or "correct" answers.* Thank you.
MKguy
Virtuoso

You fully retain redundancy with the above example too; it's just that you designate one uplink as a dedicated standby NIC, so sharing of one 10G link for all networks only occurs in the event of an uplink failure.

-- http://alpacapowered.wordpress.com
GreyhoundHH
Enthusiast

OK, thanks for your input, that helped a lot. My main concern was whether a setup like this is supported by VMware.

I think we will only use the Enterprise edition, so we would have to go with standard vSwitches. Using the described setup with a different preferred active uplink per port group seems to be a good idea. Just to sum it up, you would recommend the following:

Portgroup: iSCSI -> preferred active uplink NIC1, standby NIC2

Portgroup: vMotion -> preferred active uplink NIC2, standby NIC1

Portgroup: customer-LAN -> preferred active uplink NIC2, standby NIC1

Kind regards

MKguy
Virtuoso

Yes, that's pretty much what I would recommend in your case.

Don't forget about the Service Console/vmkernel management network and put that primarily on NIC2 too.
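
Put together per host, the mapping you summarized (plus management) could be applied with a rough PowerCLI sketch like the one below. The port group names and vmnic numbers are again just examples (assuming vmnic0 = NIC1, vmnic1 = NIC2, and an existing Connect-VIServer session).

$vmhost = Get-VMHost -Name "esx01.example.local"

# iSCSI prefers NIC1, everything else prefers NIC2
$prefersNic1 = @("iSCSI")
$prefersNic2 = @("vMotion", "Customer-LAN", "Management Network")

foreach ($pg in $prefersNic1) {
    Get-VirtualPortGroup -VMHost $vmhost -Name $pg | Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -MakeNicActive "vmnic0" -MakeNicStandby "vmnic1"
}
foreach ($pg in $prefersNic2) {
    Get-VirtualPortGroup -VMHost $vmhost -Name $pg | Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -MakeNicActive "vmnic1" -MakeNicStandby "vmnic0"
}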

Also, be aware that vMotion can really chew up a lot of bandwidth, so if your VMs are heavily utilizing the network, performance could be affected negatively during vMotions. But 10Gbit/s is still a lot, so I doubt it will be noticeable unless DRS constantly initiates a lot of vMotions on your cluster.

-- http://alpacapowered.wordpress.com