VMware Cloud Community
Sly
Contributor

Where best to utilize VMDq / NetQueue-enabled NICs

Here is my set up:

We have HP DL380 hosts, each with 4 Broadcom NC382i NICs (vmnic0-vmnic3) and 4 Intel 82576-based NICs (vmnic4-vmnic7). I have NetQueue configured on the 4 Intel NICs using the recently released asynchronous driver.

Questions:

Given that I have the following networks to support for each host, where would you utilize the NICs with NetQueue enabled? Would you combine any of the first three networks on the same vSwitch? Would you set up any adapter teams / EtherChannels for any of the vSwitches? (One possible layout is sketched in commands after the list.)

  • Virtual Machine Network on VLAN1 on physical switch 1*
  • Service Console Network on VLAN1 on physical switch 1*
  • VMotion isolated on VLAN2 on physical switch 1*
  • iSCSI-1 isolated on physical switch 2*
  • iSCSI-2 isolated on physical switch 3*

*Physical switch 1 = two stacked Cisco 3750-24G switches; physical switches 2 and 3 are each an HP 2910al-24G.
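
For illustration, here is one possible layout expressed as ESX service console commands. This is only a sketch: the vSwitch names, the vmnic-to-network assignments and the VLAN handling are assumptions based on the description above, not a recommendation.

  # vSwitch0 on a pair of Broadcom ports: Service Console + VM Network + VMotion,
  # trunked from physical switch 1 (assumes VLAN1 is the native/untagged VLAN, so only VMotion is tagged)
  esxcfg-vswitch -a vSwitch0
  esxcfg-vswitch -L vmnic0 vSwitch0
  esxcfg-vswitch -L vmnic1 vSwitch0
  esxcfg-vswitch -A "Service Console" vSwitch0
  esxcfg-vswitch -A "VM Network" vSwitch0
  esxcfg-vswitch -A "VMotion" vSwitch0
  esxcfg-vswitch -v 2 -p "VMotion" vSwitch0

  # vSwitch1/vSwitch2 on two of the NetQueue-enabled Intel ports: one per iSCSI path, jumbo frames on vSwitch and vmknic
  esxcfg-vswitch -a vSwitch1
  esxcfg-vswitch -m 9000 vSwitch1
  esxcfg-vswitch -L vmnic4 vSwitch1
  esxcfg-vswitch -A "iSCSI-1" vSwitch1
  esxcfg-vmknic -a -i <ip1> -n <netmask> -m 9000 "iSCSI-1"
  esxcfg-vswitch -a vSwitch2
  esxcfg-vswitch -m 9000 vSwitch2
  esxcfg-vswitch -L vmnic5 vSwitch2
  esxcfg-vswitch -A "iSCSI-2" vSwitch2
  esxcfg-vmknic -a -i <ip2> -n <netmask> -m 9000 "iSCSI-2"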

I do not have a very demanding VM load sitting on top of this cluster, but that will change in 2011. That said, currently the only time I am seeing RX/TX statistics for more than queue 0 (a queue depth of 1) is on the iSCSI connection, which is limited to a queue depth of 4 since I have jumbo frames enabled (MTU=9000). For vmnics with MTU=1500 I can set the queue depth to 8.
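
In case it helps anyone comparing notes, this is roughly how the NetQueue side was configured. The igb module option shown (VMDQ=...) is a placeholder - the exact parameter names and valid values differ between async driver releases, so check the README that ships with the driver.

  # NetQueue is enabled by default in ESX/ESXi 4.x; on 3.5 it had to be switched on as a VMkernel boot option:
  #   esxcfg-advcfg -k TRUE netNetqueueEnabled
  # Set the per-port queue count through the driver module options (placeholder name), then reboot:
  esxcfg-module -s "VMDQ=8,8,8,8" igb
  # Confirm the options the module will load with:
  esxcfg-module -g igb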

1 Reply
dickieblack
Contributor

Sorry to resurrect an old thread, but I haven't been able to find any information on this anywhere else yet. I have a very similar environment and I think this question still stands. On which networks is it recommended to deploy pNICs with NetQueue enabled?

We have a small, fairly low-utilisation environment with oversubscription of vCPUs to pCores - all but a few guests at a time are idle.

Hosts are ESXi 4.1, utilising dual or quad onboard Broadcom 5708 LOMs and dual or quad Intel 82576-based NICs respectively. Storage is via iSCSI over a dedicated, physically separate LAN with jumbo frames enabled and VLAN tagging. MPIO is set up and using round-robin.
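
For completeness, the multipathing side was set up along these lines from the ESXi 4.1 Tech Support Mode shell. The vmk, vmhba and naa identifiers below are placeholders for this environment, not real values.

  # Bind both iSCSI vmkernel ports to the software iSCSI adapter:
  esxcli swiscsi nic add -n vmk1 -d vmhba33
  esxcli swiscsi nic add -n vmk2 -d vmhba33
  esxcli swiscsi nic list -d vmhba33
  # Set the path selection policy to round-robin on each device:
  esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
  esxcli nmp device list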

I have read information suggesting that Broadcom LOMs are not great at handling iSCSI traffic (without iSCSI offload enabled/licensed/purchased) and that the Intel cards are a better bet, so currently storage traffic is going via the Intel pNICs. However, I have just seen a presentation from Intel on the advantages of VMDq/NetQueue in terms of CPU resources and interrupts. Since my hosts are quite old, I suspect that reducing the pCPU cost of the guest network could be of benefit. Does anyone here have any recommendations or advice?
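
In case it is useful, here is a rough way to check whether NetQueue is actually spreading receive load before moving networks around. Whether per-queue counters show up, and what they are called, depends on the driver (bnx2 vs igb), so treat this as an example only.

  # Per-queue packet counters on a NetQueue-capable uplink (traffic only on queue 0 suggests NetQueue is not doing much):
  ethtool -S vmnic4 | grep -i queue
  # Watch interrupt/CPU cost interactively: run esxtop, press 'n' for the network screen
  # and 'c' for the CPU screen while generating guest network load.
  esxtop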

Thanks,

Richard
