Hi All ...
In the ninth part of our series, we'll go through many advanced vSphere Networking configurations, like Jumbo Frames, NetQueue and NIOC, as well as some troubleshooting guides.

Kindly concentrate well, as it's one of the longest parts in my series.

Credits:

  • Greg Ferro
  • George Crump
  • Lenin Singaravelu
  • Andrea Mauro
  • Vyenkatesh Deshpande
  • Frank Denneman
  • Duncan Epping

Now, Let's Start...





1. Jumbo Frames:

Jumbo Frames are Ethernet frames with a payload larger than the standard 1500 bytes, typically up to 9000 bytes. They are used for high-performance, high-throughput network connections, usually in iSCSI/NAS storage networks. The following articles, by Wikipedia and Greg Ferro respectively, describe Jumbo Frames in detail:

http://en.wikipedia.org/wiki/Jumbo_frame

http://etherealmind.com/ethernet-jumbo-frames-full-duplex-9000-bytes/

The following KB article lists the VM vNIC types that support Jumbo Frames:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1015556

Keep in mind that Jumbo Frames improve performance by reducing the per-frame overhead caused by headers, tags, etc. The feature must be enabled on every component in both the virtual and the physical network, i.e. vSwitches, guest OSs, VMkernel ports, physical switches, etc. If even one device in the path doesn't support the larger MTU, frames get fragmented or dropped.
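As a sketch of what the host-side part looks like on ESXi 5.x (vSwitch0, vmk1 and the storage array IP are placeholder names; adjust them to your environment):

```shell
# Set MTU 9000 on a standard vSwitch (vSwitch0 is a placeholder name)
esxcli network vswitch standard set -v vSwitch0 -m 9000

# Set MTU 9000 on the VMkernel port used for iSCSI/NFS (vmk1 is a placeholder)
esxcli network ip interface set -i vmk1 -m 9000

# Verify end-to-end: 8972 = 9000 minus 28 bytes of IP/ICMP headers;
# -d sets the "don't fragment" bit, so an undersized hop fails loudly
vmkping -d -s 8972 <storage-array-ip>
```

If the vmkping fails at 8972 bytes but works at the default size, some device along the path is still at MTU 1500.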

Lastly, the article below is an official VMware KB about troubleshooting high disk latencies when Jumbo Frames are used:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2002197



2. NetQueue:

This official documentation page from VMware describes what NetQueue is:

http://pubs.vmware.com/vsphere-55/topic/com.vmware.vsphere.networking.doc/GUID-6B708D13-145F-4DDA-BFB1-39BCC7CD0897.html

Another article, by George Crump, describes in detail what NetQueue is and how it improves network performance:

http://searchitchannel.techtarget.com/tip/Using-VMware-NetQueue-to-virtualize-high-bandwidth-servers
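NetQueue is enabled by default on ESXi. As a sketch (assuming the VMkernel setting is named netNetqueueEnabled, as in vSphere 5.x), you can check or toggle it from the ESXi shell:

```shell
# Show the current state of the NetQueue kernel setting
esxcli system settings kernel list -o netNetqueueEnabled

# Re-enable it if it was turned off (takes effect after a reboot)
esxcli system settings kernel set -s netNetqueueEnabled -v TRUE
```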

 


3. VMDirectPath:

VMDirectPath is a feature available for network adapters, PCI devices and USB controllers. It passes the device directly to a VM, without any interference from the VMkernel itself. It's used in highly utilized environments, as it removes the host's overhead from the device's performance path, so the VM gets the device's full performance capabilities.

VMDirectPath prevents vMotion, HA, DRS and Suspend/Resume, as well as taking snapshots of a VM that uses it. It also requires a full memory reservation for the VM.
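For illustration only, a VM configured for passthrough ends up with .vmx entries along these lines (the PCI address and memory size below are made-up placeholders; the full memory reservation shows up as sched.mem.min equal to the VM's memory size):

```
pciPassthru0.present = "TRUE"
pciPassthru0.id = "04:00.0"
memSize = "4096"
sched.mem.min = "4096"
```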

Below is an official blog post by Lenin Singaravelu on VMware.com describing VMDirectPath for networking:

http://blogs.vmware.com/performance/2010/12/performance-and-use-cases-of-vmware-directpath-io-for-networking.html

The following VMware KB article describes how to configure VMDirectPath:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1010789

Another overview on VMware Communities Blog by Andrea Mauro (AndreTheGiant):

https://communities.vmware.com/docs/DOC-11089

For Cisco UCS (confirmation needed): it uses vNIC adapters over a single pNIC board. When using VMDirectPath for networking, migrating the VM disconnects the vNIC, deletes it from the old host, then recreates it on the new host and attaches it to the VM again.

 


4. Network IO Control (NIOC):

An advanced feature that is used to control the amount of bandwidth consumed by each type of network traffic.

This blog post by Vyenkatesh Deshpande, officially released by VMware, describes NIOC in detail:

http://blogs.vmware.com/vsphere/2013/01/vsphere-5-1-network-io-control-nioc-architecture-old-and-new.html

Another great article by Frank Denneman takes a deep dive into NIOC:

http://frankdenneman.nl/2013/01/17/a-primer-on-network-io-control/

 


5. Shares and Limits in Network IO Control (NIOC):

Shares: used only when contention happens. Each type or category of network traffic (FT, vMotion, iSCSI, etc.) gets a share of the total available bandwidth, expressed as a percentage. Again, shares are applied only when there's contention, and per pNIC, one by one, not across all pNICs together. They're applied to outbound traffic only.

Share % of a category = (share value of that category / Σ share values of all categories) × 100

Limits: the maximum bandwidth a certain category of network traffic can reach. A limit is applied always, and per pNIC. The maximum configurable limit equals the maximum throughput of a pNIC connected to the vDS, but the actual limit can't be higher than the total maximum throughput of all pNICs in the uplink group used by that category's port group. (Revisit Vyenkatesh's article.)
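To make the shares formula concrete, here's a tiny worked example (the share values 50/100/100 are made-up numbers, not the defaults):

```shell
# Hypothetical share values configured for three traffic categories
vmotion=50
iscsi=100
vm=100

# Total shares across all categories competing on the pNIC
total=$((vmotion + iscsi + vm))        # 250

# Under contention, vMotion is entitled to 50/250 = 20% of the pNIC
vmotion_pct=$((vmotion * 100 / total))
echo "vMotion entitlement: ${vmotion_pct}%"
```

Remember this 20% is only enforced under contention; when the other categories are idle, vMotion is free to use more of the pNIC.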

 

 

6. Network IO Control (NIOC) with Traffic Shaping:

Combined together, NIOC and Traffic Shaping can control every aspect of network traffic. NIOC controls the traffic flowing out of one host to another: on the originating host, traffic enters the vDS from the VMs and leaves it through the pNICs (VMs -> vDS -> pNICs); on the destination host, it enters the vDS from the pNICs and leaves it to the VMs (pNICs -> vDS -> VMs). NIOC shares and limits act on the host's outbound path only, so the last leg, egress from the vDS to the VMs on the destination host, is left without any sort of control. That's where Egress Traffic Shaping on the port group comes in: combined with NIOC, it adds control over the peak bandwidth and the amount of traffic allowed during the peak (the burst size).
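As a rough illustrative sketch (the numbers are made up): a traffic category's effective ceiling is whichever of the two controls is tighter, the always-enforced NIOC limit or the port group's shaping peak:

```shell
# Hypothetical values in Mbit/s
nioc_limit=4000      # NIOC limit for the category (always enforced)
shaping_peak=2000    # Egress Traffic Shaping peak bandwidth on the port group

# The effective ceiling is whichever control is tighter
eff=$nioc_limit
if [ "$shaping_peak" -lt "$eff" ]; then eff=$shaping_peak; fi
echo "Effective cap: ${eff} Mbit/s"
```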

 


7. QoS Priority Tagging:

Quality of Service (QoS) priority tagging (IEEE 802.1p) is well explained in the following Wikipedia article:

http://en.wikipedia.org/wiki/IEEE_P802.1p

           


8. Private VLANs (PVLANs):

Private VLAN (PVLAN) is an extension to the VLAN standard that adds a further level of segmentation in the network. It divides a VLAN into many segments to provide more granular control over the network. Many physical switches and routers can now use PVLANs.

The following KB article is a nice concept overview about PVLAN:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1010691

The following table summarizes everything about PVLANs:

• Promiscuous PVLAN:

  - Connects to: all Community VLANs and one Isolated VLAN.
  - Guests inside connect to: each other in the same VLAN, and to all other guests in any secondary VLAN.
  - Number of VLANs: only one (same ID as the Primary VLAN in the vDS).
  - Used for: common devices like routers, firewalls, etc.

• Community PVLAN:

  - Connects to: the Promiscuous VLAN only.
  - Guests inside connect to: each other in the same VLAN only, and to all other guests in the Promiscuous VLAN.
  - Number of VLANs: many.
  - Used for: server farms.

• Isolated PVLAN:

  - Connects to: the Promiscuous VLAN only.
  - Guests inside connect to: only the guests in the Promiscuous VLAN; they can't even connect to other guests in the same Isolated VLAN.
  - Number of VLANs: only one (no need for another, as its guests are already isolated from each other).
  - Used for: critical servers which communicate only with common devices.

When using PVLANs, make sure the physical switches and routers are properly configured too, unless you're using the Cisco Nexus 1000v, which can use PVLANs without them being configured on the physical switches and routers outside.

 


9. Some Port Mirroring Considerations:

When using Port Mirroring, keep in mind the following:

1-) vSphere 5.0 doesn't support the ERSPAN protocol. That means that source and destination VMs must reside on the same host. vSphere 5.1 and later support ERSPAN, so this limitation no longer applies.

2-) When using a vDS uplink group as a destination, to route mirrored packets out to a final destination on the physical LAN, each host connected to this vDS should have a pNIC in that uplink group, so that when the source VM is vMotion'ed from one host to another, the mirror session can still find a valid destination.

 


10. NetFlow Consideration:

When using the NetFlow feature in a vDS, keep in mind that the vSphere vDS supports only NetFlow protocol version 5, while the Cisco Nexus 1000v supports versions 5 and 9.

 


11. vSphere Distributed Switch Port Activity Timeout:

By default, vCenter Server reserves the ports on a vDS for 24 hours after all port groups and management (VMkernel) ports are removed. During this period the vDS can't be deleted, nor can any host be removed from it. This default duration can be changed by editing the vpxd.cfg file on the vCenter Server:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1010913

 


12. vSphere Distributed Switch (vDS) Operations vs. Power Outage:

One of the exciting corner cases people ask about is: "How will a vDS behave in case of a total system power failure?" The following articles by Duncan Epping answer it clearly:

http://www.yellow-bricks.com/2012/02/08/distributed-vswitches-and-vcenter-outage-whats-the-deal/

http://www.yellow-bricks.com/2012/02/23/digging-deeper-into-the-vds-construct/

 


13. vSphere Standard Switch without Physical NICs:

Another corner case people ask about is: "How will a vSS without physical NICs behave?" Here's the answer:

1-) VMs on the same vSS and same port group can communicate with each other.

2-) VMs in different port groups within the same VLAN and on the same vSS can communicate, while VMs on different vSS's can't communicate with each other even if they're in the same VLAN.

3-) VMs can’t be migrated to vSS without physical NICs until some advanced setting is edited as stated by the following KB article by VMware:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003832

This isn’t needed for vSphere 5.1 U1 (confirmation needed for vSphere 5.5).

4-) VM on vSS without physical NICs can’t communicate with VMs within same VLAN on a vDS port group with physical NIC even if all VMs reside on the same host.

 

 

14. vSphere Distributed Switch without Physical NICs:

A similar question is: "How will a vDS without physical NICs behave?" The answer is the following:

1-) VMs inside a port group that reside on the same host can communicate with each other, while they can't communicate with other VMs in the same port group that reside on a different host.

2-) vMotion will be available for any VM.

3-) VMs on a port group without a physical NIC can't communicate with VMs in the same VLAN connected to a port group on a vSS with a physical NIC, even if all the VMs reside on the same host.

 


15. Virtualizing a DMZ:

Some folks consider virtualizing a DMZ network somewhat hard, but with the following technical paper from VMware, it'll be quite clear:

http://www.vmware.com/files/pdf/dmz_virtualization_vmware_infra_wp.pdf

 

Share the Knowledge ....


Previous: vSphere 5.x Notes & Tips - Part VIII:

Next: vSphere 5.x Notes & Tips - Part X: