vSphere vNetwork

  • 1.  ESXi 5 dvswitch and MTU settings

    Posted Mar 15, 2012 01:24 PM

    Edit: I'm going to post the "How To" config info here for anyone else looking for a quick guide. This was super simple but not obvious.

    After you add a host to the dvSwitch, go to Configuration > Networking > vSphere Distributed Switch > Manage Virtual Adapters.

    Here you can add virtual NICs and set the MTU for each of your port groups. Hope this helps someone else.
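    For anyone who prefers the command line, the per-vmknic MTU can also be set with esxcli from the host shell. A sketch, assuming a VMkernel interface named vmk1 (substitute your own interface name and MTU):

    ```shell
    # List VMkernel interfaces and their current MTU values
    esxcli network ip interface list

    # Set MTU 9000 on one VMkernel interface (vmk1 is just an example name)
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000
    ```

    Note that the dvSwitch's own maximum MTU is edited in vCenter (dvSwitch > Edit Settings > Advanced), since esxcli only manages host-level objects.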

    I have a question regarding MTU settings.

    In my scenario I have only one 2-port 10 GbE card in each host. Each port is a dvUplink to a separate physical switch.

    My question is: when you set the max MTU to 9000 to support jumbo frames for iSCSI dvPortGroups, does that change all your other traffic in VM Network port groups to 9000 as well? A lot of my standalone equipment, which is still 10/100/1000, is MTU 1500.

    So I guess what I'm really asking is: could a guest OS/VM running on this dvSwitch, in a dvPortGroup, still have an MTU of 1500, so that it could talk to the standalone equipment on the fabric?

    I know that jumbo frames for iSCSI need to be supported end to end, and mine are, from ESXi to switch to SAN.

    So do I need a dedicated dvSwitch for MTU 9000 and another for MTU 1500, or can the 1500 traffic coexist within the 9000 switch?

    I'm fairly new to jumbo frames, so I am trying to understand how my design should look.

    I don't have the budget to add more 10 GbE NICs to each host. I know that sounds bad, but it's where we are. Besides, it would just be wasted throughput; two 10 GbE uplinks already more than meet our needs.

    I've attached a Visio layout of how I wanted to lay this out, but once I started thinking the MTU through, I was like, uh... that won't work, will it?



  • 2.  RE: ESXi 5 dvswitch and MTU settings

    Posted Mar 15, 2012 02:03 PM

    If you have to carry all traffic on the same switches, for example storage and VM traffic, then enabling jumbo frames can be a bad idea. As you said, jumbo frames need to be enabled end to end: the vSwitches, the VMkernel interfaces, the physical switches, and your storage NICs. So if you are working with a small budget and can't add more NICs and physical switches, just keep everything at the default MTU and separate traffic with VLANs where possible.

    I hope I understood your question correctly. :)



  • 3.  RE: ESXi 5 dvswitch and MTU settings
    Best Answer

    Posted Mar 15, 2012 02:22 PM

    Robert Samples wrote:

    My question is: when you set the max MTU to 9000 to support jumbo frames for iSCSI dvPortGroups, does that change all your other traffic in VM Network port groups to 9000 as well? A lot of my standalone equipment, which is still 10/100/1000, is MTU 1500.

    So I guess what I'm really asking is: could a guest OS/VM running on this dvSwitch, in a dvPortGroup, still have an MTU of 1500, so that it could talk to the standalone equipment on the fabric?

    Enabling a large MTU on the dvSwitch will not do any damage to your VMs. It is easy to get confused about this, but the MTU setting on the switch simply allows frames larger than the standard 1518 bytes; it does not force them. The changed MTU on the switch is invisible to your VMs, and they will all continue to use the default 1500 MTU without problems.

    That is, you can have the VMkernel iSCSI adapters at MTU 9000 and VMs at MTU 1500 on the same dvSwitch.
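    As a rough illustration of why this split is attractive, here is a back-of-the-envelope calculation (standard IPv4/TCP/Ethernet header sizes assumed, no options) of how much of the wire a full-size frame actually spends on TCP payload at each MTU:

    ```python
    def tcp_efficiency(mtu: int) -> float:
        """Fraction of on-wire bytes that are TCP payload for a full-size frame."""
        ip_header = 20       # IPv4 header, no options
        tcp_header = 20      # TCP header, no options
        # Ethernet framing overhead per frame:
        # 14 (header) + 4 (FCS) + 8 (preamble) + 12 (inter-frame gap) = 38 bytes
        ethernet_overhead = 38
        payload = mtu - ip_header - tcp_header
        return payload / (mtu + ethernet_overhead)

    for mtu in (1500, 9000):
        print(f"MTU {mtu}: {tcp_efficiency(mtu):.2%} payload efficiency")
    ```

    Jumbo frames squeeze out a few percent more payload per frame and, more importantly for iSCSI, far fewer frames (and interrupts) per megabyte transferred, while MTU 1500 remains perfectly adequate for ordinary VM traffic.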



  • 4.  RE: ESXi 5 dvswitch and MTU settings

    Posted Mar 15, 2012 05:17 PM

    Thank you guys, that was what I was looking for.

    I truly think I will be OK with 9000 MTU on the VLAN/vmk port groups for iSCSI end to end. It was the VM Network VLAN 0 I was worried about, with legacy/non-VM equipment.



  • 5.  RE: ESXi 5 dvswitch and MTU settings

    Posted Mar 15, 2012 06:17 PM

    Robert Samples wrote:

    I truly think I will be OK with 9000 MTU on the VLAN/vmk port groups for iSCSI end to end. It was the VM Network VLAN 0 I was worried about, with legacy/non-VM equipment.

    That will not be a problem either. As part of establishing a TCP session, each side advertises its maximum segment size (MSS, derived from its MTU) in the SYN, and each end sends segments no larger than the peer's advertised value, so it will work even when communicating with some random older device.
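    A small sketch of that MSS logic, simplified (real stacks carry the MSS in a TCP option on the SYN, and path MTU discovery can lower the effective size further):

    ```python
    def advertised_mss(mtu: int) -> int:
        """MSS a host advertises: its MTU minus IPv4 (20) and TCP (20) headers."""
        return mtu - 40

    def effective_mss(mtu_a: int, mtu_b: int) -> int:
        """Each side sends segments no larger than the peer's advertised MSS,
        so the smaller of the two values governs the session."""
        return min(advertised_mss(mtu_a), advertised_mss(mtu_b))

    # A jumbo-frame host talking to a legacy MTU-1500 device:
    # the session simply runs with 1460-byte segments.
    print(effective_mss(9000, 1500))
    ```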