7 Replies Latest reply on Oct 14, 2010 6:35 AM by cebomholt

    Jumbo frame support within a Windows VM?

    MaxStr Enthusiast

      I have jumbo frames set up for my iSCSI connections on my SAN. The switch, ESX servers, and vCenter have it enabled for iSCSI traffic. However, I noticed that when I go into a Windows VM and look at the NIC properties, jumbo frames is disabled. Does that need to be enabled, or does ESX take care of that behind the scenes for any traffic going to the SAN via iSCSI?


Note: the Windows VMs aren't using iSCSI directly, but their volumes are located on an iSCSI LUN.
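For context, one way to check that jumbo frames are actually working end to end on the ESX side (independent of any guest setting) is a do-not-fragment vmkping from the host. A quick sketch — the SAN IP here is a placeholder for your own iSCSI target:

```shell
# From the ESX host console: send an 8972-byte payload (9000 minus
# 28 bytes of IP/ICMP headers) with the do-not-fragment flag set.
# Replace 192.168.50.10 with your SAN's iSCSI target IP.
vmkping -d -s 8972 192.168.50.10

# If this fails but a plain "vmkping 192.168.50.10" succeeds, some hop
# (vSwitch, physical switch, or SAN port) is not passing jumbo frames.
```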

        • 1. Re: Jumbo frame support within a Windows VM?
          IRIX201110141 Master

           

No, you don't need to enable it.

Some people use iSCSI directly in the guest, or run an iSCSI target from inside a VM. Those people do need jumbo frames, which is why jumbo frames are now supported in the guest when using the vmxnet3 adapter.
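For those guest-initiator cases, enabling jumbo frames inside a Windows VM takes two steps; a sketch, assuming a vmxnet3 adapter and a hypothetical connection name "iSCSI NIC":

```shell
:: Inside the Windows guest. First set the vmxnet3 "Jumbo Packet"
:: advanced property in the NIC's driver settings (Device Manager),
:: then raise the interface MTU:
netsh interface ipv4 set subinterface "iSCSI NIC" mtu=9000 store=persistent

:: Verify with a do-not-fragment ping to the SAN (8972-byte payload,
:: SAN IP is a placeholder):
ping -f -l 8972 192.168.50.10
```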


          Regards

          Joerg


'Remember: if you found this or other answers helpful, don't forget to award points by marking an answer as helpful or correct'


          • 2. Re: Jumbo frame support within a Windows VM?
            MaxStr Enthusiast

So if I have a VM that does use iSCSI for multiple drives (say E: and F: are iSCSI), would it be ideal to add a second virtual NIC, assign it to the iSCSI subnet, and enable jumbo frames? Is it possible to dedicate iSCSI traffic to one NIC and regular traffic to another NIC?

            • 3. Re: Jumbo frame support within a Windows VM?
              IRIX201110141 Master

Separating volumes like D: and E: into their own virtual disks (VMDKs) is always a best practice.

People use iSCSI directly inside the guest for various reasons:

• getting past the 2 TB disk-size limit for large file servers
• using an MPIO solution inside the VM, dating from when ESX didn't have one
• taking snapshots on the SAN
• EqualLogic customers may use the Host Integration Toolkit (HIT) to support snapshotting and replication for special applications like MS SQL or MS Exchange

               

Yes, these people add another vNIC (or several) to the VM configuration, on the same IP subnet as their iSCSI SAN. The vNIC is connected to an extra port group, which may use the same physical NICs that ESX uses for iSCSI.
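On classic ESX, that extra port group can be added to the existing iSCSI vSwitch from the service console; a sketch, assuming hypothetical names (vSwitch1 as the iSCSI vSwitch, VLAN 50, a port group called "iSCSI-Guest"):

```shell
# Add a VM port group for guest iSCSI traffic to the existing vSwitch:
esxcfg-vswitch -A "iSCSI-Guest" vSwitch1

# Tag the port group with the iSCSI VLAN:
esxcfg-vswitch -v 50 -p "iSCSI-Guest" vSwitch1

# For jumbo frames, the vSwitch itself must carry MTU 9000:
esxcfg-vswitch -m 9000 vSwitch1

# List vSwitches and port groups to confirm:
esxcfg-vswitch -l
```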


Please... stay with the VMFS/VMDK approach as long as you don't have a special reason to change it. You can snapshot a complete VM, clone it, Storage vMotion it, and do all that magic without the hassle.


              Regards

              Joerg



              • 4. Re: Jumbo frame support within a Windows VM?
                MaxStr Enthusiast

Actually, I think you're talking about adding vNICs to the ESX server. The ESX hosts already have vNICs and vSwitches dedicated to iSCSI. (That's why I was worried it was redundant, because the hosts already have an iSCSI subnet.)

                 

I am talking about adding another VMXNET virtual NIC to my Windows VM that is dedicated to iSCSI traffic only. I'm looking to do this because this will be a SQL server, and I am researching whether I should separate the database and log files onto separate LUNs. I'd like to make C: a regular VMDK, and make E: and F: separate iSCSI LUNs.


This way I can configure the LUNs to be optimal for SQL (see my post here: http://communities.vmware.com/message/1628306).

                • 5. Re: Jumbo frame support within a Windows VM?
                  IRIX201110141 Master

                   

No, I am not talking about adding NICs to the ESX hosts. I fully understood your question, and yes, you would have to add a second or third vNIC to the guest, use the MS iSCSI Initiator, configure MPIO, and so on.

But please... think about whether this is really necessary in your case. What do you think the benefit of this kind of setup would be?
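If you do go the guest-initiator route on Windows Server 2008, the command-line side looks roughly like this; a sketch with placeholder IP and IQN values, not a definitive procedure:

```shell
:: Enable Microsoft MPIO support for iSCSI devices
:: ("MSFT2005iSCSIBusType_0x9" is the Microsoft iSCSI bus-type string;
:: mpclaim -r reboots the server):
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

:: Add the SAN portal and log in to the target (replace the IP and
:: IQN with your SAN's actual values):
iscsicli QAddTargetPortal 192.168.50.10
iscsicli ListTargets
iscsicli QLoginTarget iqn.2001-05.com.equallogic:example-target
```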


                  Regards

                  Joerg




                  • 6. Re: Jumbo frame support within a Windows VM?
                    MaxStr Enthusiast

                    I was thinking that since I can make one LUN optimized for latency (logs) and another LUN optimized for data, it would improve performance. Also, I figured that if I connect the LUNs directly to the VM, it would bypass the ESX server's iSCSI vmkernel/vswitch and directly go to the SAN. This should reduce overhead by allowing Windows direct access via iSCSI.

                     

However, I may be mistaken regarding the iSCSI vSwitch... if a VM has an iSCSI connection on a separate VLAN, does it still connect via the iSCSI vSwitch on the same VLAN? Or does it connect directly to the gateway?


                    And as for MPIO, I have multipathing set up on the ESX host, so I would need to set it up on the Windows server, unless of course it uses the same connection anyway?

                    • 7. Re: Jumbo frame support within a Windows VM?
                      cebomholt Enthusiast

                       

I've done this in the past by making a VM port group on the existing iSCSI vSwitch that is being used by the vmkernel ports. This keeps all of your storage traffic together, regardless of whether it comes from the vmkernel or a guest initiator. As for performance and reducing overhead, I would tend to agree with Joerg. It really depends on what you're trying to squeeze out of your hardware. VMFS provides a lot of nice features, and I've found that the performance pickup from doing this typically isn't worth the loss of VMFS.