zenking
Hot Shot

Switches - best bang for the buck?


We have 2 Dell 5424 switches in our vSphere 5.0 setup. They're limited to a 9000-byte MTU for jumbo frames, and I'd like to find a model that can handle up to 9216. The Dell 6xxx series starts at $1,770. Can anyone recommend something less expensive that will do what I need?

Thanks.

VMware Environment: vSphere 5.0, EQ PS6000 SAN, Dell R710 hosts, quad Broadcom NICs built in, quad Intel NICs added, dedicated Dell 5424 switches w/ separate VLANs for vMotion and iSCSI.
1 Solution

Accepted Solutions
rickardnobel
Champion

zenking wrote:

Our Win2k8 R2 VMs have the E1000 vNICs, which appear to set a 9014 MTU without any way to adjust it. The vSwitches and host NICs are set to 9000, but my understanding is that they allow for the header overhead. The 5424 is stuck at 9000 and does not allow for the overhead. vmkping tests top out at 8972 (since the header is added), so the switch has to be the bottleneck unless I'm wrong about the vSwitch supporting the header.

The 9014 is just a slightly confusing way of saying 9000. The MTU is 9000, and the extra 14 bytes are what they count as the layer-two overhead (the 14-byte Ethernet header at the front of the frame), but they have also forgotten the 4-byte CRC checksum at the end of the frame.

Two things, however. First: 8972 is the expected payload size for vmkping, and it actually shows that your network is working fine. For some details on this, see: http://rickardnobel.se/troubleshoot-jumbo-frames-with-vmkping

Second: it does not matter if one part of the network is able to use, say, 9200 and another part only 9000. As long as your infrastructure (vSwitches and physical switches) supports at least 9000, it will work fine. The hosts will not blindly send frames at their maximum size; they actually perform a simple form of negotiation in the TCP session handshake.

This means that your machines will select the lowest common MTU, so the larger MTU in your case will not be needed. See this for some details:

http://rickardnobel.se/different-jumbo-frames-settings-on-the-same-vlan

My VMware blog: www.rickardnobel.se

6 Replies
joshodgers
Enthusiast

I wouldn't recommend upgrading switches just to get 9216 MTU support unless you have a specific requirement for 9216 (I'd be interested to hear it, BTW).

If I am correct in assuming that the reason for this is to improve performance for things like vMotion and IP storage, then I would recommend you review the two example architectural decisions below, which I wrote so you can understand the pros and cons of jumbo frames (if you don't already).

The same decisions also apply to vMotion traffic.

http://www.joshodgers.com/2013/05/24/example-architectural-decision-jumbo-frames-for-ip-storage-do-n...

http://www.joshodgers.com/2013/05/24/example-architectural-decision-jumbo-frames-with-ip-storage-use...

Hope that helps save you some money that you can invest in other areas of your infrastructure.

Josh Odgers | VCDX #90 | Blog: www.joshodgers.com | Twitter @josh_odgers
zenking
Hot Shot

Thanks, Josh. I just added my VM layout to my signature, but in case it doesn't show up here: the 5424s are dedicated to iSCSI and vMotion traffic, with separate VLANs for each. We enabled jumbo frames last year and saw a dramatic performance increase in backups and vMotion, but not much in the statistical jobs our users run against large data sets, so that's what I have my sights on now. Our processors support EPT and each host has 284 GB of RAM, so we almost never see CPU or memory peg on the Windows guests, even during intense statistical jobs by multiple users. These are all accessed through RDP, and all apps are installed on the VM, so all processing is done on the VM - no client-side processing.

Our Win2k8 R2 VMs have the E1000 vNICs, which appear to set a 9014 MTU without any way to adjust it. The vSwitches and host NICs are set to 9000, but my understanding is that they allow for the header overhead. The 5424 is stuck at 9000 and does not allow for the overhead. vmkping tests top out at 8972 (since the header is added), so the switch has to be the bottleneck unless I'm wrong about the vSwitch supporting the header.

Interested in more thoughts on the matter, especially if I'm off base on anything.

VMware Environment: vSphere 5.0, EQ PS6000 SAN, Dell R710 hosts, quad Broadcom NICs built in, quad Intel NICs added, dedicated Dell 5424 switches w/ separate VLANs for vMotion and iSCSI.
joshodgers
Enthusiast

The fact that you have dedicated switches for IP storage and vMotion, in my opinion, makes jumbo frames more attractive, so that's one tick in the box.

Regarding your backups: by the sound of it, they are still agent-based and therefore dependent on the network? This is something you should address, as agent-based backups (as you are probably aware) place a high load on compute, network, and storage, and eliminating this overhead from your environment has numerous advantages.

For your W2k8 VMs, I would suggest upgrading them to VMXNET3 virtual NICs, as this will improve performance (where high transaction rates or throughput are required) and will allow you to set a custom MTU.

Here is an article explaining the various virtual NIC types:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=100180...

However, if the processing is done on the VM and it has no dependency outside the VM itself, then I wouldn't expect the VM's network performance (excluding agent-based backups) to improve with VMXNET3 or jumbo frames, as network traffic shouldn't be your bottleneck.

As the environment stands right now, I would investigate and confirm that you're not suffering any fragmentation, and if you are, lower the MTU until you can upgrade your switches.

You should also ensure that the switches you buy meet your requirements. To do this, document your desired end state and work backwards to ensure the new solution will achieve those goals. You also need to consider the fact that you're using 1Gb, and how much more benefit you require from this upgrade. While it's obviously more costly, moving to 10Gb, especially when using IP storage (even when left at an MTU of 1500), will give you a huge performance increase, assuming your storage has 10Gb ports.

Josh Odgers | VCDX #90 | Blog: www.joshodgers.com | Twitter @josh_odgers
zenking
Hot Shot

Thanks, Josh. I started testing VMXNET3 last week, but I didn't know I could set the MTU. I found this info, so I'll test it out:

http://www.richard-slater.co.uk/archives/2009/10/23/change-your-mtu-under-vista-windows-7-or-windows...
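For reference, the approach in that article boils down to two netsh commands, run in an elevated prompt inside the guest. A sketch, not verified against your build: the interface name "Local Area Connection" is only an example and will differ on your VMs, and note this sets the IP-level MTU; the VMXNET3 jumbo-frame size itself is typically set in the adapter's advanced properties in Device Manager.

```bat
:: List interfaces and their current MTU values.
netsh interface ipv4 show subinterfaces

:: Set a 9000-byte MTU on the chosen interface and persist it across reboots.
netsh interface ipv4 set subinterface "Local Area Connection" mtu=9000 store=persistent
```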

Can you point me to a guide for vmxnet3 settings and recommendations? I've only been able to find a few things piecemeal.

The SAN is only 1Gb, so there's no reason to go to a 10Gb switch for now. However, switching all VM NICs to VMXNET3 will probably allow us to forget about the switch upgrade for a while.

Backups run at night, and most terminal server work is done during the day. Tripling the backup speed helped us avoid backing up during work hours.

Thanks again.

VMware Environment: vSphere 5.0, EQ PS6000 SAN, Dell R710 hosts, quad Broadcom NICs built in, quad Intel NICs added, dedicated Dell 5424 switches w/ separate VLANs for vMotion and iSCSI.
rickardnobel
Champion

zenking wrote:

Our Win2k8 R2 VMs have the E1000 vNICs, which appear to set a 9014 MTU without any way to adjust it. The vSwitches and host NICs are set to 9000, but my understanding is that they allow for the header overhead. The 5424 is stuck at 9000 and does not allow for the overhead. vmkping tests top out at 8972 (since the header is added), so the switch has to be the bottleneck unless I'm wrong about the vSwitch supporting the header.

The 9014 is just a slightly confusing way of saying 9000. The MTU is 9000, and the extra 14 bytes are what they count as the layer-two overhead (the 14-byte Ethernet header at the front of the frame), but they have also forgotten the 4-byte CRC checksum at the end of the frame.

Two things, however. First: 8972 is the expected payload size for vmkping, and it actually shows that your network is working fine. For some details on this, see: http://rickardnobel.se/troubleshoot-jumbo-frames-with-vmkping
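The arithmetic behind those numbers can be sketched in a few lines. The constants are standard header sizes, not anything specific to this environment:

```python
# Jumbo-frame arithmetic for a 9000-byte MTU, as discussed above.
MTU = 9000              # IP payload a single Ethernet frame can carry
ETH_HEADER = 14         # dest MAC (6) + src MAC (6) + EtherType (2)
ETH_CRC = 4             # frame check sequence at the end of the frame
IP_HEADER = 20          # IPv4 header without options
ICMP_HEADER = 8         # ICMP echo header used by ping/vmkping

# The "9014" reported by the E1000 driver: MTU plus the L2 header,
# but without the trailing CRC.
e1000_figure = MTU + ETH_HEADER

# Full on-the-wire frame size, CRC included.
wire_size = MTU + ETH_HEADER + ETH_CRC

# Largest payload vmkping can send unfragmented: the IP and ICMP
# headers must fit inside the 9000-byte MTU.
max_vmkping_payload = MTU - IP_HEADER - ICMP_HEADER

print(e1000_figure)         # 9014
print(wire_size)            # 9018
print(max_vmkping_payload)  # 8972
```

So a successful `vmkping -s 8972` with fragmentation disallowed is exactly what a correctly working 9000-byte path looks like.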

Second: it does not matter if one part of the network is able to use, say, 9200 and another part only 9000. As long as your infrastructure (vSwitches and physical switches) supports at least 9000, it will work fine. The hosts will not blindly send frames at their maximum size; they actually perform a simple form of negotiation in the TCP session handshake.

This means that your machines will select the lowest common MTU, so the larger MTU in your case will not be needed. See this for some details:

http://rickardnobel.se/different-jumbo-frames-settings-on-the-same-vlan
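A minimal sketch of the effect of that handshake, assuming plain IPv4/TCP with no options: each side advertises an MSS derived from its own MTU, and the lower value wins, so mixed 9216/9000 hosts converge on segments that fit the 9000-byte path.

```python
TCP_IP_OVERHEAD = 40  # 20-byte IPv4 header + 20-byte TCP header

def advertised_mss(mtu):
    """MSS a host advertises in its SYN, derived from its interface MTU."""
    return mtu - TCP_IP_OVERHEAD

def negotiated_segment_size(mtu_a, mtu_b):
    """Both sides use the smaller of the two advertised MSS values."""
    return min(advertised_mss(mtu_a), advertised_mss(mtu_b))

# A host set to 9216 talking to a host set to 9000 ends up sending
# 8960-byte segments, i.e. frames that fit a 9000-byte MTU,
# so no fragmentation occurs.
print(negotiated_segment_size(9216, 9000))  # 8960
```

This is why the 5424s' 9000-byte ceiling is not actually a bottleneck here: nothing on the network will ever try to send a larger TCP segment than the smallest endpoint allows.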

My VMware blog: www.rickardnobel.se

zenking
Hot Shot

Thanks, Rickard. I've had your vmkping test article bookmarked for a while, but I didn't find the equivalent test for Windows ping until after I had posted my original message. We're going to allocate the $$$ to something else.

VMware Environment: vSphere 5.0, EQ PS6000 SAN, Dell R710 hosts, quad Broadcom NICs built in, quad Intel NICs added, dedicated Dell 5424 switches w/ separate VLANs for vMotion and iSCSI.