ESXi 4.x vMotion MTU
Hi,
Do we have to set the vMotion MTU to 9000 end to end when using ESXi 4.x with an EMC FC SAN?
I have just joined a new company, and I noticed that the vDS is not set to MTU 9000 for the vMotion port group. I have also checked esxcfg-vmknic -l, and the MTU is set to 1500.
I have previously worked with ESX and an iSCSI SAN, where I had to set the MTU to 9000 on the iSCSI and vMotion switches as well as on the vmk interfaces.
Many thanks for any help.
Enabling jumbo frames on the vMotion network is recommended for best performance. If you are using storage over NFS/iSCSI, enabling jumbo frames there also makes sense to increase performance.
-vCloud9
How do I check whether jumbo frames are working? As I said before, the MTU is not set to 9000 on the vMotion switch or the vmk interface. However, when I run
vmkping -s 9000 anotherhostname
or
vmkping -s 9000 <vMotion IP address of another host>
both commands seem to work fine, but I was expecting them to fail.
See the output:
~ # vmkping p-dc-esx02 -s 9000
PING p-dc-esx02 (10.1.254.88): 9000 data bytes
9008 bytes from 10.1.254.88: icmp_seq=0 ttl=64 time=0.278 ms
9008 bytes from 10.1.254.88: icmp_seq=1 ttl=64 time=0.285 ms
9008 bytes from 10.1.254.88: icmp_seq=2 ttl=64 time=0.507 ms
~ # vmkping -s 9000 10.1.243.5
PING 10.1.243.5 (10.1.243.5): 9000 data bytes
9008 bytes from 10.1.243.5: icmp_seq=0 ttl=64 time=0.338 ms
9008 bytes from 10.1.243.5: icmp_seq=1 ttl=64 time=0.261 ms
9008 bytes from 10.1.243.5: icmp_seq=2 ttl=64 time=0.331 ms
So MTU 9000 seems to be working even though it has not been set on the dvSwitch or the vmk. Is this correct?
Please note: we don't have NFS/iSCSI storage, only an EMC FC SAN, so the above question relates to vMotion only.
Thanks,
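A plausible explanation for the successful pings above (an assumption, pending the rest of the thread): without a don't-fragment flag, vmkping lets the VMkernel fragment the 9000-byte ICMP payload into standard 1500-byte frames, so the ping succeeds even though no link ever carries a jumbo frame. A rough sketch of the arithmetic:

```shell
# Illustration: how a 9000-byte ICMP payload would be fragmented
# on a standard 1500-byte MTU path.
MTU=1500
IP_HDR=20
ICMP_HDR=8
PAYLOAD=9000

# IP payload bytes per 1500-byte packet; fragment offsets must be
# multiples of 8, so round down to a multiple of 8.
PER_FRAG=$(( (MTU - IP_HDR) / 8 * 8 ))        # 1480
TOTAL=$(( PAYLOAD + ICMP_HDR ))               # 9008 bytes of ICMP data
FRAGS=$(( (TOTAL + PER_FRAG - 1) / PER_FRAG ))  # ceiling division
echo "$PER_FRAG bytes per fragment, $FRAGS fragments"
```

On this arithmetic, each 9000-byte ping crosses the wire as seven ordinary fragments, which would explain replies arriving despite the 1500 MTU.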
I am a little confused. AFAIK, if jumbo frames are not enabled on the NICs/vSwitch/vmk, you shouldn't be receiving a response.
-vCloud9
That is exactly my point: it shouldn't work, but it does. This is why I posted the question, as I don't have experience with ESXi or FC SANs.
Is it possible that ESXi does not need jumbo frames for vMotion?
I remember one of the VMware docs clearly stating that, for best performance, VMware recommends enabling jumbo frames on the vMotion network as well.
-vCloud9
Hi,
It doesn't "need" jumbo frames; it's just a best practice to improve performance.
I have little experience with vmkping, but according to this KB, it should be used with the -d option too.
Does that make sense?
Regards,
elgreco81
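To illustrate the point about -d: with a don't-fragment flag set, the largest ICMP payload that fits through a 9000-byte MTU is 8972 bytes, because the 20-byte IP header and 8-byte ICMP header count against the MTU. A sketch, assuming a vmkping build that supports -d (the target address below is just the one from the thread's output):

```shell
# Largest ICMP payload that fits in a single 9000-byte frame:
# MTU minus the 20-byte IP header and the 8-byte ICMP header.
MTU=9000
PAYLOAD=$(( MTU - 20 - 8 ))
echo "$PAYLOAD"    # 8972

# On the ESXi host, the end-to-end jumbo-frame test would then be:
#   vmkping -d -s $PAYLOAD 10.1.243.5
# With -d set the packet cannot be fragmented, so the ping succeeds
# only if every hop genuinely passes jumbo frames -- the failure the
# original poster was expecting on a 1500-MTU path.
```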
Jumbo frames apply to Ethernet networks, according to this:
http://communities.vmware.com/thread/400100
Regards,
elgreco81
FC SANs don't use an MTU, so you don't need to take that into consideration; MTU and jumbo frames are Ethernet concepts. A jumbo frame is simply a frame larger than the conventional 1500 bytes. As elgreco81 said, a higher MTU is better for performance reasons, but it is not a requirement for migrations to work. It only speeds up VM migrations (copying the running instance of a VM and its active RAM from one host to another); the storage stays in place.
Hi elgreco81,
Thanks for your reply.
I have seen those links before, but my question is about vMotion, NOT storage. I understand that iSCSI storage needs MTU 9000 and that vMotion benefits from it as well; I have worked before with ESX and iSCSI storage, where I enabled MTU 9000 for both vMotion and storage.
But this discussion is about vMotion and ESXi 4.x.
vmkping -d would have been the correct test, but on my ESX 4.0 the -d option does not exist. See below link
As a result, I cannot test it. And if I just use vmkping -s 9000, it seems to work fine even though the MTU is not set on the vDS and uplink, which makes me wonder how that is possible.
Regards,
Eagle11