Do you have a link to this info? I can't find it.
I would like to know too!!
The NAS to which we back up our VMs supports jumbo frames.
Thank you, Tom
You'll find what you want here:
Just search for Jumbo.
You can't expect VMware to support everything on day one. Loads of the new features are experimental and have limited support. I'm sure they're currently focusing on the priority stuff.
I would have hoped that both 10GigE and jumbo frame support would be prioritized, for even better performance with NFS storage solutions.
Just can't see why ...
New features are often introduced with restrictions (for example, no jumbo frame support with NAS and iSCSI). This is not an indication that they won't work; it is strictly a quality assurance matter. When sufficient QA testing has shown that a feature is solid in a particular environment, it usually becomes "supported." QA is an ongoing process that we take very seriously, but we are also "in tune" with our customers' needs and requirements. I wouldn't be surprised to see these features fully supported in due time.
So if I understand you correctly, it's working (jumbo frames to NFS) and we can test it, but we're on our own until it's fully supported by VMware?
That's what I was trying to say.
Loads of new features will work even if stated as not supported.
It's like running ESX on a single CPU: it will work, but VMware will laugh at you if you have any issues.
I guess we won't see a list of 'new features that work but aren't supported just yet' coming from VMware. It would be nice, though, if someone out here could verify that jumbo frames for NFS storage are in fact working.
You can enable it on the vSwitch that carries your SC/VMkernel/NFS connection by going into the command line and using...
esxcfg-vswitch -m 9000 <vSwitchX>
Haven't seen any jumbos come over the wire yet, though....
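One quick way to check from the service console is a large vmkping toward the storage target (assuming vmkping accepts a packet-size option here; the IP below is just a placeholder for your NFS/iSCSI target):

vmkping -s 8972 10.0.0.50

8972 bytes of payload plus the IP/ICMP headers comes out to roughly 9000, so if that ping gets through, jumbos are making it across end to end.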
If you do esxcfg-vmknic -l, it shows the VMkernel interface still at 1500, so you probably need to create vmknics with a 9000 MTU for jumbo NFS to work. Since I have separate vmknics for each of my NFS links, I will test this out and let you all know how it goes.
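If recreating them turns out to be the way, a rough sketch would be deleting and re-adding the vmknic with the MTU flag (the portgroup name, IP, and netmask below are placeholders for my setup, and I'm assuming esxcfg-vmknic takes -m for the MTU the same way esxcfg-vswitch does):

esxcfg-vmknic -d "VMkernel-NFS1"
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 -m 9000 "VMkernel-NFS1"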
After doing esxcfg-vmknic/esxcfg-vswitch and setting the MTUs for both to 9000,
jumbo frames are working for both NFS and software iSCSI.
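To confirm, both listings should now show 9000 in the MTU column:

esxcfg-vswitch -l
esxcfg-vmknic -l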
esxcfg-mpath --lun vmhba32:0:8 --policy custom --custom-hba-policy any --custom-max-blocks 1024 --custom-max-commands 50 --custom-target-policy any
esxcfg-mpath --lun vmhba32:0:8 --policy custom --custom-hba-policy any --custom-max-blocks 4096 --custom-max-commands 100 --custom-target-policy any
Using either of the above across all of my LUNs makes for a party; the second one seems to perform slightly better at 100% read, 0% random.
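If anyone wants to compare before and after, the standard path listing shows which policy each LUN's paths are currently using:

esxcfg-mpath -l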
300 MB/sec read throughput balanced across 3 software iSCSI links with MTU 9000.
Software iSCSI is definitely performing a lot better throughput-wise and IO-wise, but it uses a lot more CPU.
2x hardware iSCSI 4050 HBAs: 150 MB/sec, 4,500 IOs, MTU 9000, 11% CPU
3x software iSCSI links: 300 MB/sec, 8,900 IOs, MTU 9000, 30-80% CPU
How exactly do you have your software iSCSI links configured to get it to use all three?
3x vSwitches, each with....
--1x Service Console
--1x VMkernel
Then just make sure you set the MTU to 9000 for the three vSwitches and the VMkernel interfaces and you should be rocking.
Better not to use link aggregation in this instance; you still get fault tolerance because you have three iSCSI sessions, which equals three paths to storage.
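In case it helps, here's a rough sketch of how one of the three vSwitches gets built from the command line (the vSwitch, portgroup, vmnic, and IP values are just placeholders; repeat for the other two links with their own uplinks and addresses):

esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A "SC-iSCSI1" vSwitch1
esxcfg-vswitch -A "VMkernel-iSCSI1" vSwitch1
esxcfg-vswif -a vswif1 -p "SC-iSCSI1" -i 10.0.1.11 -n 255.255.255.0
esxcfg-vmknic -a -i 10.0.1.12 -n 255.255.255.0 -m 9000 "VMkernel-iSCSI1"
esxcfg-vswitch -m 9000 vSwitch1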
Are you using round robin load balancing or just multiple LUNs?
When I try to add a second VMkernel interface on the same subnet as an existing one, I get an error message. Do you have your three VMkernel interfaces on the same subnet? If not, how is your subnetting set up?