VMware Cloud Community
sture
Enthusiast

VMware 3.5 - disappointing news!

Quoting "What's new and improved":

"NFS running over 10GigE cards is not supported in ESX Server 3.5"

and further down

"Jumbo frames are not supported for NAS and iSCSI traffic"

Anyone know if we could be expecting this to be supported in the near future?

Yps
Enthusiast

Do you have a link to this info? I can't find it.

/Magnus

tlyczko
Enthusiast

I would like to know too!!

The NAS to which we backup our VMs supports jumbo frames.

Thank you, Tom

MR-T
Immortal

You'll find what you want here:

http://www.vmware.com/support/vi3/doc/whatsnew_esx35_vc25.html

Just search for Jumbo.

You can't expect VMware to support everything on day 1. Loads of the new features are experimental and have limited support. I'm sure they're currently focusing on the priority stuff.

sture
Enthusiast

I would have hoped both 10GigE and jumbo frame support would be prioritized, for even better performance with NFS storage solutions.

Just can't see why ...

PaulLalonde
Contributor

New features are often introduced with restrictions (for example, no jumbo frame support with NAS and iSCSI). This is not an indication that it won't work, it is strictly from a quality assurance standpoint. When sufficient QA testing has shown that a feature is solid in a particular environment, it usually becomes "supported." QA is an ongoing process that we take very seriously, but we are also "in tune" with our customers' needs and requirements. I wouldn't be surprised to see these features fully supported in due time.

Regards,

Paul

sture
Enthusiast

So if I understand you correctly, it's working (jumbo frames to NFS) and we can test it, but we're on our own until it's fully supported by VMware?

MR-T
Immortal

That's what I was trying to say.

Loads of new features will work even if stated as not supported.

It's like running ESX with a single CPU: it will work, but VMware will laugh when you have any issues.

sture
Enthusiast

I guess we won't see a list of 'new features that work but aren't supported just yet' coming from VMware. It would be nice, though, if someone out here could verify that jumbo frames for NFS storage are in fact working.

aworkman
Enthusiast

You can enable it on the vSwitch that carries your SC/VMkernel/NFS connection by going to the command line and running:

esxcfg-vswitch -m 9000 <vSwitchX>

Haven't seen any jumbos come over the wire yet, though...

If you do esxcfg-vmknic -l, it shows the VMkernel port still at 1500, so you probably need to create vmknics with a 9000 MTU for jumbo NFS to work. Since I have separate vmknics for each of my NFS links, I'll test this out and let you all know how it works.
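Something like the following should recreate a VMkernel port with a 9000 MTU from the service console (the port group name and addresses are just examples, and I believe the -m flag on esxcfg-vmknic is what sets the MTU in 3.5):

esxcfg-vmknic -d "VMkernel-NFS"                                            # remove the existing 1500-MTU vmknic
esxcfg-vmknic -a -i 192.168.1.10 -n 255.255.255.0 -m 9000 "VMkernel-NFS"   # recreate it with MTU 9000
esxcfg-vmknic -l                                                           # confirm the new MTU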

aworkman
Enthusiast

After doing esxcfg-vmknic/esxcfg-vswitch and setting the MTU for both to 9000:

Jumbo frames are working for both NFS and software iSCSI.

esxcfg-mpath --lun vmhba32:0:8 --policy custom --custom-hba-policy any --custom-max-blocks 1024 --custom-max-commands 50 --custom-target-policy any

esxcfg-mpath --lun vmhba32:0:8 --policy custom --custom-hba-policy any --custom-max-blocks 4096 --custom-max-commands 100 --custom-target-policy any

Applying either of the above to all of my LUNs makes for a party; the second one seems to perform slightly better on 100% read, 0% random.

300 MB/sec read throughput balanced across 3 software iSCSI links with a 9000 MTU ;-)

Software iSCSI is definitely performing a lot better throughput-wise and IO-wise, but it uses a lot more CPU.

2x hardware iSCSI 4050 HBAs: 150 MB/sec, 4500 IOs, 9000 MTU, 11% CPU

3x software iSCSI links: 300 MB/sec, 8900 IOs, 9000 MTU, 30-80% CPU
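If you want to double-check your own setup before benchmarking, something like this should confirm the MTU actually took effect (the target IP is just an example):

esxcfg-vswitch -l              # list the vSwitch config
esxcfg-vmknic -l               # the vmknic MTU should now show 9000 instead of 1500
vmkping -s 8972 192.168.1.50   # jumbo-sized payload; add -d to prevent fragmentation if your vmkping supports it

If the large-payload vmkping comes back, jumbo frames are making it end to end.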

chucks0
Enthusiast

How exactly do you have your software iscsi links configured to get it to use all 3?

aworkman
Enthusiast

3x vSwitches with....

--1x VMKernel

--1x Service Console

on each...

Then just make sure you set the MTU to 9000 on the 3 vSwitches and the VMkernel ports and you should be rocking.

It's better not to use link aggregation in this case, and you still get fault tolerance because you have 3 iSCSI sessions, which = 3 paths to storage.
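If it helps, here's a rough sketch of how one of those three vSwitches could be built from the command line (the vSwitch name, port group names, vmnic and addresses are all examples; adjust for your environment and repeat with a different NIC and subnet for the other two):

esxcfg-vswitch -a vSwitch2                       # create the vSwitch
esxcfg-vswitch -L vmnic2 vSwitch2                # uplink one physical NIC
esxcfg-vswitch -m 9000 vSwitch2                  # set the vSwitch MTU to 9000
esxcfg-vswitch -A "iSCSI-VMkernel-1" vSwitch2    # VMkernel port group
esxcfg-vmknic -a -i 10.0.1.11 -n 255.255.255.0 -m 9000 "iSCSI-VMkernel-1"
esxcfg-vswitch -A "iSCSI-SC-1" vSwitch2          # Service Console port group (the ESX 3.x software initiator needs SC connectivity)
esxcfg-vswif -a vswif1 -p "iSCSI-SC-1" -i 10.0.1.12 -n 255.255.255.0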

chucks0
Enthusiast

Are you using Round Robin load balancing or just multiple luns?

chucks0
Enthusiast

When I try to add a second VMkernel port that is on the same subnet as an existing one, I get an error message. Do you have your three VMkernel interfaces on the same subnet? If not, how is your subnetting set up?

sture
Enthusiast

Hi,

I've been reading your other thread here, 'TOE/TSO/IOAT Functionality', where you state 'my comparison of iSCSI hba's to NFS had NFS being the winner in throughput and io/response/etc', only that it put a higher load on the CPU compared to iSCSI HBAs.

NFS is where I want to go. Have you been able to test NFS for .vmdk storage on your NetApp using 3.5 and jumbos yet? What are the results?

aworkman
Enthusiast

esxcfg-mpath --lun vmhba32:0:8 --policy custom --custom-hba-policy any --custom-max-blocks 2048 --custom-max-commands 50 --custom-target-policy any

The above is what I'm using for path balancing; it's a custom policy, essentially round-robin with some tweaking.

I did test NFS with jumbos, but decided against it because the custom multipathing on iSCSI lets a single VM get the aggregate throughput. With NFS you can't round-robin, so each VM gets single-link performance. (Not that you can't have multiple links/datastores spread across all of your VMs.)

So overall throughput/IOs turned out to be higher on iSCSI once I tweaked the multipathing options as above so that all paths were used at the same time for single-VM aggregate throughput.
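If you want to apply that policy to every LUN instead of typing it one at a time, something along these lines works from the service console (vmhba32:0:8 is the LUN from above; the other names are just placeholders, so list yours first):

esxcfg-mpath -l   # list paths so you can see the vmhbaX:T:L names
for lun in vmhba32:0:8 vmhba32:0:9 vmhba32:0:10; do
  esxcfg-mpath --lun $lun --policy custom --custom-hba-policy any --custom-max-blocks 2048 --custom-max-commands 50 --custom-target-policy any
done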

chucks0
Enthusiast


Are your three VMkernel interfaces on separate subnets? Every time I attempt to add another VMkernel port on the same IP subnet as an existing one, I get an error message. What iSCSI hardware are you using?

aworkman
Enthusiast

Yeah, for my testing I had 3x crossover cables jacked directly into my storage to eliminate any possible performance issues in our Ethernet switching fabric (our BlackDiamond does weird things sometimes). So it was 3 subnets on 3 NICs; a rough addressing sketch is below.
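Just as an illustration (these aren't my real addresses), the three VMkernel ports end up on three different subnets, something like:

esxcfg-vmknic -a -i 10.0.1.11 -n 255.255.255.0 -m 9000 "iSCSI-VMkernel-1"
esxcfg-vmknic -a -i 10.0.2.11 -n 255.255.255.0 -m 9000 "iSCSI-VMkernel-2"
esxcfg-vmknic -a -i 10.0.3.11 -n 255.255.255.0 -m 9000 "iSCSI-VMkernel-3"

which should also avoid the same-subnet error you're seeing.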

The hardware I'm using is:

2x QLogic 4050c iSCSI HBAs

Software iSCSI still performs way better throughput- and IO-wise, but it uses CPU comparable to NFS over 3 links. So I'm sticking with the hardware HBAs: slightly lower performance, but more CPU left free for my VMs.

This is a server running 8 VMs for disaster recovery.

2x quad-core Xeon 2.66 GHz processors

24 GB of memory

300 MB/sec of NFS/iSCSI throughput used anywhere between 30-80% of all 8 cores, compared to 10-20% using hardware iSCSI.

This was IOmeter run from 1 VM with 3x 4 GB test files, 100% sequential read, spread across 3 VMDKs, each on a different LUN on a NetApp FAS3020 with 45x SATA drives.

Raudi
Expert

And make sure that the physical NIC supports jumbo frames. My Intel 82573E does not...
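You can see which physical NICs and drivers the host has with:

esxcfg-nics -l   # lists the physical NICs, drivers, link state and speed

but whether a given NIC/driver combination actually does jumbo frames is something you still have to check against VMware's and the vendor's documentation.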
