Hi all,
Would like to get input from members of the forum on optimizing NFS performance for vSphere 5.1. Performance has been good but I haven't found a lot of shared knowledge around Oracle Sun ZFS Appliances and vSphere 5.1 and would like to share what I've tried as well as hear what others are doing. Here's what I've got so far:
Networking
Storage
I'd love to see Oracle add VAAI in the future as Nexenta has done. Not sure if that's on their road map.
Anyone doing anything differently or have thoughts on how to optimize Oracle Sun ZFS Appliance performance in vSphere environments? I've read all the best practice docs from Oracle but they're pretty dated (still focused on vSphere 4.x) and not as thorough as what you see from NetApp.
Thanks,
Nate
Hi Nate,
We use Nexenta with vSphere 5.1, but with 10GbE hosts as well.
You should probably change the record size of your NFS volumes to 8 or 16 KB instead of 128 KB if you need more IOPS rather than throughput. The VMware VMDK files then behave comparably to iSCSI volumes. If your VMs also access NFS shares directly, you could/should set those shares back to 128 KB.
For the SQL server you could create a separate NFS share/volume with a 64 KB record size for optimal performance. We use Oracle with Direct NFS, which is easier and faster because it uses the NFS shares directly, without a local VMware disk in between.
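On most ZFS-based systems the record-size advice above maps to a per-dataset property. A minimal sketch from the shell (the pool and dataset names here are made up; note that `recordsize` only affects newly written blocks, so set it before populating the share):

```shell
# Datastore for general VM VMDKs: small record size for random I/O.
zfs create -o recordsize=16K tank/vmware_nfs

# Dedicated share for SQL Server data files: match its 64K I/O pattern.
zfs create -o recordsize=64K tank/sql_nfs

# Shares the guests mount directly can stay at the 128K default.
zfs get recordsize tank/vmware_nfs tank/sql_nfs
```

On the Sun ZFS Storage Appliance itself the equivalent setting is exposed per share in the BUI (as a record/database size property) rather than through raw `zfs` commands, but the effect is the same.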
I would keep NFS as a datastore backing instead of iSCSI. NFS is much easier to set up and more space efficient, and I believe there is not much difference in performance.
Regards,
Dirk.
Sounds pretty well set up. I agree with Dirk about the record size. We've found that 16k is a pretty good sweet spot, as your I/O will be fairly random since the majority of your VMs seem to be J2EE web servers. You could consider having one share with a higher record size for your higher-throughput/sequential-I/O VMs and a lower record size for everything else (mixed loads).
Also, depending on what your two ZIL devices are, you could overrun them with 10 mirrors of 15k drives. I've seen it happen.
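If you want to see whether the ZIL/slog devices are actually being overrun, per-vdev I/O stats make it visible; a hedged sketch (pool name `tank` is made up):

```shell
# Report per-vdev bandwidth and ops every 5 seconds; the "logs"
# section shows how hard the dedicated ZIL devices are being pushed
# under sync-write-heavy load (NFS from ESXi is mostly sync writes).
zpool iostat -v tank 5
```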
This is a good thread to start for ZFS/VMware in general, not even that specific to Oracle ZFS.
Cheers,
Matt
Thanks - it's great to get some real world feedback on the record size. I definitely want to lean towards higher IOPS vs. throughput. I've seen conflicting advice on the subject but I put greater stock in what people who use vSphere say vs. those using Nexenta or Oracle Sun ZFS Appliances for something else. Great point about SQL Server, too.
Thanks again!
Thanks for this reply. 8k and 16k seemed a lot more reasonable to me than what we were using. It was actually someone on Oracle's forums who recommended the 128K record size (but they might not have been using it as an NFS datastore for vSphere).
After using HP LeftHand iSCSI for years, I'm definitely a bigger fan of NFS - especially for vCloud Director, where you have a ton of virtual machines on the same datastore. I like the fact that I'm getting some major space savings going this route, and I can use de-duplication if I like. In the past, I used VMware's Linked Clones in vCenter Lab Manager and found it didn't perform well. As a result, I decided not to enable Fast Provisioning in vCloud Director since it's pretty much the same thing (though it might perform better now with VMFS5 - but again, I'm a convert to NFS).
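For anyone weighing dedup the same way, it's worth checking what it actually saves before leaving it on, since the dedup table costs RAM. A hedged sketch with invented pool/share names:

```shell
# Enable dedup on the vCloud datastore share (hypothetical names).
zfs set dedup=on tank/vcloud_nfs

# Check the payoff; a dedupratio much below ~1.5x often isn't worth
# the memory the dedup table consumes.
zpool list -o name,size,alloc,dedupratio tank
```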
Regards,
Nate
Are you still using this setup? There was an NFS VAAI plugin released for the 7320s back in September 2014. We have an almost identical setup to yours, except we use iSCSI instead of NFS.