We have installed a VSA 5.5 cluster providing NFS datastores to our hosts. We use a lot of FreeBSD VMs with root on ZFS in our enterprise. I have occasion to build a new VM and thought it might be interesting to see the performance of raidz under the ZFS root, atop multiple vmdk files. So when I build the VM, I'm thinking I'd create 4ea 6GB vmdk disks. Then in the OS I'd install with ZFS root and create a raidz pool, getting roughly ~16GB of usable storage from the 4ea 6GB vmdk files. Typically we install ZFS root onto a single 16GB disk.
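For what it's worth, a raidz1 vdev spends one disk's worth of space on parity, so the raw usable figure is (n-1)*s for n disks of size s; actual usable space comes in a bit under that once ZFS metadata overhead is counted, which is roughly consistent with the ~16GB estimate above. A quick illustrative helper (the function name is just for this sketch):

```shell
# Rough raw usable capacity of a raidz1 vdev: parity consumes one
# disk's worth, so n disks of size s GB yield about (n-1)*s GB
# before ZFS metadata overhead.
raidz1_usable() { echo $(( ($1 - 1) * $2 )); }   # args: disk count, disk size in GB

raidz1_usable 4 6    # 4ea 6GB disks -> 18
raidz1_usable 8 3    # 8ea 3GB disks -> 21
```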
My initial thought is that I won't really see any performance increase, and the only benefit will be zpool's ability to self-heal (repair blocks from parity) if it detects an issue.
I'd appreciate any thoughts you may have.
So I did some testing. I ran an installation of FreeBSD 10.2 with ZFS root under three different vmdk configurations: 1ea 16GB vmdk disk, 4ea 6GB vmdk disks, and 8ea 3GB vmdk disks. After each installation I collected the disk performance charts from Veeam ONE. You can see the comparison charts here: https://www.evernote.com/l/ABvtc_OW82dD1J-xmhrqj5k_vHgrKapKCiA
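If anyone wants to reproduce a rough number inside the guest rather than from the hypervisor charts, a crude sequential-write check looks something like this (the path and sizes are arbitrary; bs is spelled out in bytes since GNU dd and FreeBSD dd disagree on the M/m suffix):

```shell
# Crude in-guest sequential write test: write 64MiB of zeros and
# confirm the resulting file size. Not a substitute for the Veeam ONE
# charts, just a quick sanity check on the pool.
dd if=/dev/zero of=/tmp/zfs_seq_test bs=1048576 count=64 2>/dev/null
ls -l /tmp/zfs_seq_test | awk '{print $5}'   # prints 67108864
rm /tmp/zfs_seq_test
```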
I think I'm going to install with 4ea 8GB vmdk disks.