Storage vMotion of thin disks from Oracle NAS (NFS) to NetApp NAS (NFS) results in thick eager-zeroed disks
I've got an issue that I haven't been able to get an answer to, from VMware or NetApp, at least not yet. I've got an Oracle 7420 that has caused us some heartache with VMware living on it, so we've decided to migrate everything to a brand new NetApp 3250 cluster. I have the VAAI plugins loaded and have provisioned the datastores using the latest NetApp VSC. The Data ONTAP version is 8.1.2P3 running in 7-Mode. The ESXi hosts are on 5.1 U1; I have not yet upgraded vCenter to 5.1 U1 from 5.1.0b. If anyone has any ideas, I'd love to hear them! (Thank you in advance!)
I've migrated a few VMs over from the 7420, and I couldn't figure out why they were chewing up so much more space on the NetApp than they had used on the Oracle system. It turns out that on some of my VMs, some of the larger 100GB-provisioned disks were converted from thin provisioned (roughly 30-60% used) to thick eager-zeroed, so a lot of space was eaten up unnecessarily. During the Storage vMotion task, I specifically selected Thin Provision (not Same format as source).
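For reference, the way I confirm what a disk really is on an NFS datastore is to compare the file's logical size against the blocks it actually consumes, from the ESXi shell (the datastore and VM names below are placeholders, not my real ones):

```shell
# From the ESXi shell; "netapp_ds01" and "myvm" are placeholder names.
cd /vmfs/volumes/netapp_ds01/myvm

# Logical (provisioned) size of the disk:
ls -lh myvm-flat.vmdk

# Blocks actually allocated on the NFS export:
du -h myvm-flat.vmdk
```

On a thin disk, du reports far less than ls; on a thick eager-zeroed disk the two figures match, because every block has been written out.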
I believe that Hole Punching/UNMAP is enabled and used by the storage system, since some of my VMs with multiple 100GB disks came over thin provisioned as expected. On other VMs with multiple 100GB disks, half the disks arrived thin and half arrived thick eager-zeroed. When I move the individual disks back to the original storage system, they return to thin provisioned format; when I bring them back to the NetApp, they go thick eager-zeroed again.
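In the meantime, the manual workaround I've been considering for a disk that landed thick is to clone it back to thin with vmkfstools while the VM is powered off (a sketch only; the paths are placeholders):

```shell
# Clone the inflated disk to a new thin-provisioned copy.
# Paths are placeholders; run this against a powered-off VM.
vmkfstools -i /vmfs/volumes/netapp_ds01/myvm/myvm.vmdk \
           -d thin \
           /vmfs/volumes/netapp_ds01/myvm/myvm-thin.vmdk

# After verifying the clone, rename it into place (or re-point the
# VM's configuration at the new descriptor) and delete the original.
```

That obviously doesn't explain why Storage vMotion inflates the disks in the first place, which is what I'm really after.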
I ran some commands to verify that my host has the VAAI plugin loaded and active.
~ # esxcli storage core claimrule list --claimrule-class=Filter
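Since the filter claim rules only cover block (SCSI) VAAI, I also check the NFS side of things on the host (the grep pattern is an assumption about the VIB name, which can vary by plugin version):

```shell
# List installed VIBs and look for the NetApp NAS VAAI plugin;
# the exact VIB name may differ between plugin releases.
esxcli software vib list | grep -i netapp

# List the NFS mounts; the Hardware Acceleration column shows
# whether VAAI offload is reported for each datastore.
esxcli storage nfs list
```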