I have an HDS VSP running microcode 70-01-62-00/00 and an ESXi host running 4.1.0, 348481. Two LUNs are presented to the host. Both LUNs are 500 GB Dynamically Provisioned and formatted as VMFS with an 8 MB block size. The VM has one 30 GB thin vmdk and two 20 GB eagerzeroedthick vmdks.
Host mode option 54 is enabled on the host groups on my ports.
Doing a storage vMotion with VAAI is slower than without VAAI, even though I can see that the copy is being offloaded to the array. One thing that strikes me is that kernel latency (KAVG/cmd) rises to 120 ms during the operation. Also, VAAI throughput is around 50 MB/s lower than with the legacy datamover.
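For anyone wanting to check the same thing on their own host: the VAAI primitives are controlled by advanced settings, and esxtop can show per-device offload counters while the svMotion runs. A sketch of how I checked it (setting and field names as they appear on ESXi 4.1; verify on your build):

```shell
# Check whether the VAAI primitives are enabled on the host (1 = on, 0 = off)
esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove   # XCOPY (Full Copy)
esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit   # WRITE_SAME (Block Zeroing)
esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking    # ATS (hardware-assisted locking)

# Watch the offload while the svMotion runs: in esxtop press 'u' for the
# device view, then 'f' and toggle the VAAI stats fields (clone/zero counters).
# Non-zero clone counters on the source/target LUNs confirm the array is
# doing the copy; DAVG/KAVG columns show where the latency sits.
esxtop
```

That is how I confirmed the operation really was offloaded while KAVG climbed.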
Have any of you guys experienced similar results?
Sounds like you aren't doing anything wrong and that you need to talk to Hitachi.
I've heard of other VSP users seeing worse performance with VAAI enabled. Unlike many of the other vendors (3PAR, HP, EMC, NetApp), Hitachi is very new to the VAAI party, and this is their 1.0 effort, so it could be buggy.
Thanks for your answer. I have opened a support case with Hitachi and will follow up in this discussion later.
I have no experience with VAAI on other arrays, but can you tell me if it's normal that KAVG rises to 100 ms?
I have a similar environment to yours. The VSPs are at the same microcode, same vSphere version and patch level, etc. What I'm seeing is an actual improvement in the overall performance of a storage vMotion. Of course, I'm coming off a Xiotech Magnitude 3D environment (i.e. NO read cache and limited write caching), so I have quite a jump in all storage-related performance.
Ultimately, it's all relative...
Seems like my VAAI issue is fixed.
Last night our HDS engineer upgraded to the newest microcode (70-02-05). The improvement over the previous microcode is huge; every operation is now faster with VAAI enabled.
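If anyone wants to reproduce the before/after comparison themselves, the full-copy primitive can be toggled per host on the fly (no reboot), so you can run the same svMotion with and without offload. A sketch, using the ESXi 4.1 advanced setting named above:

```shell
# Force the legacy (software) datamover, run the svMotion, note the
# duration and KAVG/cmd in esxtop:
esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove

# ...run the storage vMotion and record the numbers...

# Re-enable the XCOPY offload and repeat the same migration:
esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove
```

With 70-02-05 the offloaded run now comes out ahead for me.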
I guess the fix is: call HDS and get the newest microcode installed. It definitely solved my problem.