VMware Cloud Community
jakas
Contributor

VAAI and Template datastore

We have two clusters A and B.

Both clusters have a bunch of datastores from the same EMC VNX array. All hosts in both clusters have VAAI enabled.

A template datastore from the same VNX array is mapped to both clusters, and it is visible from all hosts in both clusters.
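For reference, a quick way to double-check that visibility from the ESXi shell on each host (the template datastore should show up as a mounted VMFS volume):

    # List mounted filesystems; the template datastore should appear on every host
    esxcli storage filesystem list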

We have one VM template on cluster A residing on the template datastore.

So, when we deploy a VM from the template to cluster A, it completes in less than a minute. We know ESXi is using VAAI and the array is doing an XCOPY of the template.
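For what it's worth, this is how the offload can be verified on a host (ESXi shell; the naa ID below is a placeholder for the LUN backing the template datastore):

    # 1 = hardware-accelerated move (XCOPY) is enabled host-wide
    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove

    # VAAI support reported for the specific LUN
    esxcli storage core device vaai status get -d naa.60060160xxxxxxxx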

But when we deploy the same template to cluster B, it takes 10 minutes, and CPU usage on the ESXi hosts climbs for the duration of the deployment.

Why is this so slow, and why doesn't VAAI kick in when the VM is deployed to cluster B?

Additionally, when I create a template from a VM (convert to template) in cluster B, deployments are very fast again.

So basically, even if we have the template datastore mapped across multiple clusters, we still need multiple templates, each created in its own cluster, to take advantage of VAAI/XCOPY.

Is this correct?

2 Replies
MKguy
Virtuoso

I think that's because it's always the template owner, i.e. the host on which the template is registered, that performs the actual source-copy operation.

I suspect it goes like this:

In the case where the template is registered on host A in cluster X, host A can access both the source and the destination storage volume. So it copies the files via VAAI, and then the VM is registered to its final target host.

Now when you deploy the template to a different host (what matters is storage access, not cluster membership), it's still host A, where the template is registered, that delivers the source data. However, in your case host A is unable to access the destination volume, so it communicates with the target host and copies the files over the network via NFC (Network File Copy) instead.

To confirm this, check (r)esxtop or the network performance charts; you should see a high amount of network traffic between the template owner host and the target host during the deployment.
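A minimal way to watch this live on the template owner host (resxtop works the same remotely, assuming the usual 5.x tooling):

    # Switch to the network panel and watch MbTX/s on the management vmkernel port
    esxtop    # then press 'n' for the network view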

There are also several other cases where VAAI might not be used at all, like mismatched VMFS versions/block sizes. If you upgraded a datastore in place from VMFS3 to VMFS5, the old block size was retained, while newly created VMFS5 datastores always use a uniform 1MB block size.
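You can check the VMFS version and block size of each datastore from the ESXi shell; the datastore name here is just an example:

    # Prints the VMFS version and file block size of the volume
    vmkfstools -Ph /vmfs/volumes/Template_DS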

Also see: Frequently Asked Questions for vStorage APIs for Array Integration

You should check these things too (a quick way to query the per-LUN VAAI status follows the list below):

VAAI hardware offload cannot be used when:

    The source and destination VMFS volumes have different block sizes

    The source file type is RDM and the destination file type is non-RDM (regular file)

    The source VMDK type is eagerzeroedthick and the destination VMDK type is thin

    The source or destination VMDK is any kind of sparse or hosted format

    Cloning a virtual machine that has snapshots, because this process involves consolidating the snapshots into the virtual disks of the target virtual machine

    The logical address and/or transfer length in the requested operation is not aligned to the minimum alignment required by the storage device (all datastores created with the vSphere Client are aligned automatically)

    The VMFS datastore has multiple LUNs/extents spread across different arrays
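As for the per-LUN VAAI status mentioned above this list, a simple query (ESXi 5.x shell):

    # Look at "Clone Status" per device; "unsupported" means the array offers no XCOPY offload
    esxcli storage core device vaai status get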

Note:

    When using enhanced vMotion (moving both the VM and its datastore simultaneously), the source and destination hosts must both see the source and destination datastores, or the file transfer falls back to NFC over the management network instead of VAAI.

    Hardware cloning between arrays (even if within the same VMFS datastore) does not work.

    For information on supportability with Horizon View, refer to KB Article View Composer API for Array Integration (VCAI) support in VMware Horizon View (2061611).

-- http://alpacapowered.wordpress.com
jakas
Contributor

Thanks for the reply.

What you mention regarding host access (rather than cluster membership) to the destination datastore makes sense. Let me test it out.
