I am currently having an issue with VAAI showing status "unknown" for our EMC SAN connected LUNs.
We do not see this issue with ESXi 4.1 using the same EMC SAN.
We are using FLARE 04.30.000.5.517 - which is the latest release from EMC.
We are using the same configuration on the SAN for ESXi 4.1 and 5.0.
Rescanning and rebooting has not solved the problem.
I have also done VM copies on the disks and the status has NOT changed.
Is anyone else having this issue?
Is anyone else successfully using VAAI with EMC CX4-120 and ESXi 5.0 ?
The 'unknown' status means the ESX host hasn't yet exercised all the VAAI primitives on those LUNs.
If all the tests pass, the status will change to 'supported'.
If any test fails, the VAAI status will change to 'unsupported'.
VAAI may be working fine for you; the tests that confirm it just haven't all completed yet.
For more info check out the following KB article
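You can also query the per-device VAAI status from the ESXi 5.0 CLI rather than waiting on the vSphere Client column (a sketch; the naa device ID is a placeholder for one of your LUNs):

```shell
# Show VAAI primitive status (ATS, Clone, Zero, Delete) for all devices
esxcli storage core device vaai status get

# Or inspect a single LUN; replace the naa ID with your device's identifier
esxcli storage core device vaai status get --device naa.xxxxxxxxxxxxxxxx
```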
I've read the KB article.
After cloning, it should show supported. I have done multiple clones; no change in status.
On a newly provisioned LUN in 4.1 on the same storage, the status immediately goes to SUPPORTED.
Something is not right in 5.0.
Same thing is happening to me. I have three HP DL980's (2 running 5.0) connected to an EMC VNX 5300. Both hosts running 5.0 have a status of unknown, while the 4.1 host has a status of supported. I am using the 60-day eval for the 5.0.
I opened a case with VMware and EMC. Hopefully they can solve this quickly...
Did you ever get a resolution from EMC or VMware?
The case is ongoing with VMware.
We have identified several things worth noting:
ON ESXi 5.0 -
It's as if the problem is specifically with VMFS5 on newly attached disks.
And a workaround is to format the LUN as VMFS3, then reformat it as VMFS5, as if the ESXi host "remembers" supportability??
This workaround does not qualify as a solution for me, since this is the first node in a (soon to be) 3-node cluster. I am not confident that the workaround will be possible on the remaining ESXi cluster nodes, since those LUNs will have active VMs and cannot be formatted v3 and recreated as v5.
Did you ever find a solution for this issue? We are facing the same issue with our CX4-240 and CX4-120 systems.
The case is still active with VMware. Unresolved.
Escalation engineers collected debug logging and they are working on the issue.
But the real question is: why do you care if it says 'supported' or 'unknown'? If the VAAI primitives are working (and that's pretty easy to check), then it's just a cosmetic bug.
Because I don't know if it is working. I can see that it is enabled in the config, but is enabled the same as working?
Med venlig hilsen/ Best regards
Dan Have Larsen
Senior IT Konsulent
Berendsen Textil Service A/S
Tobaksvejen 22
2860 Gladsaxe
Dir. tlf.: +45 39538562
Mobil: +45 23738562
Fax:
Email: dhl@berendsen.dk
From: Matt <communities-emailer@vmware.com>
To: Dan Have Larsen <dhl@berendsen.dk>
Date: 27-09-2011 19:06
The bug is not cosmetic.
VMware support confirmed that it is not working by looking at esxtop during a clone operation.
We found the same "Unknown" status on two newly created 3TB VMFS 5 LUNs on our VNX 5300. Creating a 2TB LUN instead allows the status to change to "Supported".
Don't know if it's a viable option 'til EMC / VMware get things sorted, but it worked for us!
I don't think it is related to the size of the VMFS; we are using 512GB-1TB LUNs and we are facing the "unsupported" issue.
Confirmed what Eric1 said... the current workaround is to create a new LUN using VMFS3 and a 1MB block size, upgrade to VMFS5, and then expand the LUN.
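Those workaround steps, as a rough command-line sketch (the device path and datastore label are placeholders; try this on a non-production LUN first):

```shell
# 1. Format the new LUN as VMFS-3 with a 1 MB block size
vmkfstools -C vmfs3 -b 1m -S TempDS /vmfs/devices/disks/naa.xxxxxxxx:1

# 2. Upgrade the datastore in place to VMFS-5
vmkfstools -T /vmfs/volumes/TempDS

# 3. Expand the datastore to the full LUN size afterwards
#    (vSphere Client: Configuration > Storage > Properties > Increase)
```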
I attempted the workaround mentioned here, and it seemed successful... However, upon a reboot of the ESXi 5.0 host, the VMFS 5 datastores created with the workaround once again showed "Unknown" in the Hardware Acceleration column. We're using a VNX 5700 on FLARE 05.31.000.5.008. I'm checking with our EMC folks to determine if the newer FLARE code has any enhancements that will help resolve this problem.
The release notes for the new FLARE code (05.31.000.5.502) mention changes to support all of the VAAI specification requirements for ESXi 5...
Please post back if VMware Support comes back with a response.
Hmm, I am on 5.31.000.5.011 on my VNX and 4.30.000.5.517 on my CX4 and don't have that problem. Did you check your ESX hosts' Storage Connectivity Status? It should be set to Failover Mode 4 (ALUA), not the default mode 1. You can check by clicking on Storage System Connectivity Status and editing the ESX host. You should also be using Round Robin or PowerPath. If it is on mode 1, you will need to run the Failover Wizard to reset the mode; pick CLARiiON Open as the type. I did it while the hosts were still running, but I would suggest doing it in maintenance mode just to be safe. Then you will probably want to rescan or reboot the host to check.
We are using Failover Mode 4 (ALUA), though we are using Fixed rather than Round Robin for our path selection. I'll make the change to Round Robin and post the results.
Oh sorry, Round Robin just helps with multipathing; it won't help with fixing the VAAI status =(. I should have mentioned that the Round Robin comment was for best-practice policy and not related to VAAI.
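For reference, switching a device's path policy to Round Robin from the ESXi 5.0 CLI looks roughly like this (the naa device ID is a placeholder; as noted above, this is a multipathing best practice, not a VAAI fix):

```shell
# Check the current path selection policy for a device
esxcli storage nmp device list --device naa.xxxxxxxx

# Switch that device to Round Robin
esxcli storage nmp device set --device naa.xxxxxxxx --psp VMW_PSP_RR
```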
I am at version 4.30.000.5.507 and seeing the issue. I will be upgrading to version 4.30.000.5.522 shortly and will report our findings after the upgrade.
From: Hpang <communities-emailer@vmware.com>
To: Dan Have Larsen <dhl@berendsen.dk>
Date: 07-10-2011 21:17