We have two ESXi 5.0 clusters, each connected to its own HP 24K SAN, and the SANs are replicated between each other. We need to transfer all of the VMs from one cluster to the other. The idea was to use SAN replication to block-copy the LUNs from one SAN to the other and present them to the cluster on the other side. We have done this, and the ESXi hosts can see the storage, but when I try to add it as a datastore the only option available is to format the LUN. It seems as if it is not reading the file system on the replicated datastore. The datastores are formatted with VMFS 5, and the SAN is back in Simplex mode after the replication. Has anyone seen this before, or is there some reason this will not work? Any ideas?
Hi,
First, it should get mounted right away. If, for some reason, it is detected as a snapshot LUN, then you should have the option of force-mounting it.
Can you share a screenshot of what you see when you try to add the datastore?
Regards
Mohammed
Here is what I get: I can see the LUN in the Add Storage wizard, but when I try to add it I only get the option to format the storage, instead of keeping the existing VMFS signature or assigning a new one. It is as if it can't read the storage as anything other than blank storage.
Hi,
This is not expected behavior if the replication completed correctly. Check whether the partition table is in place:
partedUtil getptbl /vmfs/devices/disks/naa.XXXXX
Regards
Mohammed
Hi,
Check the partition table: http://kb.vmware.com/kb/1036609
Also, I would like to know whether you have opened a Support Request with VMware?
Regards
Mohammed
Here is the output from the command. It appears to be a VMFS disk. Yes, I have opened a VMware support case, but it is set at severity level 3, so I haven't heard back yet.
/dev/disks # partedUtil getptbl "/vmfs/devices/disks/naa.60060e8005bf81000000bf8100000100"
msdos
65270 255 63 1048576000
1 128 1048562549 251 0
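For reference, the partition line in that output can be decoded field by field; a short sketch (the field meanings follow partedUtil's number/start/end/type/attribute layout):

```python
# Decode the partedUtil msdos partition line "1 128 1048562549 251 0":
# fields are partition number, start sector, end sector, type (decimal), attribute.
num, start, end, ptype, attr = 1, 128, 1048562549, 251, 0

print(hex(ptype))              # 0xfb -- the MBR partition type used for VMFS
size_bytes = (end - start + 1) * 512
print(size_bytes / 2**30)      # roughly 500 GiB, matching the LUN size
```

So the partition table itself looks healthy: a single VMFS-typed partition spanning essentially the whole device.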
Hi,
At the original site, is the datastore VMFS 3.x? I see the partition starts at sector 128; if it were VMFS 5.x, the starting sector would have to be 2048.
Also, run these commands:
hexdump -C "/vmfs/devices/disks/naa.60060e8005bf81000000bf8100000100" | grep -i 01400000 -A20 -B20
hexdump -C "/vmfs/devices/disks/naa.60060e8005bf81000000bf8100000100" | grep -i 01310000 -A20 -B20
These commands will tell us whether the VMFS header is still present on the replicated datastore. If it is, we can recover the datastore; if it is not, you will have to re-replicate it.
Regards
Mohammed
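As a side note on those two grep patterns: assuming the VMFS file-system header sits at a fixed 0x1300000-byte offset into the partition (which is consistent with the two offsets being checked here), the disk-level offset seen in the hexdump column follows directly from the partition's starting sector:

```python
SECTOR = 512
VMFS_HDR = 0x1300000  # assumed fixed offset of the VMFS header within the partition

def header_disk_offset(start_sector):
    """Byte offset of the VMFS header from the start of the raw device."""
    return start_sector * SECTOR + VMFS_HDR

# MBR-era VMFS3 partitions start at sector 128, VMFS5/GPT partitions at 2048:
print(f"{header_disk_offset(128):08x}")   # 01310000
print(f"{header_disk_offset(2048):08x}")  # 01400000
```

That is why both 01310000 and 01400000 are grepped for: they cover the VMFS3-style and VMFS5-style partition layouts respectively.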
Here is the result of that command.
/dev/disks # hexdump -C "/vmfs/devices/disks/naa.60060e8005bf81000000bf8100000100" | grep -i 01310000 -A20 -B20
001ceb80 01 cb 07 00 00 00 00 00 00 00 00 10 b1 7c 00 00 |.............|..|
001ceb90 00 00 00 00 b0 7c 00 00 00 00 00 00 10 00 00 00 |.....|..........|
001ceba0 00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
001cebb0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
001cec00 01 cc 07 00 00 00 00 00 00 00 00 10 c1 7c 00 00 |.............|..|
001cec10 00 00 00 00 c0 7c 00 00 00 00 00 00 10 00 00 00 |.....|..........|
001cec20 00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
001cec30 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
001cec80 01 cd 07 00 00 00 00 00 00 00 00 10 d1 7c 00 00 |.............|..|
001cec90 00 00 00 00 d0 7c 00 00 00 00 00 00 10 00 00 00 |.....|..........|
001ceca0 00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
001cecb0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
001ced00 01 ce 07 00 00 00 00 00 00 00 00 10 e1 7c 00 00 |.............|..|
001ced10 00 00 00 00 e0 7c 00 00 00 00 00 00 10 00 00 00 |.....|..........|
001ced20 00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
001ced30 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
01310000 5e f1 ab 2f 04 00 00 00 2e 28 d5 70 4e d9 31 e6 |^../.....(.pN.1.|
01310010 8d f6 1a 00 26 55 dc bf ed 02 00 00 00 50 52 4f |....&U.......PRO|
01310020 44 5f 41 50 50 5f 53 54 44 50 45 52 46 30 31 00 |D_APP_STDPERF01.|
01310030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
01310090 00 00 00 00 00 00 00 00 00 00 00 00 00 00 02 00 |................|
013100a0 00 00 00 10 00 00 00 00 00 28 d5 70 4e 01 00 00 |.........(.pN...|
013100b0 00 27 d5 70 4e f5 a5 e0 c4 41 08 00 26 55 dc bf |.'.pN....A..&U..|
013100c0 ed 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
013100d0 00 00 00 01 00 20 00 00 00 00 00 01 00 00 00 00 |..... ..........|
013100e0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
01410000 02 ef cd ab 00 00 30 00 00 00 00 00 a1 b6 00 00 |......0.........|
01410010 00 00 00 00 71 3e ad 77 13 04 00 00 53 c0 aa 50 |....q>.w....S..P|
01410020 02 87 22 1d 4d 6d 00 26 55 dc bf ed 01 68 29 00 |..".Mm.&U....h).|
01410030 0e 00 00 00 36 00 00 00 00 00 00 00 00 00 00 00 |....6...........|
01410040 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
01410200 01 ef cd ab 00 02 30 00 00 00 00 00 00 00 00 00 |......0.........|
01410210 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
Hi,
I see that the VMFS header is in place. Is "D_APP_STDPERF01" the datastore name? It is a VMFS 3.x volume.
Also, can you run this command:
esxcli storage vmfs snapshot list
Regards
Mohammed
The DS name is PROD_APP_STDPERF01; it looks like the hexdump just splits it across lines. There is a snapshot of that LUN listed, and it reports duplicate extents.
/dev/disks # esxcli storage vmfs snapshot list
4e70d528-8de631d9-1af6-002655dcbfed
Volume Name: PROD_APP_STDPERF01
VMFS UUID: 4e70d528-8de631d9-1af6-002655dcbfed
Can mount: false
Reason for un-mountability: duplicate extents found
Can resignature: false
Reason for non-resignaturability: duplicate extents found
Unresolved Extent Count: 2
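On the name being split across lines: hexdump -C shows 16 bytes per row, so a string that straddles a 16-byte boundary is broken up in the ASCII column. A small sketch that joins the ASCII columns back together (using two of the rows from the output above as sample input):

```python
# Recover printable strings that hexdump -C splits across 16-byte rows by
# joining the ASCII columns (the text between the trailing pipe characters).
def joined_ascii(hexdump_lines):
    parts = []
    for line in hexdump_lines:
        if "|" in line:
            parts.append(line.split("|")[1])
    return "".join(parts)

sample = [
    "01310010  8d f6 1a 00 26 55 dc bf ed 02 00 00 00 50 52 4f  |....&U.......PRO|",
    "01310020  44 5f 41 50 50 5f 53 54 44 50 45 52 46 30 31 00  |D_APP_STDPERF01.|",
]
print("PROD_APP_STDPERF01" in joined_ascii(sample))  # True
```

So the full label PROD_APP_STDPERF01 is intact on disk; only the display splits it.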
Hi Nic,
Yes, I missed it: PROD_APP_STDPERF01. I see that you mentioned "The DS's are formatted with VMFS 5 and the SAN is back in Simplex mode after the replication," but the DR-site volume looks like VMFS 3.x per the hexdump.
I would like to know whether the datastore had any extents at the primary site.
Regards
Mohammed
There is not an extent on the primary site; it is a single 500 GB LUN presented. You are correct that it was VMFS 3 on the source side; I thought it was 5. I can upgrade it and resync if that might help.
Hi,
To be honest, I am not sure; it is worth trying. Alternatively, you could replicate one more dummy LUN, formatted with VMFS 5, from primary to secondary and see whether you can mount that.
Regards
Mohammed
As an addition: we have two other datastores on this cluster that are running VMFS 5.54, and I get the same behavior with them. Here is the output of the commands we previously ran.
The datastore name is PROD_APP_STDPERF10:
/dev/disks # partedUtil getptbl "/vmfs/devices/disks/naa.60060e8005bf81000000bf8100000102"
gpt
65270 255 63 1048576000
1 2048 1048562549 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
50be696b-d246d31f-716d-002655dcbfed
Volume Name: PROD_APP_STDPERF10
VMFS UUID: 50be696b-d246d31f-716d-002655dcbfed
Can mount: false
Reason for un-mountability: duplicate extents found
Can resignature: false
Reason for non-resignaturability: duplicate extents found
Unresolved Extent Count: 2
Also, there are no extents on this DS at the source either.
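Incidentally, the unpunctuated GUID in that gpt output is the standard VMFS partition type GUID, just printed without dashes; re-punctuating it makes that easier to see:

```python
# Re-punctuate the raw type GUID from the partedUtil gpt output into
# standard GUID form; this is the well-known VMFS partition type GUID.
raw = "AA31E02A400F11DB9590000C2911D1B8"
guid = "-".join([raw[0:8], raw[8:12], raw[12:16], raw[16:20], raw[20:32]])
print(guid)  # AA31E02A-400F-11DB-9590-000C2911D1B8
```

So this GPT-partitioned replica is also correctly typed as VMFS, starting at sector 2048 as expected for VMFS 5.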
Not sure if this applies. You mentioned two clusters: are these clusters managed by the same vCenter Server? In that case I could think of an issue with duplicate datastore names, and it may be worth a try to temporarily rename the existing (mounted) datastore to see whether this allows you to mount the replica, which currently has the same name.
André
Hi,
As A.P. said, if these two clusters are in the same vCenter Server, it will not allow the datastore to mount. For testing, at the DR site, put one host into maintenance mode and take it out of the cluster. Once it is out of the cluster, it will allow the replicated datastore to mount.
Regards
Mohammed
Each cluster is managed by a separate vCenter Server.
Hi,
Have you presented the same replicated LUN twice at the DR site?
Regards
Mohammed
The storage is presented in a paired state between PROD and DR, and SRM is controlling that connection. Then there is a separate SAN replication of the LUNs from PROD to PREP; that is a one-time copy and is not in a paired state. So we do not mount the DR LUNs directly, only through SRM, but the PREP LUNs are what we are trying to mount. I hope that clears it up.
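For anyone following along: once only a single copy of a replica is visible to the host, the snapshot volume can usually be resignatured or force-mounted from the CLI. A sketch of the ESXi 5.x commands, using the label from the earlier output (verify against your own environment before running, since "duplicate extents found" blocks both operations until the duplicate presentation is removed):

```shell
# List snapshot volumes and check whether the same VMFS UUID is visible
# on more than one device
esxcli storage vmfs snapshot list
esxcli storage vmfs extent list

# With only one copy of the replica presented, either write a new signature...
esxcli storage vmfs snapshot resignature -l PROD_APP_STDPERF01

# ...or force-mount it keeping the existing signature (not while a datastore
# with the same UUID is mounted anywhere in the same vCenter Server)
esxcli storage vmfs snapshot mount -l PROD_APP_STDPERF01
```

Resignaturing writes a new UUID, so the VMs on it have to be re-registered afterwards, which is usually the right trade-off for a one-time copy like the PREP LUNs described here.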