VMware Cloud Community
jgoldschrafe
Contributor

Error: Unable to mount this VMFS volume due to duplicate extents found

Posting this here in the hopes that someone, by some stroke of luck, has a workaround while I twiddle my thumbs and wait for VMware to actually get back to me with the details of my support entitlement, which magically aren't available because "the system is down" whenever I call for support on a dead-in-the-water production system.

I seem to have gotten myself into a little bit of a situation.

[root@esx01 ~]# esxcfg-volume -l
VMFS3 UUID/label: 4bc639b4-21bbc059-d77b-e41f132c2a8a/shared-esxdev
Can mount: No (duplicate extents found)
Can resignature: No (duplicate extents found)
Extent name: naa.600a0b800047f5f20000bc934bf1480e:1     range: 0 - 1279487 (MB)
Extent name: naa.600a0b80006e09620000bc914bf14835:1     range: 511744 - 1535487 (MB)

[root@esx01 ~]# esxcfg-volume -m shared-esxdev
Mounting volume shared-esxdev
Error: Unable to mount this VMFS3 volume due to duplicate extents found

Here's the background:

Our SAN is active-passive. I created two extents and load-balanced them between the two controllers on the SAN. At some point, I grew the extents -- this has been working fine in production for months. Today, we had some issues on our production SAN and had to fail over to some LUNs at our DR site, which have a different NAA ID than the old LUNs. Now, suddenly, VMware refuses to do anything with these LUNs whatsoever, apparently because the ranges for the extents overlap (even though they really don't). I can't mount or resignature them or do anything at all with them except watch ESX complain about them.

I have production data on these volumes, and I don't have the time to manually restore my VMs from file-level backups if I can avoid it. Help?

1 Reply
zicklacekic
Contributor

I have the same error, with one exception: in my environment, both extents are the same. I mapped a volume directly to a storage server VM via RDM, and then from inside the storage VM I mapped that volume back to the ESX host again; this is so I can use a caching mechanism inside that storage server VM. At first it was working fine, but when I unmapped the volume from within the storage server VM and remapped it using another iSCSI target name, I hit the same issue as above. Now I cannot access those volumes or the data inside them, because I cannot mount them.
