VMware Cloud Community
GuyTC
Enthusiast

Duplication of Datastore Names in SRM

A question regarding SRM and storage....

[Screenshot attached: srm datastore duplication.png]

The problem is this: in the SRM view I see 4 datastores that each appear under 2 names, and I can sort of see how it's been caused. If we take the red 'team' as the example:
R1 = 0D97 (SAN_INT_PRD_04_009) and its R2 = 0D37.
However, 0D37 is also the R1 for SAN_INT_PRD_04_005.
Somewhere SRM is reading this data twice and duplicating it.
Has anyone come across this behaviour before? Should I be concerned about it (I am!)? I've done the usual things of rebooting SRM etc. but it seems persistent. The $1m question is whether it will work in a failover, and the answer to that is I haven't tried... yet... (and why isn't it using the naa to distinguish between devices instead of the hex ID??)
Cheers
Guy

VC 5, ESX 4.1.2, SRM/SRA 5.01, EMC VMAX Enginuity 5875.231.172, SYMAPI 7.4, SRDF/A

kastlr
Expert

Hi,

Are you using VMFS extents, or did you use them earlier?

Or did you rearrange the relationship between the local and remote storage devices?

Regards,

Ralf


Hope this helps a bit.
Greetings from Germany. (CEST)
GuyTC
Enthusiast

Hi Ralf

No to extents.

And no to rearranging the relationship between R1 and R2... however, these LUNs were removed from one RDF group and added to another.

Actually, Ralf - what I've written above about them being removed and added to another RDF group is not true...

kastlr
Expert

Hi,

Based on your screenshot, you're currently replicating from VMAX 2937 to VMAX 3125.

So any device listed in the local device column belongs to VMAX 2937 and is acting as an R1 device.

Therefore there's no duplicate usage of device 0D37 within the same array; instead you're using 0D37 on both arrays.

The same is true for the other devices in question (0D2F, 0D3F, 0D4F).

Could you provide the output of the following command from an ESX server accessing the R1 array?

esxcfg-scsidevs -m

This will generate a list of all VMFS datastores seen on that host.

Regards,

Ralf


Hope this helps a bit.
Greetings from Germany. (CEST)
GuyTC
Enthusiast

Yes - I realise there is no duplication of the storage itself and that the devices exist on different arrays. The problem is the names appearing twice in SRM. ESX and VC see the datastore names without any problem, and the EMC VSI confirms that everything is configured correctly in the underlying storage. SRM is getting confused by the identical hex IDs and duplicating this confusion in its output.

Output of the esxcfg-scsidevs command:

~ # esxcfg-scsidevs -m
naa.60000970000292602937533030443937:1     /vmfs/devices/disks/naa.60000970000292602937533030443937:1 4ff1b934-11caec98-e74e-9c8e9921383a  0  SAN_INT_PRD_04_009
naa.60000970000292602937533030443846:1     /vmfs/devices/disks/naa.60000970000292602937533030443846:1 4ff1b932-73bdee66-2609-9c8e992159ca  0  SAN_INT_PRD_04_008
naa.60000970000292602937533030443046:1     /vmfs/devices/disks/naa.60000970000292602937533030443046:1 4fbfa35e-da23c26a-ab64-9c8e99212c9a  0  SAN_INT_PRD_04_007
naa.60000970000292602937533030443137:1     /vmfs/devices/disks/naa.60000970000292602937533030443137:1 4fbfa347-e2db49ae-6b7e-9c8e99212c9a  0  SAN_INT_PRD_04_006
naa.60000970000292602937533030443246:1     /vmfs/devices/disks/naa.60000970000292602937533030443246:1 4fb3a5b7-c2634578-1c89-9c8e99212c9a  0  SAN_INT_PRD_04_004
naa.60000970000292602937533030443146:1     /vmfs/devices/disks/naa.60000970000292602937533030443146:1 4fb3a592-bce38778-247d-9c8e99212c9a  0  SAN_INT_PRD_04_002
naa.60000970000292602937533030433443:1     /vmfs/devices/disks/naa.60000970000292602937533030433443:1 4fb3a57f-c87542e0-4fbb-9c8e99212c9a  0  SAN_INT_PRD_04_001
naa.60000970000292602937533030454146:1     /vmfs/devices/disks/naa.60000970000292602937533030454146:1 4fe20363-d7756fb4-3819-9c8e992159ca  0  SAN_INT_PRD_04_013
naa.60000970000292602937533030454137:1     /vmfs/devices/disks/naa.60000970000292602937533030454137:1 4fe20363-e8569eca-086c-9c8e992159ca  0  SAN_INT_PRD_04_012
naa.60000970000292602937533030443237:1     /vmfs/devices/disks/naa.60000970000292602937533030443237:1 4fb3a5a4-e31b3b1a-4658-9c8e99212c9a  0  SAN_INT_PRD_04_003
naa.60000970000292602937533030443337:1     /vmfs/devices/disks/naa.60000970000292602937533030443337:1 4fb3a5c9-66e45f1a-fca8-9c8e99212c9a  0  SAN_INT_PRD_04_005
naa.60000970000292602937533030443946:1     /vmfs/devices/disks/naa.60000970000292602937533030443946:1 4fe20364-fbec191a-0e16-9c8e992159ca  0  SAN_INT_PRD_04_010
naa.60000970000292602937533030444137:1     /vmfs/devices/disks/naa.60000970000292602937533030444137:1 4fe20364-0f5a1542-eebd-9c8e992159ca  0  SAN_INT_PRD_04_011
naa.60000970000292602937533030433441:1     /vmfs/devices/disks/naa.60000970000292602937533030433441:1 4fd5ce5b-1996e5a0-2408-9c8e99212c9a  0  SRM Placeholder Prd04 VM
naa.600508b1001c70a97200dd904835f9f5:3     /vmfs/devices/disks/naa.600508b1001c70a97200dd904835f9f5:3 4f61db01-740d0e6e-b71b-9c8e99213839  0  ukesxhostv014_Local
naa.60000970000292602937533031304437:1     /vmfs/devices/disks/naa.60000970000292602937533031304437:1 4ff44e6e-91854b17-6731-001f290d1a2c  0  CDCW_MIG_SDC01_VC101_01
naa.60000970000292602937533031304446:1     /vmfs/devices/disks/naa.60000970000292602937533031304446:1 4ff44e6e-91854b17-6731-001f290d1a2c  1  CDCW_MIG_SDC01_VC101_01
naa.60000970000292602937533031304237:1     /vmfs/devices/disks/naa.60000970000292602937533031304237:1 4ff44c7e-d20d9c83-0dfa-001a4bae6720  0  CDCW_MIG_SDC01_VC103_01
naa.60000970000292602937533031304246:1     /vmfs/devices/disks/naa.60000970000292602937533031304246:1 4ff44c7e-d20d9c83-0dfa-001a4bae6720  1  CDCW_MIG_SDC01_VC103_01
naa.60000970000292602937533031304337:1     /vmfs/devices/disks/naa.60000970000292602937533031304337:1 4ff44fd8-78c5b63b-bc33-001f2957c41c  0  CDCW_MIG_SDC02_VC201_01
naa.60000970000292602937533031304346:1     /vmfs/devices/disks/naa.60000970000292602937533031304346:1 4ff44fd8-78c5b63b-bc33-001f2957c41c  1  CDCW_MIG_SDC02_VC201_01
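
For reference, the device number does appear to be recoverable from these naa IDs, which is why the naa would be unambiguous where the four-digit hex ID is not: the array serial sits in the middle of the identifier and the tail looks like the device number encoded as ASCII hex. Below is a minimal sketch based on my own assumptions (not anything SRM or the SRA documents): the listing above is saved to a file called scsidevs.txt, and the naa layout really is as just described.

#!/usr/bin/env python3
# Sketch: map each naa ID from "esxcfg-scsidevs -m" back to its VMAX device number.
# Assumptions: the listing above is saved as scsidevs.txt; characters 17-20 of the
# naa carry the last four digits of the array serial, and the final ten hex
# characters are the device number in ASCII hex ("...3030443937" -> "00D97" -> 0D97).
with open("scsidevs.txt") as listing:
    for line in listing:
        fields = line.split(None, 4)
        if len(fields) < 5 or not fields[0].startswith("naa.600009"):
            continue                              # skip local / non-Symmetrix devices
        hexid = fields[0][4:].split(":")[0]       # strip "naa." and ":<partition>"
        serial_tail = hexid[16:20]                # "2937" for every local (R1) device here
        device = bytes.fromhex(hexid[-10:]).decode("ascii").lstrip("0").zfill(4)
        print("{:<26} array ..{}  device {}".format(fields[4].strip(), serial_tail, device))

Run against the listing above, that should report SAN_INT_PRD_04_009 as device 0D97 and SAN_INT_PRD_04_005 as device 0D37, both on array ..2937, which matches the pairings described in the first post.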
kastlr
Expert

Hi,

I assume you already tried rescanning/refreshing your SRA providers.

Regards,

Ralf


Hope this helps a bit.
Greetings from Germany. (CEST)
GuyTC
Enthusiast

I have rescanned and rebooted SRM, Ralf - no change though. I've just had VMware support looking at it - they checked the log files etc. and say it all looks fine, but they can't explain it... they're blaming the storage provider 🙂 I have a call open with EMC as well. I'm pretty confident SRM will work OK, as the configuration is showing as correct at the storage level - the only bits I don't like are the blue and black boxes, which show a mismatch between the PG (RDFG54) and the Consistency Group (RDFG52). I'll update the post if I find out an answer...

GuyTC
Enthusiast

According to VMware "it is an SRM issue and is explicitly a UI defect. To that end, the fix will be rolled into the next major release of SRM."
