VMware Cloud Community
didierhurpet
Contributor

SRM 5, EMC Celerra, NFS: array pairing fails when using non-standard devices

Hello,

I am having some trouble with SRM 5 and SRA 5 on Celerra in NFS mode.

When I use standard devices like cge0, fxg0 or fxg1, everything goes well: pairing, test, failover, reprotect, failback...

Screenshot: OK NFS on fxg.PNG

When I build Fail-Safe Network (FSN) devices (or link aggregation, ...), array pairing does not work, so nothing further is possible.

Screenshot: Storage port not founds with FSN Devices.PNG
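
For context, this is roughly how those non-standard devices are built on the Control Station. It is only a sketch from memory: the names trk0/fsn0, the ports, the interface name and the addresses are made-up examples, and the exact -option syntax should be checked against the DART 6.x man pages for your release.

# LACP link aggregation across two ports (trk0 is an example name)
server_sysconfig server_2 -virtual -name trk0 -create trk -option "device=cge0,cge1 protocol=lacp"

# Fail-Safe Network device on top of a port and the trunk (fsn0 is an example name)
server_sysconfig server_2 -virtual -name fsn0 -create fsn -option "primary=cge2 device=cge2,trk0"

# The NFS interface is then created on the virtual device instead of on cge0/fxg0
server_ifconfig server_2 -create -Device fsn0 -name nfs_if -protocol IP 192.168.10.50 255.255.255.0 192.168.10.255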

Has anyone got this working? Or seen it fail?

Didier

AaronUDI
Contributor

Did you ever get this working? I can get the two Celerras paired, but when I go to create a protection group the datastores don't show up. They do show up under array managers; it's just that when I select SAN-based replication in the protection group wizard, nothing shows up.

Thanks.

didierhurpet
Contributor

Aaron,

Yes, I got it fully working on NFS with standard devices (protection group, recovery plan, test mode, cleanup, ...). Nice.

When trying to use NFS on LA or FSN devices, I couldn't pair the arrays, so I couldn't get to your step.

When trying to use iSCSI, I can pair the arrays, see the datastores, and create the protection group and recovery plan, but I can't run test mode: it fails with something like "unable to create snapshot at the recovery site".

Do you have any VMs in your replicated datastores?

If there is no VM inside, the datastore does not show up.

Looking at your problem, I think that is the most likely reason.

AaronUDI
Contributor

Thanks.

Everything is NFS, no iSCSI. I have 20 or so VMs across 3 NFS shares that are being replicated via IP Replicator from an NS20 to an NS-120, and DART is 6.x. It worked fine in SRM 4.1; I'm doing a clean install and rebuild for SRM 5.
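
For reference, this is how I check the replication sessions themselves from the source Control Station (the session name below is just an example, yours will differ):

# List all Replicator V2 sessions known to this Control Station
nas_replicate -list

# Detail for one session: state, interconnect, last sync time
nas_replicate -info nfs_ds01_rep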

didierhurpet
Contributor

Aaron,

For now, EMC tells me I have to upgrade DART to at least .41 (which seems to be required or certified for vSphere 5).

I'll have it done on Monday and will post the result here.

Which adapter do you use with the SRA: 5.0.1 (September) or 5.0.2 (October)?

AaronUDI
Contributor

I have already upgraded to 6.0.43-0, and I had to use SRA 5.0.1 (September); 5.0.2 would not install.

cfresqui
Enthusiast

How did you get the Celerra SRA for VMware SRM 5?

Cesar Fresqui, VCAP-DCA. Please don't forget to award points if the answer was helpful or solved the problem. Thank you/Obrigado
didierhurpet
Contributor

@cfresqui

You have to use the VNX one.

See http://www.vmware.com/pdf/srm_storage_partners.pdf, page 9 onwards.

The SRA can be downloaded from VMware, but you need an adapter that you have to download from EMC Powerlink: powerlink.emc.com.

When you install the SRA, it tells you where to download the adapter.

AaronUDI
Contributor

I just spoke to VMware Support. It looks like the SRA is seeing the NFS file systems but cannot see the datastores on those file systems. They suggested that I add the NFS IP to the array config. We did that, but then got an error: "Internal error: std::exception 'class Dr::Xml::XmlValidateException' "Expected element 'StoragePorts' not found"." This seems to be the root of my issues. I'm going to open a case with EMC and see what they say.
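
In case anyone hits the same thing, this is how I cross-check that the IP given to the SRA is really the one serving the replicated file systems (server_2 is just the Data Mover name in my setup):

# Interfaces configured on the Data Mover - the NFS IP given to the SRA should appear here
server_ifconfig server_2 -all

# NFS exports served by that Data Mover - the replicated file systems should be listed here
server_export server_2 -Protocol nfs -list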

Also, we see this error in the logs:

--> 2011-10-24 13:55:16,678 [com.emc.util.net.SSHConnection]: 172.x.x.x Command result: stdout: (Error 2100: usage: nas_cel
--> -list
--> | -delete { <cel_name> | id=<cel_id> } [-Force]
--> | -info { <cel_name> | id=<cel_id> }
--> | -update { <cel_name> | id=<cel_id> }
--> | -modify { <cel_name> | id=<cel_id> }
--> { [-passphrase <passphrase>] [-name <new_name>] [-ip <ipaddr>] }
--> | -create <cel_name> -ip <ipaddr> -passphrase <passphrase>
--> | -interconnect <interconnect_options>

didierhurpet
Contributor

Aaron,

I thought you had already added the NFS IP address, so I didn't write about it. Sorry.

Now you are facing the same problem as I am.

My SRM case has been open at EMC for two weeks, with no answer so far.

I think you are using a 'home-made' device such as FSN or LA, and not a standard device such as cge0, ...

For me, the 2100 error on the nas_cel command is not the cause, as I also get this error when I'm using a standard NFS device (fxg0) and everything works fine.
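
If you want to rule the trust relationship out anyway, the checks I run from the Control Station are simply the two commands below (the names in the output will of course differ on your arrays):

# Control Station to Control Station trust relationship
nas_cel -list

# Data Mover interconnects used by Replicator V2
nas_cel -interconnect -list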

Didier

bneumann
Contributor

Thanks for the feedback. The issue you mentioned with FSN devices is now resolved in the VNX Replicator Enabler and will soon be posted to Powerlink. I will be sure to let you know when it has been posted. If you need further details, don't hesitate to send me a PM with your contact info.

-Bryan
