VMware Cloud Community
rgv75
Enthusiast

SRM 5 and NetApp SRA 2.0 with NFS Datastores

I thought I'd share this information for those contemplating upgrading to or already running SRM 5 with the NetApp Storage Replication Adapter (SRA) 2.0.  If you are using NFS datastores in your vSphere 5 cluster, there is currently a bug in the NetApp SRA 2.0 whereby NFS datastores mounted using an FQDN are not recognized by the SRA.  It's documented in NetApp Bug Detail 574021 (http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=574021).  The workaround is to mount your NFS datastores using the IP address.  In our case, this is not an acceptable workaround, so our deployment of vSphere 5 will be delayed until the bug is fixed in the SRA.
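To see which datastores would hit this bug, it helps to check whether each NFS mount was made by IP or by hostname. Here's a minimal sketch in Python (the mount list, hostnames, and addresses are hypothetical examples, not values from any real environment):

```python
import ipaddress

def is_ip_mount(remote_host: str) -> bool:
    """Return True if the NFS mount uses a literal IP address rather than a hostname."""
    try:
        ipaddress.ip_address(remote_host)
        return True
    except ValueError:
        return False

# Hypothetical NFS mounts as configured on an ESXi host: (remote host, export path)
mounts = [
    ("filer01.example.com", "/vol/nfs_ds01"),  # FQDN mount: not seen by SRA 2.0
    ("192.168.10.21", "/vol/nfs_ds02"),        # IP mount: recognized by the SRA
]

for host, export in mounts:
    status = "OK for SRA 2.0" if is_ip_mount(host) else "hits bug 574021 (remount by IP)"
    print(f"{host}:{export} -> {status}")
```

Any datastore flagged as an FQDN mount would need to be remounted by IP for the SRA to recognize it.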

NetApp Technical Support said the new release of the NetApp SRA is scheduled for September 2012, but I have to confirm this with Sales because folks in the NetApp TSC are not privy to release dates (or so I was told by the NetApp engineer working with me on this issue).   Good luck.

10 Replies
EdWilts
Expert

I'm responding to the only post in this thread several months later, but I'm wondering if I'm hitting the same bug - the SRA simply doesn't work for me, and NetApp has been working on the issue for over a week, but it seems like they're just stabbing in the dark.  I can get the Array Manager to see the SnapMirrored volumes but can't create a protection group.

Have you received any updates from NetApp on this issue?  I've passed the bug number you identified on to NetApp but haven't heard back from them yet.

.../Ed (VCP4, VCP5)
rgv75
Enthusiast

It sounds like you're hitting the same bug.  To verify it, create an NFS connection to your filer using the IP address of the filer instead of its hostname, then try creating a Protection Group again.  To date, we have not received a bug fix for the issue.  We decided to use the IP address workaround so we could migrate to vSphere 5, and we are now running vSphere 5 in production in parallel with NFS and SRA 2.0 without any issues.

When I found Bug ID 574021 and reported it in my ticket, I got no response either.  Eventually they responded and said there is no fix for the issue, but the workaround is to use the IP address.  If you get a fix, please do me a favor and update this post.  Thanks.

Good luck.

EdWilts
Expert

Yup, I hit the bug.  I created a test datastore, mounted it via IP, and I can now create protection groups.  I'm escalating within NetApp to get a formal response to the bug.

NetApp tech support never mentioned the bug or suggested that I mount by IP.  Thankfully I found your post, which gave me the info I needed to move forward.

.../Ed (VCP4, VCP5)
rgv75
Enthusiast

I spoke to the VAR/integrator that resells our VMware and NetApp licenses.  They said the majority of their clients connect NFS datastores using the IP address.  For this reason, we decided to move forward and use the IP address.  I'm not sure if this works in your environment, but it doesn't look like NetApp will be rewriting the SRA anytime soon to fix the FQDN issue.

vSitta
Contributor

NetApp and VMware released SRA 2.0.1 on 10 September 2012. Today we're installing it and will test whether it works with NFS datastores mounted via FQDN.

Hope it'll be helpful!

Davide Sitta (vSitta)

www.sinergy.it

rgv75
Enthusiast

vSitta, please let us know if SRA 2.0.1 fixes the FQDN issue and post back your results.  Thanks!

EdWilts
Expert

I installed the new SRA this morning.  It did not help and NetApp has been given the new support bundle.  I'm waiting to hear back.

There's nothing in the release notes that says that this issue is fixed.  About the only thing I see is that they now deliver a 64-bit application for SRM 5.1 and a 32-bit application for SRM 5.0.  Perhaps they fixed it in the 64-bit version but not the 32-bit version?  It's hard to tell, because it's not in the release notes for that either.

None of the Perl modules in the sra\ONTAP folder have been modified since June.

Here's the complete content of the "What's new" section:

What is new in this release

Storage Replication Adapter 2.0 supports all the workflows in Site Recovery Manager 5.0, such as discovery of arrays and replicated devices, test recovery, recovery (planned migration and disaster recovery), and reprotect. It also supports automatic creation of igroup and volume filtering.

NetApp Disaster Recovery Adapter is now known as NetApp FAS/V-Series Storage Replication Adapter.

For more information about Storage Replication Adapter 2.0, see the NetApp FAS/V-Series Storage Replication Adapter 2.0 Installation and Administration Guide.

.../Ed (VCP4, VCP5)
vSitta
Contributor

Unfortunately, even with version 2.0.1 the FQDN export problem isn't solved, and the release notes say nothing about it. To be fair, with the 32-bit plug-in (I've only tried SRM 5.0.1) the errors that blocked enabling the array pairs were solved, but it was still not possible to create a protection group. SRM now logs an error in an XML table saying there is a mismatch between the source and destination datastores. Mounting the production source volumes by IP address solves the problem.
Let's say that SRA 2.0.1 doesn't fix the bug, it just makes it latent!

So the solution is:
1. Export the volumes with explicit "rw" access to the hosts in the cluster (PROD & DR).
2. Mount the volumes with the IP address of the vFiler, not the FQDN.
Then everything runs perfectly.
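As a sketch of those two steps, the snippet below builds the 7-Mode /etc/exports line and the ESXi 5 `esxcli storage nfs add` mount command an admin would use. The host names, volume name, and vFiler address are hypothetical placeholders:

```python
# Hypothetical hosts and vFiler address for the IP-mount workaround
vfiler_ip = "192.168.10.21"
volume = "/vol/nfs_ds01"
esxi_hosts = ["esx-prod-01", "esx-dr-01"]

# Step 1: /etc/exports line giving the cluster hosts explicit rw access
exports_line = f"{volume} -sec=sys,rw={':'.join(esxi_hosts)}"

# Step 2: esxcli command each ESXi host runs to mount by IP instead of FQDN
mount_cmd = f"esxcli storage nfs add -H {vfiler_ip} -s {volume} -v nfs_ds01"

print(exports_line)
print(mount_cmd)
```

The key point is that `-H` takes the vFiler's IP address rather than its DNS name, so the SRA sees the datastore.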

Davide Sitta www.sinergy.it
EdWilts
Expert

Well guys, I got it to work.  There are some caveats that are not present if you use IP addresses.

To pull this datastore in to Site Recovery, the following requirements have to be met:

1.  The 'NFS Addresses' section must contain the exact FQDN/short name ESXi uses to mount the NFS datastore.  This has to match exactly or the device will be excluded.

2.  The volume must be added to the 'volume include list' (NFS).  Again, the device will be excluded if this is not done.

The same requirements must be satisfied for the Recovery Array Manager's devices.

At this point, you should be able to run a 'refresh' on the devices section of Array Managers and see your DNS mounted NFS datastore and continue with creating a Protection group.
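The two rules above can be sketched as a quick pre-flight check before refreshing the Array Manager devices. This is a hypothetical illustration (function name, hosts, and volumes are all invented), not actual SRA code:

```python
def check_sra_filters(mounts, nfs_addresses, volume_include_list):
    """Report mounts the SRA would exclude and why.

    mounts: list of (host, volume) tuples as mounted by ESXi.
    nfs_addresses: the 'NFS Addresses' entries configured in the Array Manager.
    volume_include_list: volumes listed in the SRA's volume include list.
    """
    problems = []
    for host, volume in mounts:
        if host not in nfs_addresses:             # rule 1: exact hostname match required
            problems.append((host, volume, "host not in 'NFS Addresses'"))
        elif volume not in volume_include_list:   # rule 2: volume must be in include list
            problems.append((host, volume, "volume not in include list"))
    return problems

# Hypothetical example: one mount passes, one fails the hostname check
mounts = [("filer01.example.com", "nfs_ds01"), ("filer02", "nfs_ds02")]
for issue in check_sra_filters(mounts, {"filer01.example.com"}, {"nfs_ds01"}):
    print(issue)
```

Running a check like this against both the Protected and Recovery Array Managers should show exactly which datastore would be excluded and by which rule.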

I have requested that a BURT (NetApp bug report) be filed to reflect the fact that this behavior is radically different from using IP addresses and contradicts what the Array Manager setup screens say.  I have also told them to rethink the entire SRA setup, since ALL of the information it needs is already present in vCenter and the VSC.  In the VSC, we've already provided the controller credentials, and it knows about the vFiler relationships.  It can talk to the filer and get the SnapMirror relationships.  In vCenter, it's easy to get the list of datastores.  There is no NEW information in the NetApp SRA, yet the configuration stumps everybody, including the NetApp support reps.  Why make it hard on everybody?

.../Ed (VCP4, VCP5)
vSitta
Contributor

Hi guys!

Here we go. Another question solved: if you are using an NFS datastore mounted as a NetApp qtree, e.g.:

123.123.123.101:/vol/volname/qtreename

where 123.123.123.101 is the IP address of the filer and the mounted datastore is qtreename, remember to put only the volume name in the volume include list in the GUI of the SRA 2.0 array pairs. But also remember to verify that the volume is correctly present in the filer's snapmirror.conf.

In other words, if you are using an advanced feature (like NetApp Data Fabric Manager, alias DFM) to implement the NetApp qtree technology, don't worry about the way you mounted the NFS datastore on the ESXi hosts. Even if you mount both the qtree and the main volume, the SRA will accept the volume name (i.e. volname) and also the qtree name (qtreename).

We have also tried the two together: in the input box of the SRA array pair, enter volname,qtreename (comma-separated values). The enable function works perfectly, but remember to populate the filer's /etc/exports correctly.
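A small sketch of how you might derive the include-list entries from a qtree-style export path (the path, volume, and qtree names here are the hypothetical examples from above):

```python
def include_list_entries(export_path):
    """Split an NFS export like '/vol/volname/qtreename' into include-list parts.

    Returns (volume, qtree_or_None); per the observations above, the SRA accepts
    the volume name, the qtree name, or both comma-separated.
    """
    parts = export_path.strip("/").split("/")
    if parts[0] != "vol" or len(parts) < 2:
        raise ValueError(f"unexpected export path: {export_path}")
    volume = parts[1]
    qtree = parts[2] if len(parts) > 2 else None
    return volume, qtree

# Hypothetical qtree-mounted datastore
remote = "123.123.123.101:/vol/volname/qtreename"
host, export = remote.split(":", 1)
volume, qtree = include_list_entries(export)
entry = ",".join(x for x in (volume, qtree) if x)
print(entry)  # volname,qtreename
```

The resulting string is what you would paste into the array pair's include-list input box.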


Davide Sitta www.sinergy.it