VMware Cloud Community
MMRNOLA
Contributor

SRM 5 - VMFS/NFS and RDMs

3x vSphere 5 update 1 Hosts

SRM 5

NetApp: PROD = cluster of FAS3240s (Data ONTAP 8.0.2), DR = single FAS2040

Our virtual cluster uses NFS datastores. We hired an engineer to come in and deploy SnapManager for SharePoint and SQL. During this deployment, they set up 4 of our servers (SQL and SharePoint farm) with RDMs, and we spun up a VMFS (iSCSI) datastore to store the RDM mapping files on. Everything was fine.

Recently we started a deployment of SRM 5 and were told that RDMs are handled natively by SRM during the recovery process. Come to find out, it is, and it isn't. With guests on NFS datastores and RDM mapping files on a separate datastore, we were not able to add a guest into a protection group without errors: the RDMs came back with a status of "not replicated", even though we can see the SnapMirror relationships for both the RDMs AND the mapping datastore in the Array Manager. From what I've read, for SRM to properly handle a guest with RDMs, the RDM mapping files need to be located on the SAME datastore the guest resides on. Since SnapDrive will not let you store these mapping files on an NFS datastore, we could not accomplish a fully automated SRM recovery without custom scripting via the SnapDrive CLI and SQL commands.

So here's my question: what are the pros and cons of running a mixed environment of datastores? Through testing, I provisioned a VMFS (iSCSI) datastore with a test guest. I then attached 2 RDMs local to this datastore and stored the RDM mapping files there as well. Created a protection group with no issue. Recovery went through without a problem.

Our current setup is:

SQLSERVER1:

Guest = NFS Datastore 1

RDM1 = NetApp Volume 1

RDM2 = NetApp Volume 2

RDM3 = NetApp Volume 3

RDM4 = NetApp Volume 4

RDM Mapping Files = NetApp VMFS Volume 1

Proposed setup:

SQLSERVER1

Guest - VMFS Datastore 1

RDM1 - VMFS Datastore 1/Qt1/Lun1

RDM2 - VMFS Datastore 1/Qt2/Lun1

RDM3 - VMFS Datastore 1/Qt3/Lun1

RDM4 - VMFS Datastore 1/Qt4/Lun1

** qt = qtree

In the proposed example, SnapMirror would be done at the qtree level versus the volume level, as it's done now.
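For reference, the qtree-level relationships in the proposed layout could be sketched like this in 7-Mode (filer hostnames, volume names, and the schedule below are made up for illustration):

```
# Run on the DR filer: initialize one qtree SnapMirror per RDM qtree.
# Qtree SnapMirror paths take the form filer:/vol/<volume>/<qtree>.
snapmirror initialize -S prodfiler:/vol/vmfs_ds1/qt1 drfiler:/vol/vmfs_ds1_dr/qt1
snapmirror initialize -S prodfiler:/vol/vmfs_ds1/qt2 drfiler:/vol/vmfs_ds1_dr/qt2

# /etc/snapmirror.conf on the DR filer: replicate every 15 minutes
prodfiler:/vol/vmfs_ds1/qt1  drfiler:/vol/vmfs_ds1_dr/qt1  -  0,15,30,45 * * *
prodfiler:/vol/vmfs_ds1/qt2  drfiler:/vol/vmfs_ds1_dr/qt2  -  0,15,30,45 * * *
```

Contrast with volume SnapMirror, where the paths would simply be `prodfiler:vmfs_ds1 drfiler:vmfs_ds1_dr`.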

Any advice is appreciated.


Accepted Solutions
GreatWhiteTec
VMware Employee

You can create a VOL/LUN (VMFS) for your guest. Storage vMotion the guest to the new VMFS datastore. If you want, you can re-map the RDM pointers to live under this same VMFS datastore. That is the approach I took for this same scenario.

Are you doing volume-level SnapMirrors or qtree-based? Best practices will tell you to create one VOL per LUN.

Proposed setup:

SQLSERVER1

Guest - VMFS Datastore 1 (Storage vMotion)

RDM1 - VMFS Datastore 1 for Pointer File, RDM location = /VOL1/Lun1

RDM2 - VMFS Datastore 1 for Pointer File, RDM location = /VOL2/Lun1

RDM3 - VMFS Datastore 1 for Pointer File, RDM location = /VOL3/Lun1

RDM4 - VMFS Datastore 1 for Pointer File, RDM location = /VOL4/Lun1


9 Replies
MMRNOLA
Contributor

We use volume level snapmirror for our RDMs and Datastores.

So by looking at your response, the issue is not that the RDMs are on separate volumes, but where the pointer files live?

This would save me a ton of work if so.

Appreciate the response.

Any gotchas I should be aware of by going to VMFS versus NFS for my SQL servers? My VMware experience has been primarily NFS.

GreatWhiteTec
VMware Employee

Like I said, I ran into the same problem a couple of years ago. This fixed it for me, and SRM works as it should (I used SRM 4 and 5). By going to VMFS you will actually gain some performance and get a few extra features. It is a good idea to test the process first, to 1) make sure it will work in production and 2) get used to the process so the production migration is not so "intense".

NetApp's SRA is still a little buggy. Waiting on the new SRA version.

Good luck!

MMRNOLA
Contributor

Many thanks, dvdmorera! I will let you know what happens. Going to set this all up now and test!

MMRNOLA
Contributor

This is what I setup for testing:

Testing setup:

SRMTEST1

Guest - VMFS Datastore 1

RDM1 - VMFS Datastore 1 for Pointer File, RDM location = /VOL1/Lun1

RDM2 - VMFS Datastore 1 for Pointer File, RDM location = /VOL2/Lun1

I then ran a refresh in the Array Manager and everything picked up fine. Created the protection group with no warnings. Created the recovery plan and then executed a recovery. Immediately it came back with the following error:

1. Pre-synchronize StorageError - Failed to sync data on replica devices. A storage operation requested on unknown storage device '/vol/RDM1_DR/qt1/lun1'.

Now when I go back into the Array Manager I see the following errors on both of the RDMs:

Device '/vol/RDM1_DR' cannot be matched to a remote peer device

Device '/vol/RDM2_DR' cannot be matched to a remote peer device

Device '/vol/RDM1/qt1/lun1' cannot be matched to a remote peer device

Device '/vol/RDM2/qt1/lun1' cannot be matched to a remote peer device

RDM*_DR is the volume on the recovery site (DR) side.

The Array Manager can see the SnapMirror relationships for both volumes that are used for the RDMs, but it throws this error. Any ideas?

GreatWhiteTec
VMware Employee

I am guessing you are using SRA 2.0? Try refreshing under Array Pairs and also under the Devices tab. I have noticed that I had to tell the SRA to refresh in both places, as it would not see the relationships correctly otherwise.

MMRNOLA
Contributor

You, sir, are a saint!! I never refreshed the Array Pairs, only the Devices... now I'm golden!! If you were here I would buy you a drink!

Off to a recovery test!

MMRNOLA
Contributor

Bazinga!! All is well! I was able to recover, reprotect, recover, and reprotect this test VM! Many thanks for the help!!

I guess the only thing I am unsure of moving forward is how to move the RDM pointer files onto the VMFS datastore the guests will reside on. Would this be accomplished via SnapDrive? As in some downtime, along with disconnecting and reconnecting the LUNs to make the change?

Thanks again!

GreatWhiteTec
VMware Employee

Moving the pointer files is a little tricky, but not all that bad once you have done it, so if you are unsure, do a test first. First remove the Hard Disks that are RDMs from the VM. At this point you can browse the datastore where the pointer files reside and delete them. Then re-add the Hard Disk as an RDM > select the LUN for this RDM > choose the datastore (local to the VM) > complete the other options, and that's it.

You don't have to move them. RDMs can reside on a separate datastore, but I prefer to keep them together as it makes administration easier. There would be downtime, as you have to remove the drives from the VM. No SnapDrive needed: the data stays on the same LUNs; all you are changing is the location of the pointer files.
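The same remove-and-re-add can also be sketched from the ESXi shell with vmkfstools, which creates a new RDM pointer file on whichever datastore you choose without touching the data on the LUN (the device NAA ID and paths below are hypothetical examples):

```
# List raw devices to find the NAA ID of the RDM LUN
ls /vmfs/devices/disks/

# Create a new pointer file on the VM's own VMFS datastore.
# -z = physical (pass-through) RDM; use -r instead for virtual compatibility mode.
vmkfstools -z /vmfs/devices/disks/naa.60a98000486e2f34 \
    /vmfs/volumes/VMFS_DS1/SQLSERVER1/SQLSERVER1_rdm1.vmdk

# Then attach this .vmdk to the VM as an existing disk.
```

Same end result as the vSphere Client steps above: the LUN data never moves, only the mapping file does.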

Glad everything is working.
