VMware Cloud Community
76dragon
Enthusiast

How to get the DR ESX host to see a replicated Celerra LUN

Hi guys, I'm setting up SRM with two NS20s. So far I've configured the LUNs on the production storage array and the LUNs on the DR storage array, and presented the appropriate LUNs to the production and DR ESX servers. I've set up replication and added the DR LUN to the DR ESX server, making sure not to add it as storage, since that would write a signature. I've then added the production LUN to the production ESX server and added the storage, which has obviously written a signature to the disk. With replication set up, I was expecting to just issue a rescan on the DR ESX host and have it see the datastore.

Am I missing something here? Any help would be appreciated.

Thanks

5 Replies
bladeraptor
VMware Employee

Hi

I am writing as an EMC employee.

Can you confirm that you did the following:

Created a Celerra filesystem on both the protection and recovery sides, with enough space for your iSCSI LUN and probably 2.5x that for snaps and for presenting a Temporary Writeable Snap (TWS) up to the recovery side?

So for a 5GB iSCSI LUN built in a Celerra filesystem, I would typically want to make the filesystem at least 10GB, if not 15GB, depending on the number of snaps I might want to present to hosts.
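
Purely to illustrate that arithmetic - a rough Python sketch, not an EMC tool; the 2.5x multiplier is just the rule of thumb above and should be adjusted to your snap schedule:

# Rough Celerra filesystem sizing for an iSCSI LUN plus snap/TWS headroom.
# Assumption: roughly 2.5x the LUN size, per the rule of thumb above.
def fs_size_gb(lun_gb, snap_multiplier=2.5):
    """Suggested filesystem size in GB for a given iSCSI LUN size."""
    return lun_gb * snap_multiplier

for lun_gb in (5, 10, 50):
    print("%d GB LUN -> ~%.1f GB filesystem" % (lun_gb, fs_size_gb(lun_gb)))

For the 5GB example this gives ~12.5GB, consistent with the 10-15GB suggestion.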

Create an iSCSI LUN in the production filesystem you just created. Create a 'read-only' iSCSI LUN of the same size in the recovery filesystem. Having the same iSCSI LUN IDs here helps with administration.

Having created the replication relationship between the Celerras and set up the connections between the two units, create the replication session and make sure it is functioning.

Ensure you have created an iSCSI target on both Celerras and enabled the iSCSI functionality.

Mask the production side ESX servers to the iSCSI LUN.

Once that is done, rescan the ESX hosts to let them see the iSCSI LUN (on the basis that you have already configured the ESX iSCSI initiator on the same subnet as, or routable through to, the iSCSI target).
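
If you want to script that rescan from the ESX service console rather than the VI Client, here is a minimal sketch. The adapter name vmhba32 is an assumption - the software iSCSI adapter name varies by host, so check the Storage Adapters view first:

# Rescan the (assumed) software iSCSI adapter so the host picks up newly
# masked iSCSI LUNs. esxcfg-rescan is the ESX 3.x service console command;
# vmhba32 is a placeholder for your software iSCSI adapter.
import os

ISCSI_ADAPTER = "vmhba32"  # assumption - adjust to your host
status = os.system("esxcfg-rescan " + ISCSI_ADAPTER)
print("rescan exit status: %d" % status)

The same thing can of course be done from the VI Client via Storage Adapters > Rescan.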

Create a VMFS filesystem on the iSCSI LUN or map the LUN as an RDM.

Mask the iSCSI LUN on the recovery side to the ESX hosts. On the recovery side, ideally rescan the hosts and pull the iSCSI LUN details onto the ESX hosts. Clearly, as this iSCSI LUN should be read-only, you will not be able to create a filesystem; do not worry about the signatures, as SRM takes care of this for you.

Now, with the filesystems created, the iSCSI LUN pair defined and replicating, the LUNs presented to both sets of ESX hosts, and a VMFS filesystem or RDM built on the production side with a virtual machine placed in the VMFS partition (otherwise SRM will not recognise the LUN) or the RDM mapped to a virtual machine, you should be able to see the configuration through the SRM array configuration wizard on the production side.

By providing enough space in the filesystems for the iSCSI snaps to be created, you should then be able to run test jobs. But be aware that if there is not enough space in the filesystems on which the iSCSI LUNs are built for the Celerra to create the snaps and successfully promote them to the recovery host as full-fat temporary writeable snaps (TWS), then the SRM test job will fail.

Celerra TWS can be provisioned thinly, but this is a system-wide setting and, like all thin provisioning, should be used with caution.

Let me know if that helps

Regards

Alex Tanner

76dragon
Enthusiast

Hi Alex,

Thanks for the reply. Looking at my post, I might not have been as clear as I should have been, so I'll list the steps I've taken below.

As you noted, I created a FS on the production NS, created a LUN within the FS, created an iSCSI target, masked and presented it to the production ESX host, added the storage (formatting it with VMFS), and placed a virtual machine on the datastore.

I've configured replication via the wizard in Celerra Manager and believe it is working as designed: the status is "OK", and if I try to add the replicated LUN to the DR ESX server, it knows there is already a VMFS on the disk, as opposed to a blank LUN, which leads me to believe the replication is set up OK.

On the DR NS I've done exactly the same in regards to FS, LUN and iSCSI target, and presented this to the DR ESX host. I've done a rescan and it has picked up the "Read Only" LUN that's being replicated to, but under the summary tab within VC I can only see the local VMFS file system. I expected to see the replicated LUN listed with the local datastore - or is this only seen when SRM changes the LUN properties during a failover?

The production ESX server is only connected to the Production NS and the DR ESX server is only connected to the DR NS.

I've configured SRM and tried running a test, and it fails on the storage side with the error in the attached JPG; hopefully you might be able to point me in the right direction. I've added both Celerras to the array section in SRM and both were added without issue. I did, however, note that for the production system it lists the DR system in the peer information, but when I added the DR system it has "unknown" in the peer field.

Once again, thanks for the reply; I look forward to any suggestions you might have.

bladeraptor
VMware Employee

Hi

"On the DR NS ive done exactly the same in regards to FS, Lun, iSCSI target and presented this to the DR ESX host, ive done a rescan and its picked up the "Read Only" lun thats being replicated to but under the summary tab within VC i can only see the local VMFS file system and i expected to see the replicated LUN listed with the local datastore or is this only seen when SRM changes the Lun properties during a fail over ?"

You won't see the Recovery Site volumes listed in the datastores unless you do one of two things: a failover, in which case the production side will go read-only and be removed from the datastore list on the Production side, while the now read-write VMFS datastore is exposed on the recovery side.

Or you use the test function, in which case a temporary writeable snap will be promoted from the Celerra, and this will appear as a SNAPed datastore in the list of VMFS datastores on the Recovery Side.

However, it would appear that you are not seeing this outcome because the DR array is not configured as it should be.

Can the Production side VC communicate happily with the Recovery site Celerra, i.e. can you launch a PuTTY session from the Production side VC and connect to the Recovery Side Celerra? Can the Recovery Side VC do the same thing to the Production Side Celerra?
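
If PuTTY is not to hand, a quick way to check basic TCP reachability from each VC server to the opposite Celerra Control Station - a minimal Python sketch, assuming SSH on port 22 and using placeholder hostnames:

# Check that this VC server can open a TCP connection to each Celerra
# Control Station on the SSH port. Hostnames below are placeholders.
import socket

def can_reach(host, port=22, timeout=5):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return True
    except socket.error:
        return False
    finally:
        s.close()

for cs in ("prod-celerra-cs", "dr-celerra-cs"):  # placeholder hostnames
    print("%s: %s" % (cs, "reachable" if can_reach(cs) else "NOT reachable"))

Note this only proves the port is open; it does not validate credentials or the SRM array configuration itself.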

I take it that you get the peer unknown after SRM does the long array discovery that holds at about 23% forever and then completes. The only other thing I can think of (and I am not sure about this as a fix, as it is a set of actions I have done anyway on my Recovery side Celerra) is to create a small filesystem, and in it a small iSCSI LUN, on the Recovery Side Celerra, and bring this into your Recovery side ESX cluster: create a VMFS filesystem on it and place a VM in that VMFS filesystem. This is to check that there are no issues with the recovery side ESX hosts working with the Recovery Side Celerra.

My guess would be an IP connectivity issue, though it's a bit strange, given that the connection between the Production Side VC and the Recovery Side VC has already been established successfully.

I will forward this to an SRM colleague at VMware and see if he has any thoughts.

Regards

Alex Tanner

76dragon
Enthusiast

Thanks Alex, I managed to get this all up and running over the weekend. As you noted, the LUN only gets presented during a failover; during the "Test" I noted the "snap" prefix under the summary/datastore area.

Thanks very much for the help, much appreciated !

bladeraptor
VMware Employee

Hi

Glad to hear it all came good.

Let me know if you encounter any other issues.

Many thanks

Alex Tanner