NickAC's Posts

Hi Andre,
So when you create a new LUN, the LUN ID you get the option to specify is the internal LUN ID and can be higher than 255. Then, when you add the LUN to a storage group to present it to the hosts, you get the option to specify a Host LUN ID, which can be different but must not exceed 255?
Thanks for your help,
Nick
Hi,
I have a question on LUN numbering. We have Fibre-attached hosts connected to an EMC VNX 5500. I am looking at our existing setup and just getting a feel for it, as I need to create some new LUNs shortly and I'm new to Fibre.
On the EMC VNX we have a LUN that has been given the LUN ID of 200. However, when I match the UUID to the LUN seen by the Fibre card on an ESX host, the same LUN shows up as LUN ID 43. I'm guessing this is OK, as it's been in place for a long time and working.
My questions are: why do the LUN IDs on the hosts not match the IDs on the storage array? And, since I know there is a 255 LUN ID limit in VMware, if I were to provision a LUN on the VNX side and give it a number higher than 255, would that be OK if VMware is going to give it a lower LUN ID anyway?
Many Thanks,
Nick
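For anyone wanting to compare what each host actually sees against the array-side IDs, here is a rough sketch using pyVmomi; the vCenter address and credentials are placeholders, and it simply walks the SCSI topology, which exposes the host-side LUN number next to the device's canonical (naa.) name:

```python
# Sketch: list host-side LUN numbers and device identifiers via pyVmomi.
# vCenter address and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    dev_info = host.config.storageDevice
    # Map internal scsiLun keys to canonical names (naa.xxxx) for readability
    lun_names = {l.key: l.canonicalName for l in dev_info.scsiLun}
    for adapter in dev_info.scsiTopology.adapter:
        for target in adapter.target:
            for lun in target.lun:
                # lun.lun is the host-side LUN number (what ESXi shows, e.g. 43)
                print(host.name, lun.lun, lun_names.get(lun.scsiLun))

Disconnect(si)
```

Matching the naa. identifier printed here against the LUN's UUID on the VNX is one way to confirm which array LUN a given host-side LUN number corresponds to.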
Thanks all. Bayu, just so I am clear: basically remove the recovery plan / delete the protection group within SRM, then at my Prod site just unmount the LUNs as you normally would (unmount, then detach). At the recovery site, because the LUNs are read-only they are not actually connected to the hosts, so I don't need to do anything except de-present the LUNs from the hosts on my EMC VNX array?
Thanks,
Nick
Thanks, I will ensure I do that. Do you know how you go about removing the read-only mirrored LUNs from vCenter?
Hi,
We currently have some datastores that are used with SRM 5.0, ESXi 5.0 and VNX array-based replication. We are redesigning our SRM datastores and have migrated all VMs off of them onto others. I now need to remove them from vCenter and de-present them from the hosts on the VNX, which is not something I have done before.
If it were a normal datastore I would just unmount it and then detach it from each host. I'm guessing this is going to be the same with SRM datastores; however, are there any other steps I need to do? My main unknown is the mirrored LUNs that I will not be able to see in vCenter to unmount / detach; how do I go about removing these? I have been caught out in the past by not unmounting a datastore properly and I am very keen not to be in that position again.
Thanks,
Nick
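For reference, the unmount-then-detach sequence described above can also be scripted per host. This is only a sketch assuming pyVmomi, an existing connection `si` to vCenter, and placeholder datastore / host names; it is not SRM-aware, so any SRM clean-up (protection group / recovery plan removal) still has to happen separately:

```python
# Sketch: unmount a VMFS datastore and detach its backing device on one host.
# Assumes an existing pyVmomi connection `si`; names below are placeholders.
from pyVmomi import vim

DATASTORE_NAME = "OLD_SRM_DATASTORE01"   # placeholder
HOST_NAME = "esxi01.example.local"       # placeholder

content = si.RetrieveContent()
host = content.searchIndex.FindByDnsName(dnsName=HOST_NAME, vmSearch=False)
storage = host.configManager.storageSystem

# Find the VMFS volume and the canonical name of the device backing it
vmfs_uuid, disk_name = None, None
for mount in storage.fileSystemVolumeInfo.mountInfo:
    vol = mount.volume
    if isinstance(vol, vim.host.VmfsVolume) and vol.name == DATASTORE_NAME:
        vmfs_uuid = vol.uuid
        disk_name = vol.extent[0].diskName   # e.g. naa.6006...
        break

# Unmount the datastore on this host, then detach the underlying LUN
storage.UnmountVmfsVolume(vmfsUuid=vmfs_uuid)
for lun in storage.storageDeviceInfo.scsiLun:
    if lun.canonicalName == disk_name:
        storage.DetachScsiLun(lunUuid=lun.uuid)
```

The same unmount/detach would need to be repeated on every host that sees the datastore before the LUN is removed from the storage group on the array.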
Hi,
I am now starting to look at our VM estate rather than just letting it run and hoping for the best!! I was looking at the metrics from ESXTOP and see some of the suggested thresholds, like 10 for %RDY, and I had a quick question on the scale that ESXTOP displays the numbers in.
I am currently looking at the VMs on one of our hosts and see that the %RDY stats range from 0.13 up to 0.62. I wasn't sure if 0.13 is below 1% or if that is actually 13%. What's the scale? From my reading, based on a single vCPU, one is good and the other is bad. I was expecting the below, but now I'm not so sure.
1.00 = 1%
13.00 = 13%
Thanks
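For comparison, the CPU ready counter in vCenter is reported as a summation in milliseconds over the sample interval rather than as a percentage, so it has to be converted before lining it up against the esxtop threshold. A small worked conversion, assuming the usual 20-second real-time sample interval:

```python
# Convert a vCenter "cpu.ready" summation (milliseconds) into an
# esxtop-style percentage: ready time divided by the sample interval, per vCPU.
def ready_percent(ready_ms, interval_s=20, vcpus=1):
    return (ready_ms / (interval_s * 1000.0 * vcpus)) * 100

# 100 ms of ready time in a 20 s sample on a 1-vCPU VM:
print(ready_percent(100))     # 0.5  -> well under a 10% warning level
# 2600 ms of ready time in the same sample:
print(ready_percent(2600))    # 13.0 -> this is what a genuinely busy VM looks like
```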
Hi,
Many thanks for your reply. I did think that might be the case; thankfully all of our VMs have VMware Tools, and we have a few P1 VMs while the rest are all the default P3.
I would be very interested in what you see with the VNX. From the testing we have done, it appears that it does only re-sync the deltas. I played around with failing over 5 VMs that had a total of 300GB of used data and watched the storage stage of the recovery steps: the initial failover, re-protect, failback and final re-protect all took about the same, around 2 minutes each time. Given that we have a 1Gb link connecting the two sites, there is no way it could do a full sync of 300GB of data in that small time window.
The only difference between my testing and the actual failover test day is that the business is insisting we cut the links between the two data centers to simulate an actual site failure on the day. Obviously during my test failovers the arrays never lose connectivity to each other; I wonder what happens if the link is lost.
Regards,
Nick
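The back-of-the-envelope arithmetic behind that conclusion, assuming the 1Gb inter-site link is the only path between the arrays and ignoring protocol overhead (which would only make it slower):

```python
# Rough lower bound on a full re-sync of 300 GB over a 1 Gb/s link.
data_gb = 300
link_gbps = 1.0

seconds = (data_gb * 8) / link_gbps   # 2400 s
print(seconds / 60)                   # ~40 minutes at absolute best

# The observed storage step finished in ~2 minutes, so the array can only
# have shipped the changed blocks (deltas), not the full 300 GB.
```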
Hi All,
We have SRM 5.0 protecting our VMs with EMC VNX array-based replication in asynchronous mode. To prove our DR capabilities we need to do a full failover (planned option) and run machines live from our DR site for 2 hours. In preparation for this I have a test LUN in its own protection group / recovery plan to play around with, but due to only having 5 spare licenses I can't create many VMs.
One of the questions I have is around the length of time the failover takes from the VMware / SRM perspective as the number of protected VMs increases (we have 130), excluding the storage sync requirements for now, as I know that is going to be an unknown dependent on the rate of change. I created 2 test VMs on my test SRM LUN, executed a recovery plan to fail my test group over to the 2nd site, re-protected, failed back to primary and then finally re-protected so I was back in the original state. I then created an additional 3 VMs so the total was 5 and repeated the same procedure.
Looking at the recovery steps, most were similar in elapsed time; steps like powering VMs off and on appear to take roughly the same time regardless of whether you have 5 or 50 VMs. The only exception was "Prepare Protected Site VMs for Migration", which increased when the additional VMs were added. What's actually happening under the hood at this stage? Is there a rough guide for calculating times, e.g. allow 1 minute per VM, or anything like that?
Also, on a separate note, has anyone done a failover / failback using a VNX array (block Fibre) in async mode using MirrorView? We have spoken to EMC about the failback side of it (does it do deltas or a full re-sync?) and have had 2 conflicting answers back from them. This is an issue for us as we are doing the failover and failback in one weekend; if it's a full re-sync then we just won't have the time window to re-sync all the data.
Thanks,
Nick
Hi All,
I logged a call with VMware. If anyone is interested, there were 2 config items that they advised we change:
storageprovider.FixRecoveredDatastores - tick to enable
StorageProvider.HostRescanRepeatCnt - change from 1 to 3
Going to try another failover this weekend and see what happens.
Hi All,
We have a primary and a DR site; both sites have 5 ESXi 5.0 hosts and a separate vCenter. We are running SRM 5.0. Storage is EMC VNX 5500, and we are replicating asynchronously with MirrorView.
We did a full SRM test using a test LUN / protection group at the weekend, as we are planning a full DR test shortly. We chose the full test but selected the planned option (we did not want to do the isolated test where it takes a snapshot and ring-fences it). Everything went fine: we failed the VMs over to our DR site, tested, then after 20 minutes re-protected and failed them back to our primary site. The 2 questions I have are:
A. It took 8 minutes to fail them over but 25 minutes to fail them back, and it seemed to spend a fair amount of time on the stage "Preparing the protected VMs for migration". What's actually happening under the hood at this stage?
B. After everything had completed I looked at vCenter and saw that the test LUN, which was originally called TESTSRMLUN01 (original, I know), is now called "snap-433ad3a8-TESTSRMLUN01". Also, on our DR side there is a LUN of the same name but with status inactive. Why has it renamed the datastore, and why would the datastore still be showing in our DR-side vCenter? I would have thought that as part of the re-protect and failback we did, the LUN would have been de-presented from the DR side.
Thanks,
Nick
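If a recovered datastore does come back with the snap- prefix like this, it can be renamed once you are happy with the state of things. A minimal sketch, assuming pyVmomi, an existing connection `si`, and using the datastore names from the post above:

```python
# Sketch: rename a recovered datastore that came back with the "snap-" prefix.
# Assumes an existing pyVmomi connection `si`; names are from the post above.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.Datastore], True)
for ds in view.view:
    if ds.name == "snap-433ad3a8-TESTSRMLUN01":
        ds.Rename_Task(newName="TESTSRMLUN01")
```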