mdhacke
Contributor

Preventing write access to a datastore

All,

I'm hoping someone can advise me on this issue.

The scenario:

We have presented a new LUN to our ESX 4.0 cluster, configured it as a VMFS datastore, and used Storage vMotion to migrate a number of virtual machines onto this datastore. Each of the VMs is powered off. The LUN is presented from an IBM DS8100 storage array. At a specified point in time, the LUN is migrated to a remote datacenter, where we intend to present it to an alternative cluster and register the VMs on that LUN. This gives us a method of migrating VMs between remote sites.

The problem is that when we test this, the storage team report out-of-sync tracks in the replicated data, which they believe is caused by the LUN being written to during replication (which seems unlikely, as all of the VMs on that LUN were shut down). I believe the issue is at the storage end, but I have been asked to determine whether there is a method of completely isolating the LUN from the source ESX cluster to ensure that no data is written during the migration.

The methods I have considered are as follows:

1) Delete the datastore - not a viable option as it would destroy all data on the LUN.

2) Disable all paths to the datastore using the vSphere client. This looked like a possibility - I can disable 3 of the 4 paths from an ESX host, but I get an error when I try to disable the fourth.

3) Use esxcfg-mpath to disable the paths - I'm not convinced this will work any differently from using the GUI.

4) Get the storage guys to use LUN masking to prevent any ESX hosts from 'seeing' the LUN. My concern with this is how ESX will react if it can't see the datastore.
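For option 3, this is roughly what the esxcfg-mpath approach would look like on the ESX 4.0 service console (the path names `vmhba2:C0:T0:L3` etc. are placeholders - take the real runtime names from the list output). I haven't verified that ESX will let you offline the final path this way; you may well hit the same error as in the GUI, since the host resists taking down the only remaining path to a mounted datastore:

```shell
# List all paths so the ones leading to the LUN in question can be identified
# (runtime names below are examples; yours will differ).
esxcfg-mpath -l

# Take each path to the LUN offline: -s sets the path state, -P names the path.
esxcfg-mpath -s off -P vmhba2:C0:T0:L3
esxcfg-mpath -s off -P vmhba3:C0:T0:L3

# After the replication window, bring the paths back:
esxcfg-mpath -s on -P vmhba2:C0:T0:L3
esxcfg-mpath -s on -P vmhba3:C0:T0:L3
```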

My question is, does anyone know of any method I can use to ensure that this LUN cannot be written to during the migration to the remote site?

Many thanks in advance,

Martin


6 Replies
AntonVZhbankov
Immortal

It's called zoning and masking. The SAN engineers should know the details - ask them.


---

MCSA, MCTS, VCP, VMware vExpert '2009

http://blog.vadmin.ru

mdhacke
Contributor

Thanks Anton, I appreciate the reply.

I guess what I'm really asking is would ESX throw any errors if the LUN was masked?

Regards,

Martin

AntonVZhbankov
Immortal

Yes, ESX will complain, but once you connect the LUN back, everything will be OK.

Remove the VMs from inventory (but do not delete them from disk) before disconnecting the LUN, and ESX will not say anything.
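If you want to script the unregister step on the ESX 4.0 service console, something like the following should work (the VM ID and paths are examples - take the real IDs from the getallvms output):

```shell
# List registered VMs with their IDs, names and datastore paths
vim-cmd vmsvc/getallvms

# Remove a VM from inventory WITHOUT deleting it from disk,
# using the ID from the listing above (48 is an example):
vim-cmd vmsvc/unregister 48

# Once the LUN is presented again, re-register from the .vmx file:
vim-cmd solo/registervm /vmfs/volumes/<datastore>/<vm>/<vm>.vmx
```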


---

MCSA, MCTS, VCP, VMware vExpert '2009

http://blog.vadmin.ru

mdhacke
Contributor

Many thanks for your prompt answers to my question, Anton. It looks like removing the VMs from the inventory and then using LUN masking is the best approach.

Martin

jpdicicco
Hot Shot

I suggest you look at the instructions for unpresenting a LUN from ESX, and use the tool described there for masking the LUN on the ESX side. If the VMs are shut down, you shouldn't get complaints from them, and this will prevent the ESX hosts from panicking about a missing LUN. I'm not 100% sure it will work, but it seems that it should.

I would follow all of the steps, skipping only step 9 (removing the LUN from the hosts via LUN masking on the array), and make sure to do a rescan when you're done. This can all be done in a shell script on ESX.
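For reference, the ESX-side masking in that procedure uses MASK_PATH claim rules - roughly like this on ESX 4.0 (the rule number, adapter/target/LUN values and the naa device ID are all examples; match them to your own esxcfg-mpath and esxcfg-scsidevs output):

```shell
# Add a claim rule handing every path to this LUN to the MASK_PATH plugin
# (501 is an arbitrary unused rule number; adjust location values to match).
esxcli corestorage claimrule add --rule 501 --type location \
    --adapter vmhba2 --channel 0 --target 0 --lun 3 --plugin MASK_PATH

# Load the new rule into the kernel and apply it to the device
esxcli corestorage claimrule load
esxcli corestorage claiming reclaim -d naa.60050768xxxxxxxxxxxxxxxx

# Verify the rule took effect, then rescan
esxcli corestorage claimrule list
esxcfg-rescan vmhba2
```

To unmask later, delete the rule with `esxcli corestorage claimrule delete --rule 501`, reload, and rescan.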

Please post results if you test this.

Happy virtualizing!

JP

Please consider awarding points for correct and/or helpful answers

mdhacke
Contributor

Excellent, many thanks.

If we go down this route I'll certainly post the results.

Martin
