Please see the Visio drawing attached; it should make the design easier to understand.
Will there be any issues if I have one vSphere DC and two ESX clusters with the same set of shared storage presented to both clusters?
i.e. LUN1 presented to both clusters at the same time, not LUN1 presented to ESX Cluster1 and LUN2 presented to ESX Cluster2.
If there are any issues with the above design, we can present storage individually to the two ESX clusters instead.
Basically there shouldn't be any issues with presenting the same LUN to hosts in different clusters in the same VC environment. Technically you could even present the LUNs to hosts in totally different environments (although not recommended).
The only thing I'm not 100% sure about is when "Datastore Heartbeating" (new HA feature in vSphere 5.0) comes into play.
The VC instance keeps track of the datastores, RDMs, VMs, Templates, etc. When presenting a LUN to different VCs, it is your responsibility to ensure that e.g. VMs are accessed/registered on only one host, RDM LUNs are only used once, and so on. Additional attention may also be needed if the LUNs are modified (e.g. resized).
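To illustrate the "registered on only one host" point, here is a rough sketch using the ESXi shell's `vim-cmd` (datastore name, VM name, and the vmid are placeholders, not from this thread):

```shell
# List all VMs currently registered on this host (vmid, name, vmx path),
# so you can check nothing from the shared LUN is registered twice
vim-cmd vmsvc/getallvms

# Register a VM from the shared datastore -- do this on ONE host only
vim-cmd solo/registervm /vmfs/volumes/SharedLUN1/myvm/myvm.vmx

# If it was accidentally registered on a second host, unregister it
# there by vmid (taken from the getallvms output)
vim-cmd vmsvc/unregister 42
```

These commands need to be run on the ESXi host itself (SSH/local shell), not in vCenter.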
Good question, Troy.
This is to do with our Backup and DR design.
Basically, we will be backing up VMs from each cluster (50% loaded) at each site, i.e. vDC1 and vDC2, cross-site.
The backed-up VMs will sit, powered off, on a (Nexsan) storage at each site (DC), so if we lose one site we can very easily power the VMs on at the other. We understand it won't be as fast as the SAN, but it will keep the business running.
But if it were one single cluster, DRS would cause issues: it would start vMotioning machines from one DC to another, which we don't want because of the Backup/DR design above. We wouldn't be able to control which VMs sit where (well, we can, but that would defeat the purpose of DRS, wouldn't it?).
The only reason we would like the storage presented to both clusters is flexibility, and if that goes against best practice we certainly don't want to be doing it.
I hope the 'why' is clearer now.
Last week I was thinking about nearly the same thing!
We have two DCs, each with a vSphere environment and an EMC SAN. Both vCenters are connected via Linked Mode.
Sometimes we have to move some VMs from one DC to the other to balance the load between the DCs. Until now I did this on the command line via ssh/scp. So I thought: if both DCs could see all LUNs (from both DCs), then I could move the VM via Storage vMotion and then bring it to a host on the other site. Of course a reboot would be necessary, because vMotion is not an option across separate clusters/vDCs.
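For context, the manual ssh/scp move I mentioned looks roughly like this (host names, datastore names, and VM names are placeholders; it assumes the VM is powered off first):

```shell
# 1. Copy the VM's folder from the source host's datastore to the
#    target host's datastore (VM must be powered off)
scp -r root@esx-dc1:/vmfs/volumes/DC1-LUN/myvm \
       root@esx-dc2:/vmfs/volumes/DC2-LUN/

# 2. On the target host (via SSH), register the copied VM
vim-cmd solo/registervm /vmfs/volumes/DC2-LUN/myvm/myvm.vmx

# 3. On the source host, look up the old copy's vmid and unregister it
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/unregister <vmid>
```

If both sites saw the same LUNs, step 1 would be replaced by a Storage vMotion, which was the whole point of the experiment.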
Thinking about it, I didn't find a reason why it shouldn't work from a technical perspective. But I ran into an issue: when I added the IP addresses of the SAN in the other DC to the software iSCSI adapter and did a rescan, the host never finished the rescan task, and the latency of the LUNs in the other DC got very high, as if there were heavy load on them. After watching this for around 30 minutes, I kicked the host off on the storage side to get rid of the high latency. I repeated this test twice on different hosts with the same result.
I didn't find the time to create a separate LUN for this test; I just did it with a whole storage group of about 10 LUNs (but I don't see why that should matter).
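The step that hung can be reproduced from the ESXi shell roughly like this (the adapter name and portal IP are placeholders; `esxcli` syntax as in vSphere 5):

```shell
# Add a send-target (dynamic discovery) entry pointing at the other
# DC's SAN -- vmhba33 stands in for the software iSCSI adapter
esxcli iscsi adapter discovery sendtarget add \
    --adapter=vmhba33 --address=192.168.2.10:3260

# Rescan the adapter -- this is the step that never completed
esxcli storage core adapter rescan --adapter=vmhba33
```

If the rescan wedges like this, the send-target entry can be removed again with `esxcli iscsi adapter discovery sendtarget remove` before rebooting the host.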
I am just implementing Veeam Backup & Replication Enterprise v6 and tested the migration function: the first tests went very well. Veeam does all the steps I did manually (as described above): copy the VM, de- and re-register the VM between the vCenters, power on the VM, etc.
Of course this won't work in the case that rucky described (one vSphere cluster/environment down), because Veeam communicates via the VMware layer and not on the storage layer.