VMware Cloud Community
Wmerlin
Contributor

Unpresent LUNs for half of the ESXi hosts

Recently we had some storage issues that made us realize our datacenter has grown too much without planning, so we decided to restructure it by dividing the clusters into smaller ones, each with a small number of LUNs. So far we have split a single cluster into 5 smaller ones. What we need now is to isolate the LUNs for each cluster, because right now all clusters see all LUNs. I haven't done any testing since we can't risk any downtime in our environment.

I've been searching across the forums and couldn't find an answer for this, so I finally gave up and decided to start a discussion on the subject. I've found some KBs and articles explaining how to remove LUNs from 4.x and 5.x hosts while avoiding APD, but none of them explain how to remove LUNs from some specific ESXi hosts while others still have access to them.

Has anyone tried this before?

I'm running vCenter 4.1 with ESXi 4.1 hosts.
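
For anyone scripting the planning step, here's a minimal inventory sketch that lists which datastores each host currently sees. It assumes pyVmomi (which nobody in this thread mentions), and the vCenter name and credentials are placeholders, so treat it as a starting point rather than a tested tool.

```python
# Sketch only: list which datastores each ESXi host currently sees, so the
# per-cluster LUN split can be planned. pyVmomi, the vCenter name and the
# credentials below are assumptions, not anything from this thread.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; skips cert checks
si = SmartConnect(host="vc.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    ds_names = sorted(ds.name for ds in host.datastore)
    print(host.name, "->", ", ".join(ds_names))

Disconnect(si)
```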

7 Replies
Troy_Clavell
Immortal

In vSphere 5 you can do this by choosing "Unmount" and then removing the zoning and presentation. With that said, in vSphere 4 the best thing to do is work with your Enterprise Storage group to remove the zoning and presentation for the specific hosts that you do not want to see the storage. I would also advise doing this while the host(s) is/are in maintenance mode.
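
To make the vSphere 4.x suggestion above concrete, here is a rough pyVmomi sketch of that flow: park the host in maintenance mode, let the storage team pull the presentation/zoning, then rescan. pyVmomi and the host name are assumptions, not part of the answer, and this is a sketch rather than a tested runbook.

```python
# Sketch of the "maintenance mode first, then rescan" flow. Assumes an
# existing pyVmomi connection `si` (see the sketch in the first post);
# the host name is a placeholder.
from pyVim.task import WaitForTask
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi1.example.com")

# 1) Evacuate the host before any presentation change.
WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))

# 2) ...storage team removes the LUN presentation / zoning here...

# 3) Rescan so the host drops the now-missing paths cleanly.
storage = host.configManager.storageSystem
storage.RescanAllHba()
storage.RescanVmfs()

# 4) Bring the host back into the cluster.
WaitForTask(host.ExitMaintenanceMode_Task(timeout=0))
```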

vMario156
Expert

Normally you can do this on your storage system. There you can create storage groups (the name differs from vendor to vendor) where you present a group of LUNs to a group of hosts.

In your case this means that you just need to split up your storage group into two.

Regards,

Mario

Blog: http://vKnowledge.net
Wmerlin
Contributor

I understand the group concept and in fact we have it implemented here. What I'm afraid of is whether it is safe to remove the "tagging" on the storage side while the ESXi hosts still have access to it. As an example, let's suppose I have hosts ESXi1 and ESXi2. They have two LUNs visible to them, LUN1 and LUN2, as shown below.

ESXi1    ESXi2
LUN1     LUN1
LUN2     LUN2

These LUNs have 2 VMs each: VM1, VM2, VM3 and VM4. I've placed VM1 and VM2 on LUN1 and vMotioned them to ESXi1, and I've placed VM3 and VM4 on LUN2 and vMotioned them to ESXi2.

I can now ask my storage team to remove LUN1 from ESXi2 and LUN2 from ESXi1. What I need to know is: is that safe?

Wouldn't both ESXi hosts keep looking for these LUNs that are now "disconnected", and wouldn't that cause an APD?

If it wouldn't, that's all I need to do, but I'm just afraid it could impact the production environment, and that's why I need to be sure.
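
A pre-check along these lines would at least confirm that no VM registered on LUN1 is still running on the host that is about to lose it; pyVmomi is assumed and the LUN/host names below are placeholders, so this is only a sketch of the idea.

```python
# Pre-check sketch (pyVmomi assumed, names are placeholders): before the
# storage team unpresents "LUN1" from ESXi2, confirm that no VM living on
# LUN1 is currently registered/running on that host.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
ds = next(d for d in view.view if d.name == "LUN1")

offenders = [vm.name for vm in ds.vm
             if vm.runtime.host and vm.runtime.host.name == "esxi2.example.com"]
if offenders:
    print("Do not unpresent yet; these VMs on LUN1 run on ESXi2:", offenders)
else:
    print("No VMs on LUN1 run on ESXi2; presentation can be removed there.")
```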

vMario156
Expert

You need to use svMotion (Storage vMotion) between the LUNs. vMotion between the hosts doesn't change the location of your data (which is what you want to do, if I understand you correctly).

So move all your VMs to other LUNs until your datastore is empty. Unmount it and adjust the zoning / storage groups to present this LUN to your new hosts.

Regards,

Mario

Blog: http://vKnowledge.net
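
For reference, the svMotion step described above can also be scripted. The sketch below assumes pyVmomi and invented VM/datastore names; it simply relocates one VM's files to another datastore while the VM keeps running.

```python
# Storage vMotion sketch (pyVmomi assumed; the VM and datastore names are
# placeholders): move one VM's disks and config files to another datastore.
from pyVim.task import WaitForTask
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine, vim.Datastore], True)
objs = list(view.view)

vm = next(o for o in objs
          if isinstance(o, vim.VirtualMachine) and o.name == "VM1")
target = next(o for o in objs
              if isinstance(o, vim.Datastore) and o.name == "LUN2")

spec = vim.vm.RelocateSpec(datastore=target)  # disks + config move together
WaitForTask(vm.RelocateVM_Task(spec))
print(vm.name, "now lives on", target.name)
```
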
Wmerlin
Contributor

Hi Mario, I've done the Storage vMotion and then I've done the vMotion. I don't want to empty the LUNs; I want to divide the farm. While LUN1 is seen by both ESXi1 and ESXi2, it actually only contains machines from ESXi1. No new hosts will be added and no new LUNs will be presented. What I need is to divide 30 LUNs across 10 hosts without downtime and without having to empty any LUN, since right now all hosts see all 30 LUNs.
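
To sanity-check a split like that across all 30 LUNs, a report of which hosts actually run VMs from each datastore can help. The sketch below assumes pyVmomi and is only an illustration of that check, not anything from the thread.

```python
# Report sketch (pyVmomi assumed): for every datastore, show which hosts its
# registered VMs actually run on, to confirm each LUN is only "in use" by the
# hosts that are meant to keep seeing it after the split.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)

for ds in view.view:
    hosts_using = sorted({vm.runtime.host.name for vm in ds.vm if vm.runtime.host})
    print(ds.name, "->", ", ".join(hosts_using) or "no registered VMs")
```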

chriswahl
Virtuoso

Removing the zoning for the LUNs you are trying to get rid of is the best answer.

For example: In a standard zoning configuration using single target / single initiator paths, there should be a zone rule created for each path to the storage. Have your storage team remove those zones and then reactivate the zoneset. Make sure you have the WWN information to correctly identify the zones you wish to remove.

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
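
To go along with the point above about having the WWN information ready, the hosts' FC HBA WWNs can be pulled with a short script. pyVmomi is assumed here and, as before, this is a sketch rather than anything from the thread.

```python
# WWN collection sketch (pyVmomi assumed): print every host's FC HBA port and
# node WWNs so the storage team can match the right zones to the right hosts.
from pyVmomi import vim

def fmt_wwn(value):
    # WWNs come back as 64-bit integers; render the usual colon-separated form.
    hex_str = f"{value:016x}"
    return ":".join(hex_str[i:i + 2] for i in range(0, 16, 2))

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    for hba in host.config.storageDevice.hostBusAdapter:
        if isinstance(hba, vim.host.FibreChannelHba):
            print(host.name, hba.device,
                  "port", fmt_wwn(hba.portWorldWideName),
                  "node", fmt_wwn(hba.nodeWorldWideName))
```
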
Wmerlin
Contributor

Hi Chris, I will give it a shot. Unfortunately it is hard to set up a test lab since there are no extra resources to do so, but I will try to isolate some hosts and see if I can do a small-scale test before doing it to the whole environment.
