VMware Cloud Community
khincung
Contributor

Configure multiple initiator zoning to single initiator zoning

I have implemented an FC VMware infrastructure and it is running successfully. I'm currently using multiple-initiator zoning, where I group all of my HBA initiators and storage controller ports into one large zone. I have been reading documents stating that the best practice for VMware is to configure single-initiator zoning. I have already created some running VMs in this infrastructure.

My question is: can I re-configure the zoning from multiple-initiator to single-initiator zoning without destroying or deleting the existing datastores, which already host some VMs? My goal is to re-configure to single-initiator zones without any changes to my current datastores, meaning that once the single-initiator zones are configured, I can still see my datastores and all the VMs on them.

kermic
Expert

Welcome to communities!

If the hosts are able to see the same LUNs and paths after zone re-config, then everything should be OK. They will see the same datastores and content.

If your FC switch does not allow you to have the same WWNs in multiple zones at the same time (it should, but it's worth checking), then some storage access downtime is expected while you reconfigure the zones.

I would definitely reconfigure the zones one host at a time:

1) place one host in maintenance mode (no VMs running) and make a note of the NAA IDs and available paths for the storage devices that will be affected by the zone reconfig (see the sketch below this list for one way to capture this)

2) optionally, on that particular host, unmount the datastore(s) that will be affected by the zone change. DON'T select Delete, as that will destroy the volume metadata.

3) create single-initiator zones for that host. Each zone will probably consist of one HBA port WWN and one storage controller port WWN. Make sure you create a zone for each path (HBA port - array port pair).

4) create a zone group and include all single-initiator zones from the same host-array pair.

5) do a rescan on the host and confirm that you see all devices with the NAA IDs and all paths recorded in step 1)

6) if you unmounted datastores in step 2), mount them back and confirm they're accessible / OK

7) exit maintenance mode

Then move on to the next host. I'd probably do this during non-peak hours, just in case.
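In case it helps, here is a rough pyVmomi (Python) sketch of steps 1) and 5): recording the NAA IDs and path counts for one host before the zone change, then rescanning and comparing afterwards. The vCenter address, credentials and host name are placeholders for your own environment, and this is just one way to do it (the vSphere Client or esxcli will show you the same information).

```python
# Rough sketch only - addresses, credentials and host names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab shortcut; use proper certs in production
si = SmartConnect(host="vcenter.example.local",   # placeholder vCenter / ESXi address
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)

def device_inventory(host):
    """Return {naa_id: path_count} for all SCSI devices seen by the host."""
    ss = host.configManager.storageSystem
    # canonicalName holds the naa.* identifier of each device
    names = {lun.key: lun.canonicalName for lun in ss.storageDeviceInfo.scsiLun}
    return {names[mp.lun]: len(mp.path)
            for mp in ss.storageDeviceInfo.multipathInfo.lun
            if mp.lun in names}

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.local")  # placeholder host name

before = device_inventory(host)                   # step 1: record NAA IDs and path counts

# ... reconfigure the zones for this host on the FC switch ...

host.configManager.storageSystem.RescanAllHba()   # step 5: rescan HBAs and VMFS
host.configManager.storageSystem.RescanVmfs()
after = device_inventory(host)

for naa, count in sorted(before.items()):
    print(naa, "paths before:", count, "after:", after.get(naa, 0))

Disconnect(si)
```

In practice you would probably run the "before" and "after" inventories as two separate runs rather than one script waiting across the switch change, but the idea of comparing the recorded NAA IDs and path counts is the same.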

Zoning is done at the FC switch level; you should not need to reconfigure anything on the array side.

If you can, take a backup of your FC switch config before changing anything.

If you're not an expert on configuring FC switches, get one in.

hope this helps

WBR

Imants

khincung
Contributor

Thanks, kermic, for the reply. Appreciate your help! I will try to re-configure to single-initiator zoning tomorrow.

I would probably prefer to unmount the datastore first, re-configure the zoning, and then mount the datastore back without any fuss. But I'm worried about the content of the datastore once I mount it back. Will it cause any problems if the LUN presented to the ESXi host arrives over a different path after I re-configure the zoning? And what about the other server whose zone I have yet to re-configure: would it see any downtime on the datastore, or fail to see the datastore because of the different path? Currently, I have 8 paths to each LUN (2 LUNs in total). After the single-initiator configuration, correct me if I'm wrong, I will have 4 paths to each LUN (2 HBAs on each server and 2 targets on each node; I have a total of 2 nodes and 2 ESXi servers).
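For what it's worth, here is a quick sanity check on that path arithmetic, assuming the layout described above (2 HBA ports per host, 2 target ports per controller node, 2 nodes). Single-initiator zoning by itself doesn't reduce the path count: as long as each HBA is zoned to every target port (one small zone per HBA/target pair, as in step 3 above), each host should still see 8 paths per LUN; the count would only drop to 4 if each HBA ended up zoned to a subset of the target ports.

```python
# Back-of-the-envelope path math, assuming 2 HBA ports per host and
# 2 target ports per controller node x 2 nodes = 4 target ports in total.
hba_ports_per_host = 2
target_ports_total = 2 * 2

# Paths per LUN = HBA ports x target ports each HBA can reach through zoning.
paths_if_zoned_to_all_targets = hba_ports_per_host * target_ports_total  # 2 * 4 = 8
paths_if_zoned_to_one_port_per_node = hba_ports_per_host * 2             # 2 * 2 = 4

print(paths_if_zoned_to_all_targets, paths_if_zoned_to_one_port_per_node)  # 8 4
```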

I'm still pretty new to shared storage and zoning. Still learning here.

kermic
Expert

If you are not changing anything on the array and are not doing anything with the VMFS partitions on the LUNs, you should be on the safe side.

Even if the host detects a permanent device loss and then at some point re-discovers the SCSI device, if a VMFS partition is found there, the host will try to mount and use it without changing any metadata, unless you explicitly request it to.

After the zone change and a re-scan, the host should detect the paths and configure multipathing automatically.

Dangerous things that can cause data loss and should be avoided:

- deleting the datastore from the host (unmounting, as in step 2 above, is the safe alternative; see the sketch after this list)

- changing LUN properties on array or deleting the LUN on array

- changing LUN presentation on array

- changing datastore properties from the host (e.g. resignaturing)
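To illustrate the unmount-versus-delete distinction from steps 2) and 6), here is a minimal pyVmomi sketch of the safe operation: unmounting a VMFS volume by its UUID and mounting it back later, which leaves the volume metadata untouched. It assumes a connected HostSystem object ("host") obtained as in the earlier sketch, and "Datastore01" is a placeholder datastore name.

```python
# Minimal sketch; "host" is a pyVmomi HostSystem object as in the earlier example,
# and "Datastore01" is a placeholder datastore name.
from pyVmomi import vim

def unmount_vmfs(host, datastore_name):
    """Unmount (not delete) a VMFS datastore on this host; returns its volume UUID."""
    for ds in host.datastore:
        if ds.name == datastore_name and isinstance(ds.info, vim.host.VmfsDatastoreInfo):
            uuid = ds.info.vmfs.uuid
            host.configManager.storageSystem.UnmountVmfsVolume(uuid)
            return uuid
    raise ValueError("VMFS datastore not found: " + datastore_name)

def mount_vmfs(host, vmfs_uuid):
    """Mount the volume back after the zone change and rescan."""
    host.configManager.storageSystem.MountVmfsVolume(vmfs_uuid)

# uuid = unmount_vmfs(host, "Datastore01")
# ... reconfigure zones, rescan (RescanAllHba / RescanVmfs) ...
# mount_vmfs(host, uuid)
```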

And another thought: if you have just 2 hosts, multiple-initiator zoning should not cause you too much trouble. The main reason VMware recommends single-initiator zoning is that an RSCN/LIP event (someone entering or leaving the fabric, e.g. a host being shut down or booted up) might temporarily freeze I/O for other initiators within the same zone. If you're not rebooting your hosts every day, that should rarely be an issue even with multi-initiator zoning. If you are going to add more hosts in the near future, then building it up correctly from the very beginning is not a bad idea.

I might not know all the details of your environment, but from what you've shared, the zone config change sounds fairly simple and safe. However, storage operations are always risky because things can be messed up very quickly. If you're not a storage / ESXi / FC expert, check whether you can get one to hold your hand or even do this for you. If you have just 2 hosts, the whole procedure should not take more than 20 minutes (including the time you spend searching for your FC switch password :))

WBR

Imants

khincung
Contributor

I've been thinking about this as well. I have a pretty small environment (2 hosts, 1 shared storage array and 1 SAN switch, on vSphere Essentials Plus, which supports up to 3 hosts).

I sought advice from a storage consultant on this matter, and they suggested configuring multiple-initiator zoning since I have only one SAN switch. I'm not sure which recommendation to follow, since VMware's best practice is single-initiator zoning. What would you suggest for my environment? We might add another SAN switch in the near future.

kermic
Expert

It depends...

Theoretically, the risk of fabric problems related to multi-initiator zoning in a 3-host / 1-array environment is fairly small, so you might go with the concept of "don't fix it if it ain't broken". Just as an example: I have 16 hosts here with multi-initiator zoning, and 14 of them are restarted regularly since this is a lab environment. We have 2 FC switches and 2 FC arrays. I've never had any fabric issues because of this (however, I can't promise that you never will).

If you're getting some pressure from above or are just losing sleep because you haven't followed the best practices, then go ahead and reconfigure the zones. This will minimize the risk of an I/O freeze due to RSCN/LIP events, but will increase complexity and management overhead a bit.

hope this helps

WBR

Imants

chriswahl
Virtuoso

"I'm not sure which recommendation to follow, since VMware's best practice is single-initiator zoning."

Single-initiator / single-target zoning is a recommended practice because it scales much more easily, but multi-initiator zoning is also acceptable for arrays and fabrics that support it. As kermic mentioned, at your size it really should not be an issue for performance.

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
khincung
Contributor

Since multiple-initiator zoning isn't causing any problems in my current environment, I will probably leave it as it is, since all the VMs are running at the moment. I would not want to spend much time troubleshooting after a re-configuration when things are working fine so far.

Many thanks for the help! :D
