VMware Cloud Community
SRT65
Enthusiast

iSCSI sharing between two separately managed clusters

Hi,

Just after a bit of advice regarding sharing iSCSI LUNs between ESXi clusters (separate vCenters). This is a very basic overview of my setup:

+--------------------------------------+       +--------------------------------------+
| VCENTER1                             |       | VCENTER2                             |
| +----------+ +---------+ +---------+ |       | +----------+ +---------+ +---------+ |
| | ESXI1    | | ESXI2   | | ESXI3   | |       | | ESXI4    | | ESXI5   | | ESXI6   | |
| +-----|----+ +----|----+ +----|----+ |       | +-----|----+ +----|----+ +----|----+ |
+-------|-----------|-----------|------+       +-------|-----------|-----------|------+
        |           |           |                      |           |           |
        +-----------+-----------+----------------------+-----------+-----------+
        |
+-------|------------------------------+
| QNAP ISCSI target                    |
| +----------+ +---------+ +---------+ |
| | LUN_0    | | LUN_1   | | LUN_2   | |
| | VMFS1    | | VMFS2   | | VMFS3   | |
| +----------+ +---------+ +---------+ |
+--------------------------------------+

My intention is for hosts ESXI1/2/3 (i.e. the VCENTER1 cluster) to share LUN_0, and for hosts ESXI4/5/6 (i.e. the VCENTER2 cluster) to share LUN_1 & LUN_2.

My concern is that all 6 hosts automatically attach all 3 LUNs (and mount the associated VMFS volumes), so that the 3 VMFS volumes appear in both vCenter Server inventories. This is not a problem for me in itself, as none of the VMs on vCenter1 will ever use the datastores VMFS2 or VMFS3, and none of the VMs on vCenter2 will use the datastore VMFS1.

However, I am concerned that there may be performance, file-locking or possible data-corruption issues from even allowing two separately managed clusters to have 'access' to the same LUNs/datastores concurrently, regardless of whether they are 'used' or not.

I have tried detaching the disk devices on the relevant hosts, e.g. detaching the device associated with LUN_0 on ESXi4/5/6. While this looks like it stops those hosts from accessing LUN_0, it permanently flags the VMFS1 datastore on vCenter2 in an error state as "(inaccessible)", which is not desirable. I'm also not sure whether the detach is persistent across host reboots or whether the device may unintentionally get reattached.
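For reference, the detach can also be done and checked from the ESXi shell; as far as I understand it, a device detached this way is recorded as off and should stay off across reboots (the NAA ID below is just a placeholder for the real device ID):

```shell
# Find the NAA ID of the QNAP LUN to detach (ID below is a placeholder)
esxcli storage core device list | grep -B1 -i qnap

# Detach the device (same as the "Detach" action in the vSphere Client)
esxcli storage core device set -d naa.6001405xxxxxxxxxxxxxxxxxxxxxxxxx --state=off

# Verify the state - "Status: off" indicates the device is detached
esxcli storage core device list -d naa.6001405xxxxxxxxxxxxxxxxxxxxxxxxx | grep -i status
```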

So is it safe to just allow all 6 hosts to have all 3 datastores mounted? And could there be possible locking or performance issues resulting from this?

Cheers,

Steve.

5 Replies
a_p_
Leadership

Although I don't expect any technical issues with presenting all LUNs to all hosts, you may want to re-configure the storage system and present the different LUNs only to those hosts which are supposed to see them.


André

Lalegre
Virtuoso

This is happening because of how you configured your storage system. You probably allowed all the initiators (or all the ESXi iSCSI IPs) to access all the LUNs.

So go ahead and make that change, but first unmount the VMFS datastores from the ESXi hosts. Hosts are really sensitive when they lose all paths to a storage device, and sometimes the whole system gets affected.

SRT65
Enthusiast

Hi André

Thank you for your quick reply.

I did look at using two iSCSI targets on the QNAP and mapping the LUNs as appropriate. From what I could work out, though, this means I would have to use multiple static iSCSI discovery entries and specify the target and IP path to be used on each host, rather than using dynamic discovery.

I ran into an issue early on whereby hosts would quite often freeze for about 15 minutes towards the end of the ESXi boot process on "Loading esxadapter..." (or something like that) and then continue as normal with no apparent problems (other than taking 20 minutes to reboot a host). At the time I stumbled across a post somewhere that suggested it could be an iSCSI issue. I switched back to dynamic iSCSI discovery and have not seen the issue since. I have not tinkered with the discovery settings again to confirm this was the cause, as I don't want to break it again.

I might see if I can try static discovery again on a test host to see if it is causing my long boot issue.

For the time being, though, I think I will just leave the production system as-is, if it's safe to do so, until I can confirm for sure that static discovery is reliable in our environment.
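In case it helps anyone trying the same test, switching a host to static discovery can be done per host from the ESXi shell. The adapter name, target address and IQN below are placeholders for whatever your own setup uses:

```shell
# Identify the software iSCSI adapter name (typically vmhba6x)
esxcli iscsi adapter list

# Remove the dynamic ("send targets") discovery entry
esxcli iscsi adapter discovery sendtarget remove -A vmhba64 -a 192.168.10.50:3260

# Add a static entry pointing at one specific QNAP target IQN
esxcli iscsi adapter discovery statictarget add -A vmhba64 \
    -a 192.168.10.50:3260 -n iqn.2004-04.com.qnap:ts-879:iscsi.target1.xxxxxx

# Rescan so the host picks up the change
esxcli storage core adapter rescan --adapter=vmhba64
```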

a_p_
Leadership

I do not know exactly how QNAP works, but usually you have the same discovery addresses for a storage system.

The way to present the different LUNs to the ESXi hosts is to define initiators (the ESXi hosts) on the storage system by either their IP addresses or their iSCSI IQNs. Each LUN is then presented only to those initiators which are supposed to see it.

André
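To build that per-LUN initiator list, you need each host's initiator IQN. A quick way to read it off an ESXi host (adapter name below is a placeholder):

```shell
# The UID column of the adapter list is the host's initiator IQN,
# e.g. iqn.1998-01.com.vmware:esxi1-xxxxxxxx
esxcli iscsi adapter list

# Or show it explicitly for the software iSCSI adapter
esxcli iscsi adapter get -A vmhba64 | grep -i name
```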

SRT65
Enthusiast

Thanks André

Just in case anyone else is looking for the solution on a QNAP device.

Initially I looked at having multiple targets, allowing connections only from specified IQNs, and mapping LUNs to targets. This seemed to work, however there was a big problem: on the QNAP you cannot modify a target's allowed-IQN list while any devices are connected to that target, which is rather impractical in a production environment.

Following on from your suggestion, I did a bit more digging around and found the required function under "iSCSI ACL" on the QNAP device. This allows you to set RO, RW or Deny permissions for individual IQNs on a per-LUN basis. It seems to hide the "denied" LUNs from the ESXi hosts, and it can be modified while an ESXi host is connected to the target.
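After changing the ACL on the QNAP, the quickest way I found to confirm the denied LUNs really disappear from a host is a rescan from the ESXi shell:

```shell
# Rescan all storage adapters so the host re-reads the target's LUN list
esxcli storage core adapter rescan --all

# The denied LUNs should no longer appear in the device list
esxcli storage core device list | grep -i "display name"
```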