VMware Cloud Community
admin
Immortal

EqualLogic and ESX - Volume Management Nightmare!

4 EqualLogic member arrays in a group, 22 ESX hosts, 41 ESX SAN volumes.

The problem: it's a nightmare to manage. What we have been doing is adding the iSCSI initiator name of each ESX host to each ESX volume on the SAN, to ensure that no other iSCSI host will ever see the ESX SAN volumes (excluding our backup servers, of course). I'm sure you can all imagine how much work is involved in adding an additional host and mapping it to all available volumes, and likewise in adding a new volume to the SAN and having to map each host to it. EqualLogic really needs a copy function where you can copy the access rights of one volume to another and make modifications as necessary. It's something like 12 clicks per volume to add a host, multiplied by 22 hosts, and we are growing constantly.

We have tried using CHAP with a single CHAP user per volume, and that simplifies things considerably. Note, however, that you must use the iSCSI discovery filter; otherwise every host that maps to any other volume on the SAN can see the volumes. They obviously cannot connect without the proper CHAP credentials, but they still show up as available targets, and with 41 SAN volumes it is a bit confusing to see all those targets listed and have to scroll through to find the correct one.

The downside to CHAP, however, is that if you need to pull a host off a specific LUN for any reason, you can't simply remove the CHAP credentials from the ESX host or you lose access to all LUNs. You would need to add multiple CHAP accounts on the SAN, one per individual volume, but ESX does not appear to accept multiple CHAP accounts.

So, with that being said, I am interested to hear how other organizations are handling volume management with their EqualLogic arrays. Are you mapping based on IP, an IP wildcard such as 172.16.121.*, initiator name, CHAP, etc.? Do you need to limit specific LUNs to some hosts while allowing others, a case where CHAP or an entire IP range assignment is not feasible? Or do you just tolerate the pain of adding a new host to 15, 20, or 50+ LUNs and consider it to come with the territory?

Please let me know if there are other ideas out there that we have not thought about.

PS: we have tried creating an IP range specific to our ESX hosts for mapping VMFS volumes to the SAN, but it has not worked any better than CHAP for us. The same limitation applies: you cannot pull a single host from X number of LUNs if you ever need to for a particular reason.

8 Replies
AndreTheGiant
Immortal

Usually I use an IP wildcard.

But I do not have a configuration as large as yours.

In your case I think a CLI procedure to build your EqualLogic volumes could be a good solution, so you can apply custom access rules.
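For example, a rough sketch that drives the Group Manager CLI over SSH from PowerShell (using plink.exe from PuTTY). The group address, credentials, volume name and ACL values are placeholders, and the exact CLI keywords should be checked against the CLI Reference for your firmware:

```powershell
# Hedged sketch: create a volume and apply an IP-wildcard access rule via the
# EqualLogic Group Manager CLI. All names/addresses/credentials are examples.
$groupIp = "172.16.121.10"
$grpUser = "grpadmin"
$grpPass = "changeme"

# Group Manager CLI commands; verify the syntax against your firmware's CLI Reference Guide.
$commands = @(
    "volume create ESX-VOL-42 500GB pool default thin-provision",
    "volume select ESX-VOL-42 access create ipaddress 172.16.121.*",   # IP wildcard rule
    "volume select ESX-VOL-42 multihost-access enable"                 # shared VMFS access
)

foreach ($cmd in $commands) {
    # plink.exe runs a single command over SSH against the group and returns its output.
    & plink.exe -ssh "$grpUser@$groupIp" -pw $grpPass $cmd
}
```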

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
Johnnyk1
Contributor

How do you feel about skipping the complexity of CHAP in a closed environment where all the hosts share the same LUNs and are on a separate iSCSI VLAN?

BenConrad
Expert

"The problem: its a nightmare in management. What we have been doing is adding the iscsi iniator name of each esx host to each esx volume on the SAN. We have been doing this to ensure that no other iscsi host will ever see the esx san volumes (excluding our backup servers of course). Im sure you can all imagine how much work is involved in adding an additional host and mapping it to all available volumes. Likewise when adding a new volume to the SAN and having to map each host to that volume. Equallogic really needs a copy function where you can copy the access rights of one volume to another and make modifications as necessary. Its like 12 clicks per volume to add a host multiplied by 22 hosts. And we are growing constantly."

This is a huge PITA, but I think you are on the right track based on your other comments (CHAP, wildcard, etc.); you just need some automation, and you can choose between IP and initiator name. I hide all volumes from a given host (Windows/ESX) unless it specifically needs access to those volumes.

  • Adding a new volume and putting ACLs on it: maintain a list (or get one via PowerCLI) of all the initiators. You can use 'show volume TEST-VOL1' to get an ugly output that includes the access lists (initiator or IP). Create the new volume and use the CLI to create the access lists (see the sketch after this list). You can only have 16 IP or 16 initiator entries per volume, so that limits how many hosts (8 hosts with dual HBAs) can access a volume.

  • As for adding a new host to existing volumes, you will need to maintain a list (or get one via PowerCLI) of targets for a given group of hosts and then use the CLI to add ACLs to those volumes.

  • We have groups of ESX servers that mount similar volumes; the size of each group is limited by the number of ACLs we can have on a given volume.
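A minimal sketch of that automation, assuming you already have the initiator IQNs in a text file (the volume name and file path are placeholders). The output is a set of Group Manager CLI commands you can paste into a CLI session or push over SSH:

```powershell
# Hedged sketch: given a list of initiator IQNs (one per line), emit the Group Manager
# CLI commands that add each initiator to a new volume's access list.
$volume  = "ESX-VOL-42"
$maxAcls = 16                                      # EqualLogic per-volume ACL entry limit
$iqns    = Get-Content "C:\scripts\esx-iqns.txt"   # one initiator IQN per line (placeholder path)

if ($iqns.Count -gt $maxAcls) {
    Write-Warning "$($iqns.Count) initiators exceed the $maxAcls-entry ACL limit on $volume."
}

# One 'access create' command per initiator; paste into a CLI session or push with plink/ssh.
$iqns | ForEach-Object {
    "volume select $volume access create initiator $_"
}
```

The same loop works for IP-based rules by swapping `initiator $_` for `ipaddress $_` and feeding it host iSCSI addresses instead of IQNs.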

EqualLogic will have a more 'object oriented' approach to these volume operations in the near future (probably the next few months)... I've said too much. :)

Ben

sbarnhart
Enthusiast

I hate to say it, but it sounds like you need to re-think your ESX & volume structure.

22 hosts and 41 volumes? There has to be a way to simplify that.

I know that EqualLogic preaches "simple" VMFS (boot only, no data) with data volumes directly on the SAN (for Windows iSCSI mounts or VMware raw disks). They do this because many of the EQL features (snapshots, and to some extent replication) make little sense if the volume is abstracted a second time via VMFS.

IMHO, this leads to a lot of fragmentation on the SAN and a substantial volume management headache, not to mention lost space. Very large server volumes (e.g., a data warehouse or a massive fileshare dump) make sense as native SAN volumes, but it usually seems simpler from a management perspective to keep fewer, larger VMFS volumes with data and boot disks stored together.

With fewer, larger VMFS volumes and few-to-no native data SAN volumes, you might be able to consolidate your hosts into clusters and limit access to a group of VMFS volumes per cluster.
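As a starting point for carving out those per-cluster groups, a minimal PowerCLI sketch (the vCenter and cluster names are placeholders) that lists which VMFS datastores each host in a cluster currently sees:

```powershell
# Hedged sketch: show the VMFS datastores visible to each host in a cluster,
# to help plan per-cluster volume groups. vCenter and cluster names are examples.
Connect-VIServer -Server "vcenter.example.local"

Get-Cluster -Name "Prod-Cluster-01" | Get-VMHost | ForEach-Object {
    $names = Get-Datastore -VMHost $_ |
        Where-Object { $_.Type -eq "VMFS" } |
        Sort-Object Name |
        ForEach-Object { $_.Name }
    # One line per host: hostname followed by its VMFS datastores.
    "{0}: {1}" -f $_.Name, ($names -join ", ")
}
```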

BenConrad
Expert

It depends on the size of the environment and the size of the disks. It's easy to put 50 GB VMDKs inside 2 TB VMFS volumes; it's not easy to start fitting lots of 600 GB VMDKs into 2 TB volumes. You end up with VMFS filesystems that have large chunks of stranded free space, whereas with RDMs you have fine-grained control over the size of each disk, at the expense of more targets to manage. Also, Storage vMotion with a 600 GB file is a dog; it's much easier to online-migrate an RDM from Pool-A to Pool-B on the EqualLogic side.

I've got several clusters with about 30 targets per HBA, a combination of RDM and VMFS. I have to balance this against the QLogic limit of 64 targets per HBA and the maximum number of iSCSI sessions in an EqualLogic pool (the latest firmware allows 512 per pool).
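As a rough illustration of that balancing act (the host, HBA and volume counts below are purely illustrative numbers, not a real environment):

```powershell
# Hedged back-of-the-envelope: estimated iSCSI sessions against the pool limit.
# Substitute your own host, HBA and volume counts; these are made-up inputs.
$hostsPerCluster  = 8     # ESX hosts sharing the same set of volumes
$hbasPerHost      = 2     # dual QLogic HBAs
$targetsPerHba    = 30    # volumes each HBA logs into
$poolSessionLimit = 512   # per-pool session limit quoted for the latest firmware

$sessions = $hostsPerCluster * $hbasPerHost * $targetsPerHba
Write-Host "Estimated sessions in the pool: $sessions (limit $poolSessionLimit)"

if ($targetsPerHba -gt 64)            { Write-Warning "Exceeds the 64-targets-per-HBA QLogic limit." }
if ($sessions -gt $poolSessionLimit)  { Write-Warning "Exceeds the per-pool iSCSI session limit." }
```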

Ben

sbarnhart
Enthusiast

Why limit yourself to a 2TB VMFS volume? Why not use extents and create a larger VMFS volume?

BenConrad
Expert

Good question. For years extents were considered second-class citizens, but that is no longer the case in vSphere 4.x, which is good news. I've been considering creating 4-8 TB volumes, but due to lack of space, and because we need to move certain large disks between spindle speeds (EqualLogic pools) a few times a year (which would mean Storage vMotion if they were VMDKs), it has not been a good choice for us.

Ben

admin
Immortal

Thanks for the suggestions around PowerCLI. I had never thought about that, and it may just work to simplify matters.

To answer one of the questions on volumes vs. hosts: we are running around 800 VMs in this environment, and we are pursuing a stronger VDI presence. We have a mix of 300 GB and 500 GB volumes, and we have had to add more to prevent SCSI reservation wait times on the volumes. This keeps us at an average of 15-30 VMs per volume, depending on disk I/O profile and space. We avoid RDMs, instead using in-guest iSCSI connections. This gives us the flexibility of migrating volumes from pool to pool on the SAN side, while still allowing them to be attached to or from physical servers should the need arise. Some of this also stems from the fact that we migrated a number of our older SQL and file servers to VMs and had moved their data to iSCSI on the physical hosts prior to converting.
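If it helps anyone doing the same balancing, here is a rough PowerCLI sketch of how we might check VM counts per volume (the vCenter name and the "ESX-VOL*" datastore filter are placeholders):

```powershell
# Hedged sketch: VMs per VMFS datastore, to spot volumes drifting past the
# 15-30 VMs-per-volume target. vCenter name and datastore name filter are examples.
Connect-VIServer -Server "vcenter.example.local"

Get-Datastore -Name "ESX-VOL*" |
    Select-Object Name,
        @{ N = "VMCount"; E = { (Get-VM -Datastore $_ | Measure-Object).Count } },
        @{ N = "FreeGB";  E = { [math]::Round($_.FreeSpaceMB / 1024, 0) } } |
    Sort-Object VMCount -Descending |
    Format-Table -AutoSize
```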
