perceptiv
Contributor

How to manage storage spread over multiple floors

Good morning,

I have a challenge for which I don't have an out-of-the-box solution.

My customer is moving to vSphere 6.0 (on the way to 6.5) and is rapidly converting their legacy Unix (Solaris, HP-UX, what have you) applications to x64, deploying them on VMware in RHEL guests.

Situation:

  • Before virtualization they designed their applications (running on Solaris/HP-UX/what have you) to store the same data on two computer floors in the same building, using LVM-like technologies.
  • In their migration path they have adopted that same strategy for the new guests using Red Hat's LVM, by giving each guest a multiple of two VMDKs; the odd disks live on floor 1, the even disks on floor 2.
  • The guest OS uses LVM to mirror the floor-1 disks to the floor-2 disks (see the sketch after this list).
  • All ESXi hosts see a datastore from each floor, say DS1-FL1 and DS1-FL2.
  • All ESXi hosts are in a single cluster with HA enabled.
  • Some ESXi hosts run on floor 1 and others on floor 2, but all talk to the storage on both floors.
  • There is enterprise-class storage (HP P9500), but no CA (Continuous Access) licenses and no software-defined LUNs -> these will not be bought either, as the storage will be replaced.
  • vSAN is not an option; management does not trust the technology (their loss).
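
For reference, a minimal sketch of that per-guest mirroring, assuming the floor-1 VMDK shows up inside the guest as /dev/sdb and the floor-2 VMDK as /dev/sdc (device names and sizes are illustrative):

    # Label both VMDKs as LVM physical volumes (one mirror leg per floor)
    pvcreate /dev/sdb /dev/sdc

    # One volume group spanning both floors
    vgcreate vg_data /dev/sdb /dev/sdc

    # RAID1 logical volume with one leg on each floor
    lvcreate --type raid1 -m 1 -L 100G -n lv_data vg_data /dev/sdb /dev/sdc

    # Put a filesystem on it as usual
    mkfs.xfs /dev/vg_data/lv_data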

Now,

  • If a host on floor 1 fails, the guests restart on a host on floor 2 without issue.
  • If an entire storage box fails, LVM switches over to the surviving vDisk after about 120 seconds (at least on HP-UX and Solaris; not tested yet on Red Hat, see the sketch after this list).
  • If an entire floor goes down, HA will attempt to restart the guests, but will fail because the VMDKs on the offline floor cannot be found.
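
Since the takeover behaviour has not been verified on Red Hat yet, here is a sketch of what I plan to use there to watch and tune it. raid_fault_policy and the lvs health columns are standard LVM; the volume names match the sketch above, and "warn" is just one of the possible policy values:

    # Show per-leg sync state and health of the mirrored LV
    lvs -a -o name,copy_percent,health_status,devices vg_data

    # /etc/lvm/lvm.conf: on a leg failure, keep running degraded
    # instead of auto-allocating a replacement leg on the surviving floor
    activation {
        raid_fault_policy = "warn"
    }

    # Allow (re)activation with a missing leg, e.g. after a floor outage
    lvchange --activationmode degraded -ay vg_data/lv_data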

Sadly, I have no say in the storage product that will replace the current array. For that reason I'm looking for a software solution that works regardless of the chosen storage product.

Any suggestions? Let me know if any of you out there need more information.
