VMware Cloud Community
AndyMcM
Enthusiast

RHEL 5 & GFS

Hi There,

I am trying to set up three RHEL virtual machines to access the same data using GFS and clustering, and I am running into a few problems.

Each machine can access the volumes and mount the drives without any trouble.

My problem is with the cluster part of it: I am not sure how, or even whether, I should set up fencing and quorum.

Has anybody got this working? Could someone please help me out?

I am using the Conga framework to run it all as well.

Cheers

A.

michaelrch
Contributor

Hi

I have a customer who is looking to do this as well. He does not yet have Virtual Infrastructure in place and is concerned that, using VMware, he won't be able to do the Linux clustering he wants to do. In particular, he is concerned about fencing out members of the cluster that have gone wrong...

Is this possible under VMware with a standard VI setup on 2 hosts and Linux VMs residing on each host?

Any advice would be really appreciated.

Many thanks.

Michael

kjb007
Immortal

Yes, it is possible; I am using it right now. I ran into issues with multicast, but worked through those. The fencing agent is another matter. The current VMware fencing agent treats ESX hosts as stand-alone environments and the virtual machines as static on those hosts, so as long as you are not using DRS/HA, you're fine. To set up GFS you have to use a physical mode RDM, so DRS won't be used anyway, but when VMs are powered on/off they can still move to other hosts.

If you are not using those features, then you are fine; otherwise, you have to use a modified fencing agent that is VI-aware and can kill nodes through the VI API and vCenter instead of directly from the host.
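
To give an idea of where this lands, the fencing stanza in /etc/cluster/cluster.conf ends up looking something like the sketch below. Treat it as illustrative only: the host name, credentials, and VM names are placeholders, and the exact fence_vmware parameters vary between agent versions. On the quorum question: with three nodes at one vote each, cman reaches quorum at two votes, so no quorum disk is needed.

    <cluster name="gfscluster" config_version="1">
      <clusternodes>
        <clusternode name="node1" nodeid="1" votes="1">
          <fence>
            <method name="1">
              <!-- placeholder: vmname must match the VM's name on the ESX host -->
              <device name="esx1-fence" vmname="rhel-node1"/>
            </method>
          </fence>
        </clusternode>
        <!-- node2 and node3 follow the same pattern -->
      </clusternodes>
      <fencedevices>
        <!-- placeholder host and credentials; check your agent version's parameters -->
        <fencedevice agent="fence_vmware" name="esx1-fence"
                     ipaddr="esx1.example.com" login="fenceuser" passwd="secret"/>
      </fencedevices>
    </cluster>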

-KjB

Texiwill
Leadership

Hello,

Could you provide exact steps for setting up the virtual hardware and the cluster within the virtual hardware? Including the solution to your multicast problems? This question comes up quite a bit.


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.

Blue Gears and SearchVMware Pro Blogs: http://www.astroarch.com/wiki/index.php/Blog_Roll

Top Virtualization Security Links: http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links

kjb007
Immortal

The steps for setting up the virtual machines are almost identical to those for an MSCS configuration.

Create the virtual machines.

Create a pass-through (physical mode) RDM for the cluster nodes using vmkfstools (see the sketch after these steps).

Once the RDM file is created, add it to the cluster node machines.

Verify LUN mapping from the virtual machines.

After the LUN mapping checks out, create the cluster and note your cluster multicast IP.
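
As a rough sketch of the RDM step (the vml identifier and datastore path below are placeholders; list /vmfs/devices/disks/ to find your own LUN), from the ESX service console:

    # list the raw LUNs to find the vml identifier of the shared LUN
    ls -l /vmfs/devices/disks/

    # create a physical (pass-through) RDM pointer file on a shared VMFS volume
    vmkfstools -z /vmfs/devices/disks/vml.0200000000600508b4000139d5 \
               /vmfs/volumes/shared_vmfs/gfs/gfs-rdm.vmdk

Then add that vmdk to each node on its own SCSI controller (e.g. SCSI 1:0) with the controller's bus sharing set to physical, and confirm inside each guest (fdisk -l) that the LUN shows up.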

Multicast is where things get hung up. In a Cisco environment, it was almost impossible to let the Cisco devices figure things out on their own, so I created a static entry for each switch port that is part of the multicast group. That was easier than trying to set up a multicast router, and it kept the change localized to the cluster at hand. The one thing to watch out for: if you have switches in a redundant setup, make sure you also include the ports that connect the switches (whether trunked, ISL, or port-channeled). Otherwise the cluster nodes will not all be able to recognize each other, and you'll get mixed results when trying to query your cluster vote.
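
For illustration only (the VLAN, ports, and group address are made up, and the exact syntax depends on the switch model and IOS version), the static entries look something like the following. The cluster multicast IP maps to an 01:00:5e MAC built from the low 23 bits of the group address, e.g. 239.1.1.5 becomes 0100.5e01.0105:

    ! map the cluster's multicast MAC to every node-facing port,
    ! plus the trunk that connects the redundant switches
    mac-address-table static 0100.5e01.0105 vlan 100 interface GigabitEthernet0/1 GigabitEthernet0/2 GigabitEthernet0/24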

Once that issue is resolved, it's on to fencing, which has been interesting as well. From what I've seen, the RHEL fencing agents for VMware treat ESX hosts as stand-alone entities and hadn't yet been updated to talk to VI. This appears to be changing, and there are solutions on the net that use the VI Perl Toolkit to "shoot VMs in the head" in case of errors. This is relatively new, and it gets us away from using customized fencing agents, which, of course, are not officially supported by Red Hat.
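
The VI Perl Toolkit approach boils down to something like the sketch below; the server, credentials, VM name, and install path are all placeholders (the toolkit's sample scripts land in different places depending on the install):

    #!/bin/bash
    # Sketch of what a VI-aware fence action amounts to: power off the failed
    # node via VirtualCenter using the VI Perl Toolkit's vmcontrol.pl sample.
    # Server, credentials, VM name, and script path are placeholders.
    /usr/lib/vmware-viperl/apps/vm/vmcontrol.pl \
        --server vcenter.example.com \
        --username fenceuser \
        --password 'secret' \
        --vmname rhel-node2 \
        --operation poweroff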

-KjB

kjb007
Immortal

This doc was a great help in setting up the multicast. http://www.cisco.com/application/pdf/paws/68131/cat_multicast_prob.pdf

-KjB

michaelrch
Contributor

Hi there

Thanks for all your help.

One restriction I have noticed in some documentation is that using an RDM volume in physical mode will not work with HA. This could be a problem for me.

I have 2 ESX servers and 1 ESXi server. There are 3 VMs that need to see the shared disk with GFS on it: 2 web servers and a file management server. The file management server processes data (images mainly) and puts it on the storage so that the web servers can then serve it up to web clients.

ESX1 gets "web1"

ESX2 gets "filemgmt"

ESXi gets "web2"

I don't mind if either of the web servers goes down and does not come back with HA, but there is only one file management server, and it needs to auto-restart using HA if the ESX host it is on goes down. It needs to be able to see the GFS volume from its original or an alternate ESX server.

If mounting the GFS volume via RDM will tie filemgmt to one physical server (which I need to avoid), could I consider running an iSCSI initiator within the VM to mount the GFS volume instead? If this is feasible, are there any caveats I need to know about?
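
For concreteness, what I have in mind inside each RHEL 5 guest is the open-iscsi software initiator, along these lines (the portal address and IQN below are made up for the example):

    # install and start the software initiator
    yum install iscsi-initiator-utils
    service iscsi start

    # discover targets on the storage portal and log in
    iscsiadm -m discovery -t sendtargets -p 192.168.10.50
    iscsiadm -m node -T iqn.2008-01.com.example:gfs-lun -p 192.168.10.50 --login

    # the LUN should then appear as a regular /dev/sdX device for GFS to use

That would take the RDM out of the picture entirely, at the cost of pushing the storage traffic through the guest's virtual NIC.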

Many thanks for your help on this.

Michael

JoJoGabor
Expert

Is there a security risk to this RHEL fencing? I don't like the idea that a compromised VM now has access to vCenter, even if that vCenter user is locked down so it can only power-control certain VMs.
