VMware Virtual Appliances Community
VMTN_Admin
Enthusiast

Hercules Load Balancer Virtual Appliance

http://www.vmware.com/vmtn/appliances/directory/300

A tiny but mighty TCP load balancer

186 Replies
Hoppa66
Contributor

Prabhakar,

Thanks for this wonderful little piece of working stuff; I had it working within 2 hours on ESX3, though it needed some changes specific to the ESX3 environment ...

Perhaps you would like to take these changes into consideration for your next deployment of Hercules? I've posted my findings before this one ...

CU

Ken_Cline
Champion

Hoppa66 --

I want to thank you for sharing your knowledge! These forums are a great place where people are able to get answers to some really strange questions - and it's all because of people like you. You've posted only four times, all in this thread, but the quality of information you've shared is very high.

Please stick around and continue to share...we need people like you to maintain the quality of these forums.

KLC

User Moderator

Ken Cline VMware vExpert 2009 VMware Communities User Moderator Blogging at: http://KensVirtualReality.wordpress.com/
prabhakar
Contributor

Thank you very much for trying out Hercules. Glad you have it working. I will certainly incorporate your changes into the next version of Hercules.

thanks

VMSysProg
Contributor

Yet Another Wasted VM Applet

BAN BITTORRENT!!!!

I hope VMware drops all VM Applets that use torrent.

pgomer
Contributor

Has anyone had problems with the virtual machine not responding after a period of inactivity?

pencer
Contributor

Does anyone know if pen is sticky?

I'm going to use it to load balance between two Citrix Access Gateway servers, but a requirement is that the load balancer is sticky.

Any help appreciated.

gguntz
Contributor

Is it recommended to install VMware Tools? I have attempted to install it, but so far no luck. I am running this on ESX 3.0.1. I added a CD-ROM via VirtualCenter and pointed it to the linux.iso inside the VMware Tools directory. I created a /mnt/cdrom directory and then attempted to mount the CD-ROM from the usual places, with no joy. I checked in VirtualCenter, and apparently in version 2 you lose the ability to specify whether the CD-ROM is SCSI or IDE; it is always IDE from what I can tell. I then checked /boot/config-2.6.17.7 and, unless I am misreading the file, there is no IDE support configured.

Do I need to even worry about vmware tools? What is the required method for installing it?

g


gguntz
Contributor

The answer is yes and no. In most scenarios it will remain sticky; however, if a user is logged into an application and the server they were sent to goes down or becomes unavailable, they will be sent to another server in the farm. This situation should be rare, but it can occur. I would think the user would get an error and have to start a new session at that point. An issue like this exists whether you use a purely sticky solution or this one; the only difference may be in how your application handles being transferred to a different server in the farm, instead of the connection just going away as in a traditional sticky solution. I have included the exact language for your reference below.

HTH

g

According to http://siag.nu/pen/

The load balancing algorithm keeps track of clients and will try to send them back to the server they visited the last time. The client table has a number of slots (default 2048, settable through command-line arguments). When the table is full, the least recently used one will be thrown out to make room for the new one.

This is superior to a simple round-robin algorithm, which sends a client that connects repeatedly to different servers. Doing so breaks applications that maintain state between connections in the server, including most modern web applications.

When pen detects that a server is unavailable, it scans for another starting with the server after the most recently used one. That way we get load balancing and "fair" failover for free.

Correctly configured, pen can ensure that a server farm is always available, even when individual servers are brought down for maintenance or reconfiguration. The final single point of failure, pen itself, can be eliminated by running pen on several servers, using vrrp to decide which is active.
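To make the quoted behavior concrete, here is a minimal pen invocation sketch for the Citrix scenario above. The hostnames and port choices are hypothetical, and the flags should be double-checked against the man page for your pen version:

```shell
# Stickiness is pen's default behavior: the client-tracking table maps
# returning clients back to the server they used last time. -c sets the
# table size (default 2048 slots, per the pen docs quoted above).
# Listen on 443 and balance across two (hypothetical) CAG servers:
pen -c 4096 443 cag1.example.com:443 cag2.example.com:443
```

To deliberately turn stickiness off and get plain round robin instead, pen provides a flag for that (`-r` in the versions I have seen); with client tracking disabled, each new connection simply goes to the next server in the list.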

pencer
Contributor

Thanks for that g,

We'll be testing it next week so I'll let people know how it works with Citrix.

Cheers,

Andy.

pencer
Contributor

I'm having the same problem as pgomer.

I have two physical servers, each with two virtual Hercules instances, all listening on a single vrrpd IP.

If I start them up, it all works fine. The LBs all listen on the vrrpd IP, respond, and pass traffic on to the next devices as they should.

If I leave the LBs on but idle, after a while they stop passing traffic on.

If I try to access the vrrpd IP or an LB's IP directly, I get a "page cannot be displayed" error.

If I restart any of them, it works fine again.

KevMc
Contributor

Great piece of software. I've never really been a guru of *nix-based platforms, but it seemed easy enough to set up. I currently have it set up with two NICs and an IP bound to each NIC. I was just wondering if it is possible to have one NIC hold multiple addresses that pen can then load balance?

Also, I haven't been able to find anything that would allow you to drain all connections to one of the balanced servers so you can take it down without affecting the end users.

Many Thanks for your HIA.
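For reference on the drain question: pen ships a companion utility, penctl, that talks to a running pen over a control port. The sketch below is how that is described in the pen documentation; the ports and hostnames are hypothetical, and the exact command names should be verified against the penctl man page for the version bundled with Hercules:

```shell
# Start pen with a control port (-C) so penctl can manage it at runtime:
pen -C 10080 80 web1.example.com web2.example.com

# List the backends; pen numbers servers starting from 0:
penctl localhost:10080 servers

# Manually blacklist server 1 for 3600 seconds so NO new connections are
# sent to it; existing sessions run out, then the box can be taken down:
penctl localhost:10080 server 1 blacklist 3600
```

This is not a true connection drain in the commercial load-balancer sense, but the effect is similar: new traffic stops while established connections are left alone.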

Matt_Ghantous
Contributor

gguntz,

I don't think there is CD-ROM support in the kernel. You could mount the files you need on a different machine and copy them over with scp, or put them on a web server and use "wget".
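A sketch of that workaround, since the Hercules kernel can't mount the ISO itself. The paths and hostnames here are hypothetical examples, not anything shipped with the appliance:

```shell
# On a helper Linux machine that CAN do loopback mounts, mount the
# VMware Tools ISO and push the tarball to the Hercules VM over scp:
mount -o loop /vmimages/tools-isoimages/linux.iso /mnt/cdrom
scp /mnt/cdrom/VMwareTools-*.tar.gz root@hercules-vm:/tmp/

# Or, from the Hercules VM itself, pull the tarball off any web server
# you have staged it on:
wget http://helper.example.com/VMwareTools.tar.gz -O /tmp/VMwareTools.tar.gz
```

Either way, as noted below, actually building the tools modules would still require gcc and kernel headers inside the image, which Hercules does not carry.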

However, vmware-tools will want to compile its modules with gcc which is also not in the Hercules image.

All,

My knowledge is patchy on the subject. I keep reading that the network module from vmware-tools gives "enhanced networking" but cannot find any decent documentation on what exactly the enhancement is. Are we going from 10Mbits to 1Gbit when the new module is installed? Is it worth it to go through all the trouble to get it installed ?

The Hercules NIC is performing wonderfully for my needs, but maybe if the enhancement is large some users will be interested.

Prabhakar,

Did you have any intentions of trying to compile vmware tools into the image?

I'm not sure if ESX uses different VMware Tools than Workstation (I think so).

Also what would be the best way to go about that? Add cdrom support, install gcc and the kernel headers in the image and let users compile their own tools? I fear this may bloat the image. The other option would be to compile the vmware network module into the image with uClibc, which may or may not be an easy task and there are different versions for workstation and ESX.

prabhakar
Contributor

Matt's right. To compile vmware-tools you do need the compiler and friends. You might also need other libraries/.so files for the actual compile. Also, I have never understood exactly what installing vmware-tools would give us in this case, so it may not be worth the hassle.

On a related note, I would like to get all your opinions on something I have been mulling over. I am thinking very seriously about building a new load balancer appliance based on the regular glibc. The busybox/uClibc base mostly works, but it is harder to get going and painful to update. I am thinking in terms of building a stripped-down glibc-based Linux appliance along with a UI for setting it up and managing it, plus alerts, statistics, and reporting. This will of course not fit in a 3MB image. I am thinking along the lines of about 20-25MB, and selling it for about $50-75 with a year's worth of free upgrades and forum support. I would also like to create a VMware appliance and a USB key version of the load balancer.

This will enable me to focus and build an appliance that is really what you guys need. Does this make sense or am I missing something? Will anyone of you be interested in something like this? If you want to contact me offline, you are welcome to email me - prabhakar at chaganti.net.

thanks

Sam999
Contributor

Cool load balancer. How many concurrent connections and/or sessions can Hercules handle?

Sean_Cottrell
Contributor

OK, I am gonna play dumb here. I downloaded Hercules-SCSI-ESX3.zip, extracted the files to a Windows share, and WinSCP'ed them to a new folder on a LUN under /vmfs/volumes. Now what?

I created a new VM via VC, using Other Linux for the OS, called LB. Then I PuTTY'ed to the host, navigated to /vmfs/volumes, and cp'ed the Hercules files over the new VM's files (.vmx, .vmsd, .vmdk).

And of course I tried other things as well.

I can't seem to get this VM to boot. If this is for ESX3, why is there no -flat.vmdk file?

So what are the proper steps to get the Hercules VM into the ESX3 environment? Obviously, I am doing something completely wrong here.

Thanks for the help in advance

Matt_Ghantous
Contributor

Hi Sean,

You can see Hoppa66's post a couple pages back on how to do this.

Basically you need to "import" the vmdk file from workstation format to VMFS format so that ESX can use it.

vmkfstools -i /vmimages/Hercules.vmdk /vmfs/volumes/

After you do this, the easiest thing to do is create a new VM and "use existing virtual disk" that you just "imported".

(FYI: To reverse a VM from VMFS to workstation format you use vmkfstools -e, which stands for export)
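A slightly fuller sketch of that import, end to end. The datastore and folder names below are hypothetical; substitute your own:

```shell
# vmkfstools -i clones a workstation-format disk into VMFS format.
# In my experience it wants an explicit destination file name, not
# just a directory:
vmkfstools -i /vmimages/Hercules.vmdk \
    /vmfs/volumes/datastore1/Hercules/Hercules.vmdk
```

Then, in VirtualCenter, create a new VM and choose "use an existing virtual disk", pointing it at the imported .vmdk. The import also produces the -flat.vmdk file Sean was looking for, since VMFS disks are fully allocated.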

Sean_Cottrell
Contributor

Thank you. I was able to get the VM to boot after doing the import.

One note: after the import, it expanded the new -flat.vmdk to 100MB.

Thanks again.

Sean_Cottrell
Contributor

I performed the steps outlined to set a static IP, restarted the network, and even rebooted the VM. I cannot ping the VM. Any thoughts?

I tried to ping from the Hercules VM and received the following:

PING 172.18.18.1 (172.18.18.1): 56 data bytes

ping: sendto: Network is unreachable.

My config in /etc/network/interfaces looks like this:

iface eth0 inet static
    address 172.18.18.228
    netmask 255.255.255.0
    gateway 172.18.18.1


Matt_Ghantous
Contributor

Yes. VMFS expands the disk to its full size (hence the "flat" in the name). This is mostly to ensure that a 100MB disk is actually guaranteed the full 100MB.

Workstation is not generally considered a production platform, so it is OK there to have the disk expand (and contract) as needed, without the guarantee. You are able to shrink Workstation VMs with VMware Tools, but VMware Tools for ESX has this feature disabled.

Matt_Ghantous
Contributor

Make sure you have the correct gateway and netmask.

Did DHCP work? If it does, what does ifconfig say when you are connected with a DHCP address?

If this is a networking problem it may be in the scope of another forum.
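A quick diagnostic pass along those lines, using the addresses from Sean's post and the standard BusyBox networking tools the appliance should have:

```shell
# Confirm the interface is up and carries the expected address:
ifconfig eth0

# "ping: sendto: Network is unreachable" usually means no matching route;
# check for a 172.18.18.0/24 route and a default gateway entry:
route -n

# If eth0 never came up at boot, bring it up by hand and retest:
ifup eth0
ping -c 3 172.18.18.1
```

One common gotcha with Debian-style interfaces files is a missing "auto eth0" line, which leaves the stanza defined but the interface down after boot; the ifup test above would reveal that.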
