VMware Virtual Appliances Community
VMTN_Admin
Enthusiast

Hercules Load Balancer Virtual Appliance

http://www.vmware.com/vmtn/appliances/directory/300

A tiny but mighty tcp load balancer

186 Replies
aneilsingh
Contributor

Hi, looking for some help here please.

  • Running a single HerculesLB server.

    • IPs would be static

  • Will HerculesLB terminate SSL? Can I install certs on the Hercules so that it communicates with the web servers over plain HTTP on port 80?

    • Can I install multiple SSL certs, terminating for different domains/server farms?

  • How can I have multiple IPs listening on the Hercules, each serving a different set of backend web servers? (Config example needed please; a rough sketch of what I'm picturing follows this list.)

    • LBserver1 resolves for Serverweb1, Serverweb2

    • LBserver2 resolves for Serverweb11, Serverweb12

    • LBserver2:443 resolves for Serverweb11, Serverweb12
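
Something along these lines is what I'm picturing (rough sketch only, with hypothetical IPs 10.1.1.50 and 10.1.1.51 standing in for LBserver1 and LBserver2, and assuming pen's [host:]port listen syntax):

/bin/pen -X -p /var/run/pen.web1.pid 10.1.1.50:80 Serverweb1:80 Serverweb2:80
/bin/pen -X -p /var/run/pen.web2.pid 10.1.1.51:80 Serverweb11:80 Serverweb12:80
/bin/pen -X -p /var/run/pen.web2s.pid 10.1.1.51:443 Serverweb11:443 Serverweb12:443

(The 443 line would just forward TCP to the backends unless pen can terminate SSL itself, which is part of what I'm asking.)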

Thanks

legacyb4
Contributor

Thanks to the various posts on this thread, I've managed to get a pair of Hercules appliances up and running in the following configuration:

  • two ESXi hosts, each with a Hercules appliance and an Ubuntu guest running a Squid proxy and outbound Postfix

  • each of the Hercules guests is configured to load balance between the local Ubuntu instance and the Ubuntu instance on the other ESXi server

By themselves, the Hercules appliances are functioning as expected; the odd behavior comes when I enable vrrpd to add redundancy between the load balancers themselves. My understanding is that once you've assigned a virtual IP to VRRP, the pair of VRRP servers is supposed to establish a master/backup relationship. However, once I start the service, the Hercules appliances hang for about 15-30 seconds and then return to service, but /var/log/messages now shows that both servers claim to be the master router.

Any suggestions on why this might be?

My (almost) working config:

#!/bin/sh
# Start/stop script for vrrpd and two pen instances (SMTP and Squid).

LOGFILE1=/var/log/pen.smtp.log
LOGFILE2=/var/log/pen.squid.log
PIDFILE1=/var/run/pen.smtp.pid
PIDFILE2=/var/run/pen.squid.pid
CONTROLPORT1=8888
CONTROLPORT2=8889
CHROOTDIR=/chroot/pen   # defined but not used below
VRRP_IP=10.1.1.100      # virtual IP shared via VRRP
VSERVER_ID=1
LBSERVER1=25            # pen listen port for SMTP
LBSERVER2=3128          # pen listen port for Squid
SERVER1=10.1.1.72
SERVER2=10.1.1.92

case "$1" in
start)
        echo -n "Starting VRRP Cluster Service: "
        /sbin/vrrpd -i eth0 -v $VSERVER_ID $VRRP_IP
        echo "OK"
        if [ -x /bin/pen ]; then
                echo -n "Starting pen.smtp: "
                /bin/pen -C $CONTROLPORT1 -X -l $LOGFILE1 -p $PIDFILE1 $LBSERVER1 $SERVER1 $SERVER2
                echo "OK"
        fi
        if [ -x /bin/pen ]; then
                echo -n "Starting pen.squid: "
                /bin/pen -C $CONTROLPORT2 -X -l $LOGFILE2 -p $PIDFILE2 $LBSERVER2 $SERVER1 $SERVER2
                echo "OK"
        fi
        ;;
stop)
        kill `cat $PIDFILE1`
        kill `cat $PIDFILE2`
        killall vrrpd
        ;;
*)
        echo "usage: $0 { start | stop }" >&2
        exit 1
        ;;
esac

montyshaw
Contributor

Jeez people, go read up on Pen:

The default number of concurrent users is 2048, but you can change it to whatever you want.

try this: pen --help
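
If I remember the option right, that 2048 default is pen's -c setting (maximum number of clients to track), so bumping it would look something like this (the numbers and server names are just examples):

/bin/pen -c 4096 80 server1:80 server2:80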

I don't think anyone is still supporting this. Too bad as this is a great appliance.

]Monty[

lookandfeel
VMware Employee

Hi,

I am searching for a VMware ESX4-ready Hercules appliance.

Can anybody help me?

Kind regards,

Andre

Rocl_LI
Contributor

Very nice and simple VM. Thanks to prabhakar (if you are still reading this).

I downloaded the ESX3 version and used VMware Converter to bring it into an ESXi 4 host.

I adapted the startup script and boot mod from Hoppa66, and had two Hercules VMs with VRRP up and running in two hours.

hicksj
Virtuoso

@legacy, re: vrrpd

Curious, did you ever figure out a solution to the VRRP issue?  In my testing, as long as BOTH are set to the same priority, you won't have a split-brain problem when you REBOOT either VM.  However, in a scenario where network connectivity between hosts is interrupted, the vrrp service on the standby goes active (as expected), but vrrp remains split-brain after network services are restored.

What I've found is that VRRP resets eth0's MAC address (according to ifconfig) when becoming the active system.  I was expecting that it would retain its guest MAC and additionally answer for requests to the virtual MAC (vrrpd), not reset its interface.  I have a feeling that this is not playing nicely with the underlying vSwitch.  Will test some more theories and possibly open a case with VMware.

@others, Re: support for vm

Definitely no longer supported... no idea where the creator disappeared to.  However, this is a simple system to create on your own.  I have documented the process using Ubuntu 8.04 JeOS.  As part of troubleshooting the above, I'm currently updating to an Ubuntu 8.10 LTS minimal virtual build.  Once I address the above problem, I'd be happy to share it with the community.  It's not nearly as small as the dropbox implementation the original author provided, but I'm finding that having tools like tcpdump built into the appliance is quite handy.

hicksj
Virtuoso

Good news.  I've found that using keepalived instead of vrrpd seems to provide a much more robust solution.

By setting both keepalived instances to state BACKUP, with one at priority 100, the other at 90, and the "nopreempt" option (a rough config sketch follows the two points below):

- Split-brain recovers by forcing an election, which the balancer VM with priority 100 wins.  (Tested by blocking multicast with iptables on one load balancer to simulate a switch failure, isolating the current master; both load balancers then become master.  After flushing the iptables rules, they immediately see each other and an election occurs.  This is where vrrpd failed: for some reason the vrrpd instances would not see each other's multicasts, so no election would ever happen.)

- If the current master (priority 100) fails, the other (priority 90) becomes master.  Once the failed load balancer comes back online, it remains in backup state.  This is important (to me) as I don't want clients failing over twice should the master fail.  The application behind the load balancer is stateful, and the clients switching twice causes two outages.  One is bad, two is worse ;)
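
A minimal keepalived.conf along those lines would look something like this; the interface, router ID and virtual IP are just placeholder values borrowed from legacyb4's script earlier in the thread, so adjust for your own setup (and drop the priority to 90 on the second balancer):

vrrp_instance HERCULES_VIP {
    state BACKUP            # both nodes start as BACKUP
    interface eth0
    virtual_router_id 1
    priority 100            # use 90 on the other load balancer
    nopreempt               # a recovered node stays in backup, no fail-back
    advert_int 1
    virtual_ipaddress {
        10.1.1.100
    }
}

For the multicast-blocking test mentioned above, something as simple as "iptables -A INPUT -d 224.0.0.18 -j DROP" on the current master (VRRP advertisements go to 224.0.0.18), followed later by "iptables -F", is enough to simulate the isolation and recovery.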
