This project is copyright (2010) by TD Meyer, and is available to you under the terms of the GNU Affero General Public License.
See http://www.gnu.org/licenses/agpl-3.0.html
This started out as a need for a quick and dirty "heartbeat" program. A basic use case involves creating two identical machines (say, two web servers with the same applications/content) and placing them behind a load balancer. The specific need was to manage an Apache HTTP Server acting as a proxy to an Apache Tomcat application server, but the system was designed to work with any kind of service.
A note on terminology: to avoid ambiguity, I'm trying to stay consistent with the terms described below.
In a situation where you have a reasonably effective load balancer, you can keep both web servers running, and the load balancer will manage state in such a way that associations created between a web user and a particular server are maintained between requests. The easiest way to accomplish this is through a "round-robin" mechanism, which iterates through the list of available web servers (from 1..n) and dispatches requests equally among them. More sophisticated mechanisms take additional factors into account, like server load and response time.
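For the curious, here is a minimal sketch of round-robin dispatch in Python. It's illustrative only; the server names and the dispatch function are assumptions for the sketch, not part of Le Vacataire:

    import itertools

    # Rotate through the available servers; each request goes to the next one.
    servers = ["server1.example.com", "server2.example.com"]
    rotation = itertools.cycle(servers)

    def dispatch(request):
        """Hand each incoming request to the next server in the rotation."""
        target = next(rotation)
        print("routing request to", target)
        # forward(request, target)  # actual forwarding omitted from the sketch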
A stateful load balancer will keep connections persistent between a specific client and a specific server. That's helpful, because if I'm using cookie-based server-side session management, I don't want to log in again and again for each request.
Less capable load balancers (I won't mention any names) will simply check for an active service ("Is port 80 open on this server? Good.") and direct requests in a round-robin (or other) fashion, ignoring session state.
This is a Bad Thing for session management.
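To make that naive check concrete, here is a rough sketch using only the Python standard library (the host name and timeout are assumptions):

    import socket

    def port_is_open(host, port=80, timeout=2.0):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # A less capable load balancer effectively does no more than this:
    if port_is_open("server1.example.com"):
        print("server looks 'up' -- session state not considered")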
A typical deployment looks like this:
         +---------------+
         | Load Balancer |
         +---------------+
                 |
    -------------+------------
    |                        |
+----------+          +----------+
| Server 1 |          | Server 2 |
+----------+          +----------+
In this case, the "Load Balancer" functions less as a load balancer, and more as a 'failover detector.' Both servers are powered on, but only Server 1 is running the web service.
Le Vacataire should be configured to run on both servers. One (Server 1) is identified as the "Master," and the other (Server 2) is the "Standby." The software running on Server 1 will poll the desired web services periodically, and when a failure event (see below) is discovered, it will notify its peer (Server 2) that it's having trouble. Server 2 then starts up the failover/standby services, while Server 1 shuts its own down as a precaution.
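Here is a hedged sketch of that poll-and-notify cycle in Python. The peer address, port, message format, and function names are assumptions for illustration, not Le Vacataire's actual protocol:

    import socket
    import time

    PEER = ("server2.example.com", 7777)   # assumed standby address and port
    POLL_INTERVAL = 10                     # seconds between service checks

    def service_healthy():
        """Replace with a real check (HTTP probe, process check, plugin...)."""
        try:
            with socket.create_connection(("localhost", 80), timeout=2.0):
                return True
        except OSError:
            return False

    def notify_standby():
        """Tell the peer we're in trouble so it can start failover services."""
        with socket.create_connection(PEER, timeout=5.0) as conn:
            conn.sendall(b"FAILOVER\n")

    while True:
        if not service_healthy():
            notify_standby()
            # shut_down_local_services()  # master steps aside as a precaution
            break
        time.sleep(POLL_INTERVAL)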
Communication between the servers can take place over a public network (provided the right ports are allowed through any firewalls) or through a private, nonroutable network or VPN (depending on your situation). One admin was able to deploy on Amazon AWS--using their load balancer in one location and two web servers in different locations.
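As a purely illustrative example, the per-server settings might look something like this; the key names are assumptions for the sketch, not Le Vacataire's real configuration options:

    # Deployment settings for the two-server setup above (assumed names).
    CONFIG = {
        "role": "master",          # "master" on Server 1, "standby" on Server 2
        "peer_host": "10.0.1.2",   # private/VPN address, or a public IP
        "peer_port": 7777,         # must be allowed through any firewalls
    }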
At this point, the load balancer magic takes place: it notices Server 1 is offline (e.g. not responding on port 80, since we shut down all its web services), but Lo! It detects that Server 2 is now active, and it re-routes requests to Server 2. In the next release, we'll add notification functionality so Server 1's outage can be signaled (via email or SNMP) to a human operator for intervention.
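The notification feature isn't released yet, but a bare-bones email alert could be as simple as the following sketch (the addresses and the SMTP host are assumptions):

    import smtplib
    from email.message import EmailMessage

    def alert_operator(subject, body):
        """Send a plain-text alert to a human operator via local SMTP."""
        msg = EmailMessage()
        msg["From"] = "levacataire@example.com"   # assumed sender address
        msg["To"] = "ops@example.com"             # assumed operator address
        msg["Subject"] = subject
        msg.set_content(body)
        with smtplib.SMTP("localhost") as smtp:   # assumed local mail relay
            smtp.send_message(msg)

    # alert_operator("Server 1 failed over", "Standby services started on Server 2.")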
Once Server 1 is manually recovered, Server 2 is asked to quiesce, and the master/standby relationship resumes. This happens automagically, at least in part, courtesy of the load balancer.
You can write plugin code to detect any kind of 'event' you want. I've provided a few examples in the plugins/ directory.
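For a flavor of what a plugin might look like, here is a hedged sketch; the interface (a check() function that returns True when a failure event is detected) is an assumption about the plugins/ convention, not a documented API:

    import os

    def check():
        """Fire a failure event when the disk holding the web root fills up."""
        # os.statvfs is Unix-only; the path and threshold are assumptions.
        stats = os.statvfs("/var/www")
        free_fraction = stats.f_bavail / stats.f_blocks
        return free_fraction < 0.05   # less than 5% free => trouble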