
Article ID: 114327, created on Jul 10, 2012, last review on Nov 11, 2014

  • Applies to:
  • Operations Automation

Key points about the Load Balancer (LB) server:

  • LB provides High Availability, Dynamic Load Redistribution and Seamless Capacity Management

  • Linux Virtual Server (LVS) can be used as an inexpensive alternative to a hardware solution

  • Any load balancing devices that provide direct routing and sticky sessions are supported as well

Note: all the information provided below about load balancing relates to the LVS technology.

The main components of the LVS-based Load Balancer are:

  • pulse

This is the controlling process that starts the other daemons as required. It is started on the LB server by the /etc/init.d/pulse script, normally at boot time. Through pulse, which implements a simple heartbeat, the inactive LVS router determines the health of the active router and decides whether to initiate failover.

  • lvs

The lvs daemon runs on the LB server. It reads the configuration file and calls the ipvsadm tool to build and maintain the IPVS routing table.

  • nanny

The nanny monitoring daemon runs on the LB server. Through this daemon, the LB server determines the health of each webserver in the cluster and gets its workload. A separate process runs for each real server used by each virtual server.

  • lvs.cf

This is the LVS cluster configuration file. Directly or indirectly, all daemons get their configuration information from this file.

  • piranha

The tool for monitoring, configuring, and administering an LVS cluster. Normally this is the tool you will use to maintain the /etc/sysconfig/ha/lvs.cf file, restart running daemons, and monitor an LVS cluster.

  • ipvsadm

This tool updates the IPVS routing table in the kernel. The lvs daemon sets up and administers an LVS cluster by calling the ipvsadm to add, change, or delete entries in the IPVS routing table.
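As a rough illustration of how the lvs daemon drives ipvsadm, the sketch below builds ipvsadm command lines for a firewall-mark-based virtual service. The flags match those visible in the nanny log excerpts in this article (-a add, -d delete a real server, -f firewall mark, -r real server, -g direct routing, -w weight); -e (edit) is a standard ipvsadm option for changing an existing entry. The function name and structure are illustrative, not part of the actual lvs daemon.

```python
def ipvsadm_args(action, fwmark, real_server, weight=None):
    """Build an ipvsadm command line (as an argv list) for a
    firewall-mark-based virtual service.

    action: 'add', 'edit', or 'delete' a real server entry.
    -g selects the gatewaying (direct routing) forwarding method.
    """
    flag = {"add": "-a", "edit": "-e", "delete": "-d"}[action]
    args = ["/sbin/ipvsadm", flag, "-f", str(fwmark), "-r", real_server]
    if action in ("add", "edit"):
        # adding or editing an entry also sets the server's weight
        args += ["-g", "-w", str(weight)]
    return args
```

For example, `ipvsadm_args("add", 100, "10.0.0.11", 32)` (the IP address here is hypothetical) yields the same kind of command that appears in the /var/log/messages excerpts shown later in this article.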

The Load Balancer server (LB) receives incoming HTTP/HTTPS requests from the internet and balances them across webservers in an NG cluster. LVS-based load balancing technology is described in more detail in this article: http://kb.linuxvirtualserver.org/wiki/Load_balancing.

The IP addresses of all websites hosted on webservers in an NG cluster are configured in two places:

  • As aliases of the network interface on the Load Balancer (this is done by the deploy_v4_ips/deploy_v6_ips functions in the /usr/local/pem/etc/web_cluster/lvs/propagation.py script during LB configuration):

    2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:1c:42:31:56:4d brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth1
    inet6 fe80::21c:42ff:fe31:564d/64 scope link
       valid_lft forever preferred_lft forever
  • As lo (loopback) interfaces on webservers ( is the only IP address used by websites in the NG cluster):

    # ip a ls lo
    2: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
    inet scope global lo

DNS A records of customer websites are set to public IP addresses configured on both the Load Balancer (physical network interface) and the web server (loopback interface).

When a request to a website is made, the TCP packet first arrives at the Load Balancer.

The Load Balancer examines the packet's destination IP address and port. If they match a balanced service (Apache/FTP/SSH), a real server is chosen from the cluster by a scheduling algorithm, and the connection is added to the hash table that records connections.

The Load Balancer then forwards the packet directly to the chosen server. When an incoming packet belongs to an existing connection and the chosen server can be found in the hash table, the packet is again routed directly to that server.

When the web server receives the forwarded packet, the server finds that the packet is for the address on its alias interface or for a local socket, so it processes the request and returns the result directly to the user.

After a connection terminates or times out, its record is removed from the hash table.
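The connection-tracking lifecycle described above can be modeled with a toy hash table. This is only a sketch of the concept; the names, key structure, and timeout value are illustrative and do not come from the real IPVS kernel implementation.

```python
import time

class ConnectionTable:
    """Toy model of the IPVS connection hash table.

    Keys identify a connection (e.g. client address plus destination
    address and port); values record the chosen real server and the
    time the connection was last seen.
    """

    def __init__(self, timeout=900):
        self.timeout = timeout
        self.entries = {}

    def add(self, conn, server):
        # new connection: remember which real server was chosen
        self.entries[conn] = {"server": server, "seen": time.time()}

    def lookup(self, conn):
        # existing connection: reuse the same server until it expires
        entry = self.entries.get(conn)
        if entry and time.time() - entry["seen"] < self.timeout:
            entry["seen"] = time.time()
            return entry["server"]
        self.entries.pop(conn, None)  # expired or unknown
        return None

    def remove(self, conn):
        # connection terminated: drop its record
        self.entries.pop(conn, None)
```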

The Load Balancer simply changes the destination MAC address of the data frame carrying the TCP packet to that of the chosen web server and retransmits it on the local area network. This scheme is known as Layer-4 switching.

If LVS is used as a load balancing technology, use the ipvsadm utility to see the current load distribution rules on the LB server, web servers' weights and statistics:

# ipvsadm --list
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port           Forward Weight ActiveConn InActConn
FWM  100 lblc
->                Route   32     0          0
->                Route   32     0          0

# ipvsadm -L --stats
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port               Conns   InPkts  OutPkts  InBytes OutBytes
-> RemoteAddress:Port
FWM  100                             62306   825071        0 49897843        0
->                    4403   437201        0 23318470        0
->                       1        1        0       60        0

Load Balancer configuration is updated with the help of the lvsctl Python scripts. They are installed on the POA Management Node as part of the WebCluster Service Controller package:

  • /usr/local/pem/etc/web_cluster/lvsctl.py

  • /usr/local/pem/etc/web_cluster/lvs/*

The scripts can be customized by a Provider to support a custom Load Balancer device instead of Linux Virtual Server (customization details can be found in the POA Linux Shared Hosting NG Deployment Guide).

The lvsctl script is called by POA on every modification of the cluster configuration related to the Load Balancer, namely:

  • Adding/removing a web server to/from the cluster

  • Adding/removing an IP address to/from the cluster

The script can also be executed manually on the POA Management Node to bring the load balancer configuration up to date with the current cluster state:

# /usr/local/pem/etc/web_cluster/lvsctl.py update --cluster-id=<cluster id>

Every web server in the cluster has its own weight: the greater the weight, the more requests the server receives from the Load Balancer. In the default configuration, a web server's weight is stored in the apache_h2e_nodes table in the Parallels Operations Automation database. The weight is specified when adding a web server to the cluster via the Provider Control Panel.
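The effect of weights can be sketched as a weighted random pick: a server with twice the weight receives roughly twice the requests. This is only an illustration of proportional distribution, not the actual IPVS scheduler code; the function name and sample weights are hypothetical.

```python
import random

def weighted_pick(servers, rng=random.Random(0)):
    """Pick a server with probability proportional to its weight.

    `servers` maps server name -> weight (as stored in the
    apache_h2e_nodes table in the default configuration).
    """
    total = sum(servers.values())
    point = rng.uniform(0, total)
    for name, weight in servers.items():
        point -= weight
        if point <= 0:
            return name
    return name  # fall through on floating-point edge cases
```

Picking many times from, say, `{"ws1": 32, "ws2": 16}` sends roughly two thirds of the requests to ws1.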

The LVS configuration is stored in the /etc/sysconfig/ha/lvs.cf file on the Load Balancer server, as in the example below:

network = direct
primary =
service = lvs

virtual Web-Cluster-v4 {

    active = 1
    fwmark = 100
    load_monitor = /usr/sbin/h2e_get_cluster_load.sh
    port = 80
    protocol = tcp
    reentry = 10
    timeout = 10
    scheduler = lblc
    send = "GET / HTTP/1.0\r\n\r\n"
    expect = "HTTP"

    server {
            active = 1
            address =
            weight = 32
    }

    server {
            active = 1
            address =
            weight = 32
    }
}

From the example above, we can see the following information about the Load Balancer:

  • There are two servers under load balancing - and

  • The weight of both web servers is 32

  • The scheduling algorithm is lblc (see below)

  • Direct Routing is used to forward packets to balanced servers (LVS also supports two more methods - Network Address Translation (NAT), and IP-IP Encapsulation (tunnelling))

  • Packets marked with 100 are routed (the iptables 'mangle' table is used to mark network packets with the firewall mark 100, 0x64 in hexadecimal)

  • The /usr/sbin/h2e_get_cluster_load.sh script is used to get the current load of a web server in the cluster from the files in the /var/lib/h2e folder (these files are updated by the /usr/sbin/h2e_dump_cluster_load.sh script through a cron job)

Parallels provides the custom h2e-piranha RPM package, which is installed on the Load Balancer. It reconfigures LVS to balance requests according to the current load on the web servers. The default weights are taken from the /etc/sysconfig/ha/lvs.cf file; however, the actual load on the web servers may differ, so piranha collects load data from the webservers and adjusts their weights accordingly.

The current load of the webservers is fetched by the /usr/sbin/h2e_dump_cluster_load.sh script, which is executed by crontab on the LB server every minute. The /usr/sbin/h2e_get_cluster_load.sh script, specified in the lvs.cf configuration file, checks the webserver load. Webserver load is fetched using the rrdtool command.
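A load-based weight adjustment could look roughly like the sketch below. The exact formula used by the h2e-piranha package is not documented in this article, so this is only a plausible illustration under the stated assumption: less-loaded servers get proportionally higher weights, capped at each server's default weight from lvs.cf. All names here are hypothetical.

```python
def recompute_weights(default_weights, loads):
    """Hypothetical sketch: scale each server's weight by inverse load.

    default_weights: server -> default weight from lvs.cf
    loads:           server -> current load average, as fetched by
                     h2e_get_cluster_load.sh (assumed formula, for
                     illustration only)
    """
    new = {}
    for server, default in default_weights.items():
        # avoid division by zero for idle servers
        load = max(loads.get(server, 0.0), 0.01)
        # higher load -> lower weight; never below 1, never above default
        new[server] = max(1, min(default, round(default / load)))
    return new
```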

By default, the 'Locality-Based Least-Connection' (lblc) scheduling algorithm is used for load balancing, but this can be changed in the /etc/sysconfig/ha/lvs.cf configuration file. More details about scheduling algorithms may be found in this article: http://www.linuxvirtualserver.org/docs/scheduling.html.

NG hosting implements the sticky Web Site to Web Server affinity by means of the TCPHA pluggable Load Balancing module. Sticky Web Site to Web Server affinity is a technique that enables the load balancer to remember which web server was chosen for a certain client session when processing a previous request. Subsequent requests are then directed to the same web server. This approach adds the following advantages to the system:

  • Stickiness is necessary for applications that need to preserve state across distinct connections

  • Since most requests from a client arrive at the same web server machine, a high cache-hit rate is achieved, so performance is increased

Using the LBLC scheduling algorithm, the LB assigns jobs destined for the same IP address of a website to the same web server if that server is available and not overloaded. Otherwise, jobs are assigned to servers with fewer jobs, and the assignment is kept for future requests for some period of time (a timeout). A web server is treated as overloaded if its number of active connections is larger than its weight. If the assigned server is overloaded and there is a server at half its load, the LB allocates the least-loaded server to the request.
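The core of the LBLC behavior described above can be sketched as follows. This is a simplified model for illustration (it omits the timeout and the half-load refinement); the function and variable names are illustrative, not taken from the kernel scheduler.

```python
def lblc_choose(dest_ip, assignment, active_conns, weights):
    """Minimal sketch of Locality-Based Least-Connection scheduling.

    assignment:   dest_ip -> server chosen for previous requests
    active_conns: server -> current number of active connections
    weights:      server -> configured weight
    """
    server = assignment.get(dest_ip)
    # sticky: reuse the cached server while it is not overloaded,
    # i.e. while its active connections do not exceed its weight
    if server is not None and active_conns[server] <= weights[server]:
        return server
    # unassigned or overloaded: pick the least-connected server
    server = min(weights, key=lambda s: active_conns[s])
    assignment[dest_ip] = server  # remember for future requests
    return server
```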

The 'pulse' service starts the 'lvsd' process, which configures the webservers' weights according to the configuration file; 'lvsd' in turn starts the 'nanny' processes, which monitor the load on the servers and reconfigure the LVS weights. All these processes can be seen using the ps axf command on the LB server, as in the example below:

 1462 ?        Ss     1:30 pulse
 1470 ?        Ss     0:34  \_ /usr/sbin/lvsd --nofork -c /etc/sysconfig/ha/lvs.cf
 1488 ?        Ss     3:46      \_ /usr/sbin/nanny -c -h --server-name -p 80 -r 80 -f 100 -s GET / HTTP/1.0\r\n\r\n -x HTTP -a 10 -I /sbin/ipvsadm -t 10 -w 32 -V -M g -U /usr/sbin/h2e_get_cluster_load.sh --lvs
 1489 ?        Ss     3:43      \_ /usr/sbin/nanny -c -h --server-name -p 80 -r 80 -f 100 -s GET / HTTP/1.0\r\n\r\n -x HTTP -a 10 -I /sbin/ipvsadm -t 10 -w 32 -V -M g -U /usr/sbin/h2e_get_cluster_load.sh --lvs

The 'nanny' processes on the NG LB server continuously monitor the availability of the Apache server on the webservers in the cluster. If Apache stops responding, a corresponding message appears in the /var/log/messages file on the LB:

Jul 16 04:41:04 nglb nanny[1489]: running command "/usr/sbin/h2e_get_cluster_load.sh" ""
Jul 16 04:41:14 nglb nanny[1488]: [inactive] shutting down due to connection failure
Jul 16 04:41:14 nglb nanny[1488]: running command  "/sbin/ipvsadm" "-d" "-f" "100" "-r" ""
Jul 16 04:41:54 nglb nanny[1488]: [ active ] making available
Jul 16 04:41:54 nglb nanny[1488]: running command  "/sbin/ipvsadm" "-a" "-f" "100" "-r" "" "-g" "-w" "1"

In the example above, 'nanny' detected that Apache went down on the server and excluded it from the list of balanced servers using the 'ipvsadm' command. Later, when Apache was started again on the web server, 'nanny' detected this and added the server back to the list of balanced servers.
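The decision logic behind this add/remove behavior can be sketched as a single step of a nanny-style check. The function below only decides which ipvsadm command would be issued; it does not perform the HTTP send/expect probe itself, and its name and signature are illustrative.

```python
def nanny_step(server, healthy, in_rotation, fwmark=100):
    """One iteration of a nanny-style health check (sketch).

    healthy:     result of the HTTP send/expect probe from lvs.cf
    in_rotation: whether the server is currently in the IPVS table
    Returns the ipvsadm command to run as an argv list, or None.
    """
    if in_rotation and not healthy:
        # server stopped answering: drop it from the virtual service
        return ["/sbin/ipvsadm", "-d", "-f", str(fwmark), "-r", server]
    if not in_rotation and healthy:
        # server recovered: add it back with weight 1, matching the
        # "-w" "1" seen in the log excerpt above; piranha raises the
        # weight later based on measured load
        return ["/sbin/ipvsadm", "-a", "-f", str(fwmark), "-r", server,
                "-g", "-w", "1"]
    return None  # state unchanged, nothing to do
```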

The Nanny process also periodically checks the load on servers in the cluster to adjust load balancing:

Jul 16 04:46:24 nglb nanny[1489]: running command "/usr/sbin/h2e_get_cluster_load.sh" ""
Jul 16 04:46:24 nglb nanny[1488]: running command "/usr/sbin/h2e_get_cluster_load.sh" ""

The stock 'piranha' package has been modified by Parallels to add IPv6 support. No other changes have been made.

To verify which webserver processed a request to a website, check the HTTP(S) response headers. Responses to requests balanced between webservers contain the 'X-SERVER' header with the ID of the webserver that processed the request. For example, if website.test.com is hosted on the NG cluster:

# curl -v website.test.com 2>&1 | grep X-SERVER
< X-SERVER: 75

We can see that the request was served by the web server with ID 75. This ID can be seen in the POA Provider Control Panel under the NG web cluster properties.

More details about LVS may be found at:

Current load per webserver can be observed through /usr/sbin/h2e_dump_cluster_connections.sh

See the main Parallels Knowledgebase article #114326 Linux Shared Hosting NG: General Information, Best Practices and Troubleshooting for more information about NG Hosting in Parallels Automation.

Search Words

Numerical result out of range Error adding address

High load on NFS by NG

Load Balancer

load open stream error


NG Hosting

load average

installing webcluster_db failed

