Insights into HAProxy Load-Balancing on OpenStack Private Clouds

In this two-part series, we discuss the economic and technical benefits of HAProxy, an open-source load-balancing solution. This post identifies those benefits and explains how we implement HAProxy with OpenStack; in part two, we'll cover HAProxy's time-to-market and cost benefits.

In the world of private cloud, OpenStack leads the way as the number one open-source technology, with companies from small businesses to large enterprises reaping the benefits. In the world of load-balancing, it is widely acknowledged that you can’t go wrong with HAProxy. Used by companies such as Twitter, Airbnb, Reddit and Instagram, HAProxy was an easy choice for Rackspace to make.

Key benefits of HAProxy

  • Reliability at scale – HAProxy has a proven history of scaling to handle extremely high traffic volumes, while remaining reliable through its HA configuration.
  • Stability – Rackspace has been using HAProxy for several years, and its proven stability makes it valuable for private clouds. With so many well-known companies deploying HAProxy at scale and leveraging the robust set of features it offers, it comes as no surprise that it is packaged with most Linux distributions available today.
  • Community backed, open source – an ever-growing community of contributors stands behind HAProxy, and commercial support is available from HAProxy Technologies as well.

How are we using it?

As a co-founder of OpenStack, Rackspace is committed to open-source technology. We recognize the need for more flexibility and choice when it comes to load-balancing the control plane within OpenStack. As a result, we can now deploy, manage and fully support a redundant OpenStack control plane, with HAProxy serving as the load-balancer.

Rackspace has proven the reliability and stability of HAProxy on both the Rackspace Private Cloud powered by OpenStack and Rackspace Private Cloud powered by Red Hat, meaning your experience will be seamless no matter which cloud is the best fit for you.

Tuning options for HAProxy

We have spent years configuring, deploying and operating HAProxy. The knowledge we've gained has given us a deep understanding of how best to tune HAProxy for a production-ready, enterprise-grade private cloud environment.

To get the best out of HAProxy, there are several tuning options that can be set in the haproxy.cfg file. This file holds the global settings and proxy definitions for HAProxy, so it is extremely important that we configure it correctly. Here are some notable options to consider:

High availability – By running keepalived alongside HAProxy, we can configure multiple HAProxy nodes in HA mode, which is essential for any production environment. This ensures that service can fail over to another node in the control plane when required, while Rackspace will be on hand to resolve any related issues.
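As a rough sketch of the idea, keepalived can float a virtual IP between two HAProxy nodes using VRRP. The interface name, addresses and priorities below are placeholders, not our reference architecture:

    # /etc/keepalived/keepalived.conf (illustrative values only)
    vrrp_script chk_haproxy {
        script "pidof haproxy"     # consider the node healthy while HAProxy is running
        interval 2
    }

    vrrp_instance haproxy_vip {
        state MASTER               # the peer node would be configured as BACKUP
        interface eth0
        virtual_router_id 51
        priority 101               # higher priority wins the virtual IP
        advert_int 1
        virtual_ipaddress {
            192.0.2.10             # the address clients and services actually talk to
        }
        track_script {
            chk_haproxy            # give up the virtual IP if HAProxy stops
        }
    }

If the active node fails its health check or goes down entirely, the backup node claims the virtual IP and traffic continues to flow.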

Max connections – By defining the maxconn parameter, we can set the maximum number of concurrent connections a node will accept. This can have a big impact on the throughput of a single node, so having the right expertise is important to get this setting configured appropriately for your cloud (see the haproxy.cfg example below).

Multi-process mode – By default, HAProxy uses a single worker process, which means all of your HTTP sessions are load-balanced by a single process. On a server with multiple CPUs, we can use the nbproc and cpu-map settings to spread that work across more than one CPU. This can vastly improve performance and the number of connections that can be handled concurrently.

As a rough illustration (the values are indicative and assume an HAProxy version that still supports nbproc), the relevant part of the haproxy.cfg global section might look like this:
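    global
        maxconn 6000              # cap on concurrent connections for this node
        nbproc 2                  # run two worker processes
        cpu-map 1 0               # pin process 1 to CPU 0
        cpu-map 2 1               # pin process 2 to CPU 1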

In the example above, we have set the maximum number of connections to 6,000 and defined two worker processes within our HAProxy node. The cpu-map settings let us distribute those processes evenly between the CPUs, ensuring we make full use of all our cores.

Algorithm – We have numerous choices here: roundrobin (with optional per-server weights), static-rr, leastconn, source and more. In a multi-node setup, selecting the correct distribution method for your connections is important in preventing a single node from being overloaded with requests. This allows you to distribute traffic far more efficiently, while maintaining node health over time.
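As a minimal sketch, assuming a hypothetical backend for an OpenStack API service (the backend name, addresses and weights are placeholders):

    backend keystone_api
        balance roundrobin                     # rotate requests across the servers below
        option httpchk GET /v3                 # mark a server down if its health check fails
        server ctrl1 192.0.2.11:5000 check weight 100
        server ctrl2 192.0.2.12:5000 check weight 100
        server ctrl3 192.0.2.13:5000 check weight 50   # lighter node receives a smaller share

Switching the balance directive to leastconn or source changes how new connections are distributed without touching the server definitions.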

We’ve tuned our load-balancer, so how do we monitor it?

As with the rest of your cloud environment, monitoring lets you see whether HAProxy is operating correctly.

There are a number of methods that can be used to monitor HAProxy, built into its configuration but not enabled by default. These range from the HTTP-based stats page right through to third-party tools such as HATop.
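As a rough illustration of the HTTP stats page (the bind address, URI and credentials below are placeholders), a dedicated listen section in haproxy.cfg is enough to enable it:

    listen stats
        bind 192.0.2.10:8080
        mode http
        stats enable
        stats uri /haproxy_stats          # page is served at http://192.0.2.10:8080/haproxy_stats
        stats auth admin:changeme         # protect the page with basic authentication
        stats refresh 10s                 # auto-refresh the counters every 10 seconds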

One we prefer to implement is the Unix socket interface. Through it, we can view a wide range of statistics relating to our HAProxy nodes, such as:

  • Requests – the number of queued requests (qcur), denied requests (dreq) and even connection errors (econ).
  • HTTP responses – counts of successful (hrsp_2xx) and failing (hrsp_4xx / hrsp_5xx) responses.
  • Sessions – the current number of active sessions (scur), the maximum number of sessions seen (smax) and the configured session limit (slim).
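As a minimal sketch (the socket path and permissions here are assumptions, not our reference configuration), the socket is enabled in the global section of haproxy.cfg:

    global
        stats socket /var/run/haproxy.sock mode 600 level admin   # local runtime/stats socket
        stats timeout 30s                                         # drop idle socket connections

With the socket in place, a command such as echo "show stat" | socat stdio /var/run/haproxy.sock dumps the per-frontend and per-backend counters (qcur, scur, hrsp_2xx and the rest) in CSV form, which is easy to feed into automation and monitoring tooling.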

Using the Unix socket interface enables us to securely automate, configure and monitor each HAProxy node in a given private cloud environment.

Choosing between a hardware load-balancer and HAProxy

Though Rackspace has proven the value of HAProxy, the world of load-balancing has many choices. The right choice is driven by your business requirements, your existing technology investments and even the skill sets of your IT administrators. Choosing between these options can be challenging, so rather than do a side-by-side comparison of each and every feature, here is some food for thought:

  • The financial implications vs the solution benefit – Does the environment justify a high up-front cost, or does an open-source, lower-cost model work best? Do you have a good relationship with an existing vendor that helps mitigate this?
  • Must-haves vs not-so-important – If you need specific features such as Layer 4 and/or Layer 7 load balancing or SSL termination, question which option offers them and how they are implemented. Evaluate whether you need control-plane-only load-balancing or also need data-plane load-balancing for internet-facing VMs.
  • Consider the projected scale – How quickly will you scale your cloud and how will this be done (adding more nodes, more services etc.)?

We’re here to help you navigate these choices and find the optimal fit for your requirements.

How’s the support?

Pretty great, actually, since we do it for you! As mentioned earlier, Rackspace fully supports HAProxy in the control plane. That means we will monitor, configure and troubleshoot control planes deployed with HAProxy, in accordance with our reference architecture. This can be in a Rackspace data center, a customer data center or a third-party data center.

If you are looking to deploy a private cloud and need guidance on best practices for implementing it successfully, or even specifics around load-balancing with HAProxy, talk to us – we’ll be happy to help! Rackspace is the leading OpenStack service provider, with hundreds of clouds under management.
