

Thought of the Week: Top 5 Load Balancer Issues

Load balancing is a way of distributing network traffic evenly across different resources, so that no single resource is loaded beyond its capacity, helping your application or system perform better.

 

When you see performance degradation, one of the first things you might check is CPU utilisation. If CPU utilisation is high, panic may set in, and organisations may start wondering whether they need more servers and whether they are running out of capacity.

We have often seen this scenario with our clients, but higher CPU utilisation does not always mean more servers. One thing to look at when you see high utilisation is how well balanced the load is across all the servers, and that is where the load balancer plays a key role. If the load balancer is not configured to distribute traffic evenly, it can put more load on a single server, causing performance degradation that is often mistaken for being 'out of capacity.'
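Before blaming capacity, it is worth checking how traffic is actually spread. As a rough sketch in Python (the server names and request counts below are made-up sample data, not real metrics), computing each server's share of total requests makes an imbalance obvious:

```python
# Quick balance check: compute each server's share of total requests.
# The request counts below are hypothetical sample data.
request_counts = {"server-1": 91_000, "server-2": 4_500, "server-3": 4_500}

total = sum(request_counts.values())
shares = {s: 100 * c / total for s, c in request_counts.items()}
for server, share in shares.items():
    print(f"{server}: {share:.1f}% of traffic")
```

If one server is handling 91% of requests while its peers sit near idle, the problem is distribution, not capacity.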

In my experience, these are the top 5 load balancer issues:

  1. Equals: Both sites have the same number of servers with the same specification, so the expected traffic distribution is 50-50. If one site receives more traffic than the other, the busier site suffers higher utilisation and performance degradation, while the other site's resources are under-utilised because it is receiving far less traffic than it should.

  2. Server Mismatch: The number of servers on each site differs but the server specifications are the same. For example, with a 70-30 distribution, 70% of the total servers are at Site 1 and 30% at Site 2.

    In this case the load balancer should be set up so that Site 1 receives 70% of the traffic and Site 2 receives 30%. Any deviation can overload Site 2, which has less capacity, causing performance degradation while Site 1's resources sit under-utilised.

  3. Spec Mismatch: Sometimes the load balancer is working correctly, distributing traffic in proportion to the number of servers on each site, and everything looks fine - but you still see performance issues. What could be wrong?

    This is exactly when you should check the server specifications. Some servers are designed to handle more load than others, so even though the traffic looks balanced, mismatched specifications across or within sites mean some servers can handle less than their share, leading to these kinds of issues.

  4. Network: In a perfect scenario, with the right server distribution and specification, the load balancer routes traffic correctly based on that configuration. But you still encounter performance issues.

    That is when you should look outside the servers, i.e. at the network connectivity. If the networks at the two sites are configured to support different amounts of traffic, one site can receive its normal share of traffic per the load balancer distribution and still be unable to handle it, because it exceeds that network's capacity.

  5. Load Balancer: Now, here is the culprit itself. When everything else seems perfect - the right server distribution, specification and network - what else can cause these issues? The load balancer is designed to distribute and split traffic.

    What happens if the combined traffic it receives is more than it has been designed to handle? This is when you should check the load balancer's own configuration and capacity.
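Issues 1 to 3 above all come down to matching traffic weights to real capacity. As a rough sketch in Python (the site names and weights are hypothetical, standing in for the 70-30 server-mismatch example), a weighted split can be simulated with the standard library:

```python
import random

random.seed(42)  # deterministic for illustration

# Hypothetical sites: Site 1 holds 70% of the (identical) servers,
# Site 2 holds 30%, so traffic should split 70-30.
sites = ["site-1", "site-2"]
weights = [70, 30]

counts = {s: 0 for s in sites}
for _ in range(100_000):
    counts[random.choices(sites, weights=weights)[0]] += 1

share_site1 = counts["site-1"] / 100_000  # roughly 0.70
```

For the spec-mismatch case, the same idea applies with weights derived from each server's capacity (e.g. vCPU count) rather than a simple head count.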
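For issue 5, a simple sanity check is to compare the combined incoming traffic against what the load balancer itself is rated to handle. A minimal sketch, with made-up figures:

```python
# Sanity check on the load balancer itself: is the combined incoming
# traffic within what the device is rated for? Both numbers are hypothetical.
lb_rated_rps = 50_000   # requests/sec the load balancer is designed for
incoming_rps = 62_000   # observed combined traffic

overload = max(0, incoming_rps - lb_rated_rps)
if overload:
    print(f"Load balancer over capacity by {overload} requests/sec")
```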

 

When it comes to performance issues, it is usually the servers that take all the spotlight. Most cloud providers these days offer built-in load balancing that distributes traffic evenly and automatically. Although often considered less important, load balancers play a vital role in ensuring the 7 Pillars of Software Performance are properly managed:

  1. Throughput & Response Time – Keeps throughput evenly balanced, maintaining the right response times for your system.
  2. Capacity – Prevents high CPU utilisation by distributing traffic evenly.
  3. Efficiency – Makes the system more efficient at responding to requests.
  4. Scalability – With the right balance, scalability can be achieved without adding more resources.
  5. Stability – Helps enhance system stability.
  6. Instrumentation – The right tooling can show how well a load balancer is configured to distribute traffic evenly.
  7. Resilience – With the right configuration, the system can handle peak traffic without issues.

 

And as they say, “do not put all your eggs in one basket” - so do not let your traffic be concentrated on just a few servers. It will help your system go a long way!

We have helped many of our clients identify whether the performance degradation they are facing is caused by resources not being utilised efficiently, and we help them achieve better performance with their existing resources. We can do the same for you. If you would like further information on any of the above, or to speak to one of our consultants about the 7 Pillars of Software Performance, please reach out via contact@capacitas.co.uk.

 

Speak to a Performance Engineering Expert


About the Author

Prayukti Shankar

Prayukti Shankar is a Principal Consultant specialising in Performance Engineering strategy, working with our high-profile managed service clients. Prayukti leads the projects and ensures they are delivered on time and within budget.

It is also worth having a look at some of our recent case studies, where we have saved our clients millions of pounds in cloud spend.

Cegid and Capacitas case study
