10 Mistakes Organisations Make When Capacity Planning

Capacity planning can be problematic for organisations that lack experienced capacity planners. From our experience, these are the main capacity planning mistakes organisations make.

  1. Ordering hardware without understanding the existing utilisation.
  2. Treating all CPU utilisation as the same.

    CPU utilisation has different drivers, e.g. background 'hum', iowait, OLTP and batch. I have seen people assume that because business demand is going to double, CPU utilisation will also double, and then include 'hum' and batch CPU utilisation in that calculation, even though those components don't grow in step with online transaction demand.

  3. Including iowait in CPU utilisation when it actually represents usable CPU capacity, and excluding it when it doesn't.
  4. Not understanding the workload characteristics before ordering new hardware.

    This sounds unlikely; however, it does happen. I worked on an engagement some years ago where the organisation purchased brand-new blade servers for its application, only to realise that the application was disk I/O intensive. The new blade hardware didn't provide any increase in disk I/O throughput capability.

  5. Forgetting about Moore's law when doing a hardware refresh.

    This is related to the previous mistake. I have seen organisations count the number of servers running a CPU-intensive application and then order the same number of new servers, even when each new server has a fourfold increase in CPU processing power.

  6. Looking at the busiest core rather than the average utilisation when setting a threshold for capacity utilisation.

    It's quite common for servers to have core imbalances, e.g. on Linux or SQL Server. On Linux servers this can happen because interrupts are handled on the core closest to the cache for a particular driver. As the server gets busier, the load is distributed over the remaining cores.

  7. Trending over too short a period.

    I have witnessed organisations try to build trends from only a few months' worth of data. In one case the service was highly seasonal: volumes dropped over the summer before climbing back up in the autumn, so a trend fitted to the short window pointed the wrong way.

  8. Using percentiles on summarised data.

    You can't apply a 90th-percentile calculation to service or component data that has already been averaged into hourly values. The averaging smooths away the peaks, so you'll miss the busy-hour traffic for the week.

  9. Trending resource consumption for batch workloads.

    Batch workloads generally don't show increasing CPU utilisation over time; they tend to consume CPU at much the same rate and simply take longer to complete as transaction volumes increase.

  10. Treating all disk I/O as the same.

    There are different types of disk I/O operation: random read, sequential read, random write and sequential write. Each behaves very differently, so they shouldn't be lumped together when sizing storage.
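
To make mistakes 2 and 3 concrete, here is a minimal Python sketch of how counting or not counting iowait changes the headline "CPU busy" figure. It assumes the usual /proc/stat counter fields (user, nice, system, idle, iowait, irq, softirq, steal); the sample values are invented for illustration.

```python
# How treating iowait as "busy" or "idle" changes CPU utilisation,
# using /proc/stat-style jiffy counters. Sample numbers are illustrative.

def cpu_busy_pct(sample: dict, count_iowait_as_busy: bool) -> float:
    busy = (sample["user"] + sample["nice"] + sample["system"]
            + sample["irq"] + sample["softirq"] + sample["steal"])
    idle = sample["idle"]
    if count_iowait_as_busy:
        busy += sample["iowait"]   # iowait counted as consumed capacity
    else:
        idle += sample["iowait"]   # iowait counted as usable headroom
    return 100.0 * busy / (busy + idle)

sample = {"user": 300, "nice": 0, "system": 100, "idle": 400,
          "iowait": 200, "irq": 0, "softirq": 0, "steal": 0}

print(cpu_busy_pct(sample, count_iowait_as_busy=True))   # 60.0
print(cpu_busy_pct(sample, count_iowait_as_busy=False))  # 40.0
```

The same server reads as 60% or 40% busy depending on that one decision, which is why it needs to be made deliberately rather than inherited from whatever the monitoring tool defaults to.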
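
The refresh-sizing arithmetic behind mistake 5 can be sketched as follows: size the new estate on the total capacity actually required, not on the old server count. All of the numbers (server count, capacity uplift, utilisation targets) are illustrative assumptions.

```python
import math

# Refresh sizing for a CPU-intensive estate: work from required
# capacity, not the old server count. All figures are illustrative.

old_servers = 40
old_capacity_per_server = 1.0   # normalised CPU capacity units
new_capacity_per_server = 4.0   # e.g. a fourfold uplift per server
peak_utilisation = 0.60         # observed peak across the old estate
target_utilisation = 0.70       # headroom policy for the new estate

required_capacity = old_servers * old_capacity_per_server * peak_utilisation
new_servers = math.ceil(
    required_capacity / (new_capacity_per_server * target_utilisation)
)

print(new_servers)  # 9 servers, not a like-for-like 40
```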
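
Mistake 6 is easy to demonstrate with per-core figures from an imbalanced server. The utilisation numbers below are made up, but the shape (one hot core handling interrupts, the rest lightly loaded) is typical.

```python
# On an imbalanced server the busiest core looks alarming while the
# average shows plenty of headroom. Per-core percentages are invented.

per_core_util = [92, 35, 30, 28, 31, 27, 29, 30]  # core 0 is hot

busiest = max(per_core_util)
average = sum(per_core_util) / len(per_core_util)

print(busiest)  # 92
print(average)  # 37.75
```

Thresholding on the 92% core would trigger a capacity alert even though the server as a whole has ample headroom once load spreads to the other cores.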
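
Mistake 7 can be shown with a simple least-squares slope over a seasonal series. The monthly volumes below are an invented example of steady year-on-year growth with a summer dip.

```python
# A trend fitted to a short, seasonal window can point the wrong way.
# Monthly volumes (thousands of transactions) are illustrative.

def slope(ys):
    # Least-squares slope over equally spaced points 0..n-1.
    n = len(ys)
    mean_x = (n - 1) / 2
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(ys))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# Jan..Dec: growth through the year, dipping over the summer.
monthly = [100, 102, 105, 108, 110, 104, 96, 95, 106, 112, 115, 118]

print(round(slope(monthly), 2))       # 1.05: growth over the full year
print(round(slope(monthly[4:8]), 2))  # -5.3: May-Aug alone suggests decline
```

A forecast built on the four summer months would predict shrinking demand just as the service heads into its autumn growth.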
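
Mistake 8 can be demonstrated numerically. The traffic profile below is an illustrative assumption: every hour contains ten quiet 5-minute samples (30%) and two spikes (90%), using the nearest-rank percentile method.

```python
import math

# A 90th percentile taken over hourly averages misses peaks that are
# plainly visible in the raw 5-minute samples. Data is illustrative.

def percentile(values, pct):
    # Nearest-rank percentile.
    ordered = sorted(values)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

hours = [[30] * 10 + [90] * 2 for _ in range(24)]            # one day
raw = [s for hour in hours for s in hour]                    # 288 samples
hourly_averages = [sum(hour) / len(hour) for hour in hours]  # 24 points

print(percentile(raw, 90))              # 90: the spikes are captured
print(percentile(hourly_averages, 90))  # 40: the spikes are averaged away
```

The summarised series never goes above 40, so any threshold or forecast built on it will understate the capacity needed to serve the real peaks.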

To learn more about capacity management and how it can help you avoid these capacity planning mistakes, download our capacity management primer:

Introduction to Capacity Management