

Is Your Cloud as Lean as You Think It Is?

15th March 2019 by Danny Quilton

For most businesses looking to make the jump to the cloud, cost-saving ranks only below scalability and flexibility in importance. However, just because cloud services cost less than your legacy infrastructure doesn’t mean you aren’t overspending on extra capacity you don’t need. In fact, it’s very likely that you are.

Many IT leaders are guilty of buying into the illusion that on-demand cloud services are naturally cost-effective. It’s not hard to see why: at first glance, switching to the cloud looks like a sure-fire way to reduce costs, particularly when you compare it to the costs associated with running physical servers.

In reality, there are plenty of hidden cost-saving opportunities to be found if you just scrape a little below the surface. With Gartner predicting that more than $1.3 trillion in IT spending will be directly or indirectly affected by the shift to the cloud by 2022, accurately mapping cloud spend has scarcely ever been more important for CIOs, yet it’s still poorly understood.

What Are You Really Spending?

There are a few key areas regularly overlooked by CIOs when it comes to cloud cost management. They range from the simple, such as overcompensating for peak usage, to more complex issues like software inefficiency.


Overprovisioning of architecture is incredibly common; some estimates – such as RightScale's ‘State of The Cloud’ survey – suggest that cloud users waste around 35% of their spend each year.

The reasons for this are relatively simple. Firstly, in the interests of transitioning as quickly and painlessly as possible, some IT leaders simply perform a like-for-like swap, trading physical server space for the equivalent in the cloud. This can mean old bad habits – storing duplicate data in multiple places, keeping redundant and unused data, and hosting rarely accessed data in block storage – follow you to the cloud.

Secondly, many businesses overcompensate for peak and future usage. Without a clear understanding of what’s being used now and what is likely to be used in future, it’s understandable that many IT leaders err on the safe side and purchase more server capacity than is actually needed. Perhaps unsurprisingly, they’re also encouraged to do so by the big cloud providers.

Thirdly, over time most organisations naturally develop workarounds and temporary fixes for parts of their IT – it's something of an occupational hazard. Switching to the cloud should be an opportunity to dispense with them, but it’s still not uncommon to see extra server capacity being used to compensate for badly performing software and bottlenecks.

Finally, oversizing is often down to CIOs not being armed with the data they need to make informed decisions. Whether due to poor demand forecasting or weak performance testing methodologies, IT leaders rarely have the tools needed to optimise spending properly.
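To make “informed decisions” concrete, here is a minimal sketch of sizing capacity from observed demand rather than instinct. All the numbers are invented, and the 95th-percentile and 70%-utilisation targets are illustrative assumptions, not recommendations:

```python
import math

def required_instances(hourly_demand, unit_capacity, target_util=0.70, pct=0.95):
    """Capacity units needed so that the chosen percentile of observed demand
    runs at roughly target_util utilisation, rather than sizing for a
    worst case that may never arrive."""
    demand = sorted(hourly_demand)
    idx = min(len(demand) - 1, int(pct * len(demand)))
    sized_for = demand[idx]
    return max(1, math.ceil(sized_for / (unit_capacity * target_util)))

# A week of hourly request rates: mostly quiet, with a short 9,000 req/s
# spike, and each instance comfortably serving 1,000 req/s.
week = [3_000] * 120 + [6_000] * 40 + [9_000] * 8
print(required_instances(week, unit_capacity=1_000))           # sized to the 95th percentile
print(required_instances(week, unit_capacity=1_000, pct=1.0))  # sized to the absolute peak
```

Sizing to a high percentile rather than the absolute peak is a judgment call, and the gap between the two numbers is exactly the headroom you are paying for ‘just in case’.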

Application Inelasticity

One of the great advantages of switching to the cloud is that, with capacity management tools like AWS Auto Scaling, it’s far easier to track and adjust capacity to meet demand.

However, these tools are only as effective as the applications they’re working with. Inefficient applications – such as databases and caches – frequently require long warm-up periods and do not scale quickly enough to use all the available capacity.

This leads to organisations spending extra on capacity headroom because they can’t be confident that applications will scale quickly enough to meet demand.
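As a rough rule of thumb – and it is only a rule of thumb, our simplification rather than a universal formula – the standing headroom a slow-scaling application forces on you is its warm-up time multiplied by how quickly demand can grow while it warms up:

```python
def headroom_fraction(warmup_minutes, peak_growth_per_minute):
    """Extra standing capacity (as a fraction of current capacity) needed to
    absorb demand growth that arrives while new instances are still warming up.

    peak_growth_per_minute: fastest observed traffic growth, expressed as a
    fraction of current capacity per minute (0.02 means 2% per minute).
    """
    return warmup_minutes * peak_growth_per_minute

# A cache that takes 10 minutes to warm up, under traffic that can grow
# 2% a minute, needs roughly 20% of its capacity sitting idle at all times;
# cut the warm-up to 2 minutes and that falls to roughly 4%.
slow = headroom_fraction(10, 0.02)
fast = headroom_fraction(2, 0.02)
print(slow, fast)
```

Even a crude model like this makes the trade explicit: money spent on idle headroom versus engineering time spent shortening warm-up.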

For instance, one of our recent clients had an embedded practice to auto-scale their systems at 50% CPU utilisation. Upon investigating, we discovered that this was costing the client $1M per year in unnecessary capacity spend for a single application.
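The figures below are invented for illustration – they are not the client’s actual numbers – but they show how directly the scale-out threshold drives steady-state fleet size, and therefore cost:

```python
import math

def annual_cost(peak_load, target_util, unit_capacity, unit_cost_per_hour):
    """Steady-state annual cost if auto-scaling keeps the fleet sized so that
    peak load lands at target_util of total capacity."""
    instances = math.ceil(peak_load / (unit_capacity * target_util))
    return instances * unit_cost_per_hour * 24 * 365

# Hypothetical figures: 40,000 req/s at peak, 1,000 req/s per instance,
# $2.40 per instance-hour.
at_50 = annual_cost(40_000, 0.50, 1_000, 2.40)  # fleet of 80 instances
at_70 = annual_cost(40_000, 0.70, 1_000, 2.40)  # fleet of 58 instances
print(f"Annual saving from scaling at 70% instead of 50%: ${at_50 - at_70:,.0f}")
```

Whether 70% (or any other figure) is safe depends on how quickly the application scales – which is exactly why inelasticity and threshold choice need to be tackled together.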

Shadow IT

Do you have complete visibility of all the cloud services your business is using? Even beyond the IT department? Even if the answer is yes, it’s worth double-checking.

While you’d think that all IT spending would flow through and be overseen by the IT department, it just isn’t the case in some organisations. Many businesses allow anyone in the C-suite or management – or indeed anyone with their own budget – to purchase extra capacity.

The result can be akin to giving the company credit card to a junior employee at the Christmas work do: no one knows what the eventual bill will look like. Worse still, these ‘shadow IT’ departments may also stop using the service after a time but neglect to turn it off, leaving it to rack up extra cloud spend without providing any tangible business benefit.

Software Inefficiency 

Lastly, perhaps the most overlooked generator of superfluous cloud spend is software inefficiency. Software efficiency – the amount of compute resource required per transaction or action – is crucial in controlling cloud costs, particularly if yours is a high-volume system.

Software inefficiency usually stems from one (or all) of three places: failing to measure efficiency at all, not having a good grasp on what ‘good efficiency’ looks like for the business in question, and neglecting to set targets.
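Measuring efficiency doesn’t have to be sophisticated to be useful. A minimal sketch, with invented figures and a regression threshold chosen purely for illustration:

```python
def cost_per_transaction(monthly_compute_cost, monthly_transactions):
    """The basic efficiency number: what one unit of business work costs."""
    return monthly_compute_cost / monthly_transactions

def within_target(current, baseline, max_regression=0.10):
    """A simple gate: flag any release whose cost per transaction has
    regressed more than 10% against the agreed baseline."""
    return current <= baseline * (1 + max_regression)

# Invented figures: $120k of compute serving 300M transactions a month.
baseline = cost_per_transaction(120_000, 300_000_000)
# A later release costs $150k for 310M transactions – has efficiency slipped?
print(within_target(cost_per_transaction(150_000, 310_000_000), baseline))
```

The specific metric matters less than tracking one at all: once cost per transaction is on a dashboard, the three failure modes above become visible.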

What Can You Do to Cut Extra Spend?

Chances are, at least one of these sources of additional cloud costs sounds familiar to you, even if your organisation is one of the few that is relatively good at tracking and optimising spending. But knowing where to start tackling the issue can be daunting, particularly if your organisation is a large one or relies upon a complex, high-volume system.

The first step is to begin with easy fixes. These may sound obvious – and you may already have these activities in hand – but it's worth just running through the list as a foundation for more complex fixes:

  • Shutting down cloud capacity that isn’t being used
  • Leveraging discount options with your provider
  • Using more cost-effective regions for server storage
  • Running dev instances only when needed
  • Using reserved instances
  • Implementing budget controls
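For the first item on that list, even a crude pass over your utilisation metrics goes a long way. A minimal sketch – the data shape, 5% CPU threshold, and 14-day window are our assumptions, and in practice the numbers would come from your monitoring tooling:

```python
def find_idle(instances, cpu_threshold=5.0, lookback_days=14):
    """Flag instances whose maximum CPU over the lookback window never
    exceeded the threshold – candidates for shutdown or downsizing.

    instances: {instance_id: [daily max CPU %, most recent last]}
    """
    idle = []
    for instance_id, daily_max_cpu in instances.items():
        window = daily_max_cpu[-lookback_days:]
        if window and max(window) < cpu_threshold:
            idle.append(instance_id)
    return idle

fleet = {
    "i-web-01": [55, 60, 48, 71],  # genuinely busy
    "i-old-db": [2, 1, 3, 2],      # forgotten since the last migration
}
print(find_idle(fleet))
```

A flagged instance still needs a human decision – it may be a cold standby rather than waste – but at least the question gets asked.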

Monitoring tools like Amazon CloudWatch are a great place to begin rightsizing, since they provide the utilisation data that rightsizing decisions rest on – although it should be stressed that such tools are only as effective as your understanding of the recommendations built on them. We often encounter organisations that use them – and competitors’ products like them – but implement only a fraction of the recommendations.

Equally, a large portion of the easy wins in cloud cost optimisation can be tackled using a step-by-step methodology. There are a few different versions available, and which one is best suited to your business will depend on your provider – but if you’re using AWS, our free guide is a great primer on how to get started.

However, while this is a great ‘jumping-off’ point for cutting cloud costs, the real, sustained savings are to be found in addressing software inefficiency and accurately forecasting future usage.

Both present problems for the average organisation. It can be difficult to address inefficiency if you don’t know what ‘good efficiency’ should look like for your business. Likewise, assessing what you’re likely to spend in future – accounting for peak trading periods, growth, and scalability – isn’t easy.

Ideally, your organisation would tackle this by creating its own library of benchmarks for efficiency and year-on-year spending. Naturally, though, this takes time; building an accurate set of benchmarks can require years of work, and most IT leaders don’t have years. One way to ensure you’re still able to deliver savings in the interim is to talk to a specialist.

Capacity management specialists can provide a shortcut to the data you need to map future cloud spending accurately and to assess how efficient your software is. Most will maintain a library of benchmarking data indexed by organisation, software type, and function, which they use to make recommendations.


Cloud cost optimisation is far more complex than rightsizing. To be sure your cloud is really as lean as it could be, you need comprehensive insight into how your existing architecture should be performing, how it is performing, and how it’s likely to perform in the future. Anything else will only provide limited returns.