With Brexit just around the corner, all manner of businesses, from retailers to banks, are facing a tightening of the purse strings. The marketplace is more competitive than ever, and the pressure to optimise costs without compromising performance is on – particularly in IT departments.
As part of the drive to deliver ever-leaner, more cost-effective services, many organisations are switching to the cloud. On the face of it, this looks like the path to cast-iron cost savings. However, as a recent RightScale report revealed, as much as 35% of all cloud spend globally may be wasted.
What’s more, many businesses are waking up to the fact that while the cloud is substantially less expensive than legacy infrastructure, costs can quickly spiral if left unchecked. As a result, cloud cost optimisation tends to get added to the ever-growing list of CIO responsibilities. Under pressure to deliver savings quickly, however, CIOs make some common mistakes in their optimisation efforts.
Here’s how to arm yourself against them and optimise your cloud costs with confidence.
Stopping at Rightsizing
The first place many CIOs begin when looking to cut costs is in rightsizing their cloud architecture. This is a completely reasonable approach; of course you should begin by reducing any excess capacity you might be using. The issues arise when you assume these are all the costs that can be cut and stop there.
However, as our work with clients has revealed, rightsizing only represents 20% of cost-saving opportunities, with a further 80% tied up in optimising cloud architecture and software.
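Rightsizing itself is usually straightforward to automate. As a minimal sketch (the instance names, utilisation figures, and the 40% threshold below are illustrative assumptions, not a recommendation from any specific tool), it amounts to flagging resources whose peak utilisation never approaches their capacity:

```python
# A minimal sketch of rightsizing logic: flag resources whose peak
# utilisation stays below a threshold, suggesting a smaller size.

PEAK_UTILISATION_THRESHOLD = 0.40  # flag anything peaking under 40%

def rightsizing_candidates(metrics):
    """metrics maps instance name -> list of CPU utilisation samples (0.0-1.0)."""
    candidates = []
    for name, samples in metrics.items():
        peak = max(samples)
        if peak < PEAK_UTILISATION_THRESHOLD:
            candidates.append((name, peak))
    # Most under-used first
    return sorted(candidates, key=lambda c: c[1])

# Illustrative hourly CPU utilisation samples
usage = {
    "web-frontend": [0.55, 0.72, 0.61],   # busy: leave alone
    "batch-worker": [0.08, 0.12, 0.10],   # peaks at 12%: downsize
    "reporting-db": [0.25, 0.31, 0.28],   # peaks at 31%: downsize
}

for name, peak in rightsizing_candidates(usage):
    print(f"{name}: peak CPU {peak:.0%} -> consider a smaller instance")
```

The point of the sketch is that this logic only ever trims obvious excess – it says nothing about whether the architecture behind those instances is efficient in the first place, which is where the remaining 80% sits.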
There are three reasons why CIOs often miss this opportunity:
- There’s poor awareness that looking deeper can yield such rich rewards
- The prospect of optimising architecture and software that’s in use and vital to day-to-day operations understandably makes IT leaders a little nervous
- Even those who are aware simply don’t know how to approach it
But it doesn’t have to be this way; going further than rightsizing needn’t be as scary or challenging as it first appears. To truly move beyond palliative fixes like switching off unused instances or trimming excess capacity, you need two things: a good grasp of what efficiency looks like for your cloud architecture (along with a strategy for getting there) and accurate forecasting of future use.
In some cases you may already have the data; all you need is the right approach. By using that data for benchmarking, you should, over time, come to understand what good efficiency looks like for your business. From this starting point you’ll be able to identify not only where your architecture is falling short but also what you’re likely to use in future.
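What a benchmark looks like depends on your business, but the mechanics are simple. A hedged sketch, using entirely made-up service names and figures, and taking "efficiency" to mean useful work delivered per unit of spend:

```python
# A sketch of benchmarking efficiency from data you already hold.
# Services whose efficiency falls below the fleet's median are the
# first candidates for deeper architectural investigation.
from statistics import median

services = {
    # service: (monthly requests served, monthly cloud spend in GBP)
    "search-api":   (40_000_000, 8_000),
    "image-resize": (5_000_000, 4_000),
    "checkout":     (12_000_000, 3_000),
}

efficiency = {name: reqs / cost for name, (reqs, cost) in services.items()}
benchmark = median(efficiency.values())

for name, eff in sorted(efficiency.items(), key=lambda e: e[1]):
    verdict = "below benchmark" if eff < benchmark else "at or above benchmark"
    print(f"{name}: {eff:,.0f} requests per pound ({verdict})")
```

The median is just one possible baseline – you might equally benchmark against your best-performing service or an industry figure – but even this crude version turns a pile of billing data into a ranked list of places to look.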
A DIY cost optimisation strategy isn’t for everyone; the thought of something going wrong leaves many CIOs sweaty-palmed. If you’re one of these IT leaders, that doesn’t mean you can’t tackle those harder-to-reach costs. A good capacity management specialist can help you develop a plan and show you how to implement it going forward, removing the need to experiment yourself.
Putting Too Much Faith in ‘Silver Bullet’ Solutions
Exciting though it is, the hope that new technology will answer all your budgeting woes is usually a forlorn one. Whether it’s auto scaling or capacity monitoring software like AWS CloudWatch, expecting these tools to deliver lasting cost savings on their own is a common but ultimately misguided notion.
While there’s plenty of great technology out there for tracking, monitoring, and optimising your costs – and it all works up to a point – a simple truth remains: these tools are only as good as the user’s understanding of them. We regularly come across businesses using cost- and capacity-monitoring programs with the best of intentions, only to discount half of the tool’s recommendations when they aren’t sure how to implement them.
There’s simply no shortcut to long-lasting and extensive cost optimisation. Optimisation software is great for getting you started, but without strategies for forecasting and efficiency to complement it, you’re unlikely to ever move beyond firefighting and rightsizing obvious problems.
A great example of this is our work with Ancestry, the leading brand in family history and consumer genomics. After migrating to AWS, Ancestry found that infrastructure costs were higher, and rising faster, than forecast. The team took steps to address this, using optimisation tools such as Cloudability to identify (and eliminate) unused capacity and, where needed, purchasing AWS Reserved Instances.
These actions delivered early savings, but cloud costs remained too high and were only likely to increase with further migration. So Ancestry called us in to offer a more radical solution.
Working in tandem with Ancestry’s in-house team, we quickly identified a range of rightsizing and efficiency opportunities beyond those surfaced by Cloudability. In under six months, and after analysing less than half of Ancestry’s services, we identified opportunities to deliver 40% of the total cost-saving target – all using a combination of data analytics and efficiency benchmarks.
Being Reactive Instead of Proactive in Optimisation
Unlike other key concerns such as security and performance – which are rightly monitored continuously – cost optimisation tends to be reactive. For many businesses, cloud costs take a back seat until they begin to spiral and become impossible to ignore. What usually follows is a desperate scramble to cut a certain percentage before some arbitrary deadline; when the dust settles, normal service resumes and the process repeats.
If you want to truly take control, cloud cost optimisation should be an ongoing process rather than something you do periodically. In much the same way as you would implement a security plan over the course of years, if you hope to deliver long-term, sustained cost savings, your optimisation strategy must be suitably forward-looking.
This means developing a method of identifying not just the capacity you actually need now, but what you’re likely to be using in one, two, or even five years. Again, the key to this is making use of the data you likely already have but perhaps aren’t using to its potential.
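Even a basic projection is better than none. As a minimal forecasting sketch – a straight-line trend fitted to invented monthly usage figures, which a real forecast would refine with seasonality and planned launches – the idea looks like this:

```python
# Fit a least-squares straight line to historical monthly capacity
# usage and project it forward. Figures below are illustrative only.

def linear_forecast(history, months_ahead):
    """Project usage months_ahead beyond the last point in history."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history)) \
            / sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + months_ahead)

# Twelve months of vCPU-hours consumed (made-up numbers)
usage = [1000, 1040, 1100, 1130, 1180, 1250, 1280, 1330, 1400, 1440, 1500, 1560]

for horizon in (12, 24, 60):  # one, two, and five years out
    print(f"{horizon} months ahead: ~{linear_forecast(usage, horizon):,.0f} vCPU-hours")
```

Feeding a projection like this into purchasing decisions – reservations, committed-use discounts, architectural changes – is what turns cost optimisation from a periodic scramble into an ongoing discipline.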
While there’s certainly nothing wrong with starting your cloud cost optimisation off with modest measures like rightsizing and the use of capacity monitoring tools, the real savings are to be found elsewhere. If you want lasting results and cloud architecture running at its most efficient, then the answer lies in developing accurate forecasting and a strategy for tackling inefficiency.