

Stop these 6 mistakes in Cloud Capacity Management

11th June 2020 by Dr. Manzoor Mohammed

Estimated read time: 3 minutes

In our previous Insight we described five warning signs that your approach to cloud is reactive.

What can you do once you have identified the signs? Stop making these 6 mistakes and start moving towards a proactive cloud model:

  1. Providing capacity for phoney demand
  2. Holding capacity for longer than necessary
  3. Using capacity to provide unnecessary service resilience or quality
  4. Adding excess cloud capacity to mask technical constraints
  5. Building inefficient software which consumes large amounts of capacity
  6. Using simplistic sizing models for future budget spend


  1. Providing capacity for phoney demand

    A significant share of demand is not genuine, yet engineers end up provisioning cloud capacity to serve it. Instead of sizing for it, identify this demand and remove or reduce it, as the sketch below illustrates.
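
As a rough illustration (the log fields, paths and user-agent checks below are entirely hypothetical), a first pass at quantifying non-genuine demand can be as simple as classifying request logs before any of that traffic is turned into capacity:

```python
from collections import Counter

# Hypothetical request log entries; in practice these would come from your
# load balancer or API gateway access logs.
requests = [
    {"path": "/healthz", "user_agent": "kube-probe/1.18", "retry": False},
    {"path": "/api/orders", "user_agent": "Mozilla/5.0", "retry": False},
    {"path": "/api/orders", "user_agent": "Mozilla/5.0", "retry": True},
    {"path": "/api/orders", "user_agent": "python-requests/2.23", "retry": False},
]

def classify(req):
    """Label a request as genuine demand or one of several non-genuine categories."""
    if req["path"] == "/healthz":
        return "health_check"
    if "bot" in req["user_agent"].lower() or "python-requests" in req["user_agent"]:
        return "automated_client"
    if req["retry"]:
        return "retry"
    return "genuine"

counts = Counter(classify(r) for r in requests)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category}: {n} requests ({n / total:.0%} of traffic)")
```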

  2. Holding capacity for longer than necessary

    It’s normal human nature to hoard, but the cloud is not forgiving to hoarders. Bloat in memory, compute or storage means you are paying for capacity you no longer need; a periodic audit, sketched below, is a simple way to surface it.
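
One concrete example, assuming an AWS estate with the boto3 SDK and credentials already configured: a periodic audit of storage that is still being billed but is no longer attached to anything.

```python
import boto3

# Assumes AWS credentials and a default region are already configured.
ec2 = boto3.client("ec2")

# EBS volumes with status "available" are attached to nothing,
# yet they are billed every month. (Large estates would paginate here.)
response = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)

total_gib = 0
for volume in response["Volumes"]:
    total_gib += volume["Size"]
    print(f"Unattached: {volume['VolumeId']} "
          f"({volume['Size']} GiB, created {volume['CreateTime']:%Y-%m-%d})")

print(f"Total unattached storage: {total_gib} GiB")
```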

  3. Using capacity to provide unnecessary service resilience or quality

    Engineers by their nature want to build the best possible system, but engineering resilience has a cost in the cloud. Recently I saw a big data system built with a resilience factor of 3: it was engineered to provide a gold level of service when bronze would have been enough. The worked example below shows how quickly that choice multiplies the bill.
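
To make the cost of that choice visible, here is a deliberately simple worked example; the node count and price are invented, but the arithmetic is the point: every extra unit of resilience multiplies the bill.

```python
# Illustrative figures only: 10 nodes at $500 per node per month.
nodes = 10
cost_per_node_per_month = 500  # USD

for tier, resilience_factor in [("bronze", 1), ("silver", 2), ("gold", 3)]:
    monthly_cost = nodes * cost_per_node_per_month * resilience_factor
    print(f"{tier}: resilience factor {resilience_factor} -> ${monthly_cost:,}/month")
```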

  4. Adding excess cloud capacity to mask technical constraints

    It’s common during a production incident to throw capacity at the problem until it goes away. That is acceptable as a temporary measure, but it becomes a problem when it turns into the permanent fix: engineers never learn what their users or their code were doing to trigger the incident in the first place.

  5. Building inefficient software which consumes large amounts of capacity

    Marc Andreessen, the US entrepreneur and investor, said on a recent podcast: “software programmers have had an easy ride for decades thanks to Moore’s law but must now raise their game. Software today is massively inefficient.” The first step to addressing inefficiency is to measure it. The second is to make measurement part of the development process; the sketch below shows one minimal way to start.
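
How you measure will depend on your stack, but even the Python standard library is enough for a first pass. In the sketch below the function under test is just a stand-in; the idea is to record wall-clock time and peak memory for a code path so the numbers can be tracked, and gated, in a CI pipeline.

```python
import time
import tracemalloc

def process_orders(n):
    """Stand-in for the code path whose efficiency you want to track."""
    return sum(i * i for i in range(n))

def measure(func, *args):
    """Return (wall-clock seconds, peak memory in MiB) for a single call."""
    tracemalloc.start()
    start = time.perf_counter()
    func(*args)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak / (1024 * 1024)

seconds, peak_mib = measure(process_orders, 1_000_000)
print(f"process_orders: {seconds:.3f}s, peak {peak_mib:.1f} MiB")

# In a CI pipeline this becomes a simple regression gate, for example:
assert seconds < 1.0, "CPU efficiency regression"
```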

  6. Using simplistic sizing models for future budget spend

    These models usually assume that capacity simply tracks business growth. For many reasons that isn’t true, some of which were mentioned above: systems become more efficient as demand increases, and in complex environments different systems grow at different rates depending on their demand drivers. A driver-based forecast, sketched below, is far more realistic.
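
For contrast, here is a minimal sketch of a driver-based model; every growth rate, cost and efficiency figure below is invented for illustration. Each component is forecast from its own demand driver, with an allowance for efficiency improvements, instead of scaling everything by a single business-growth percentage.

```python
# Illustrative only: each component has its own demand driver and growth rate,
# and unit cost falls as the code is made more efficient.
components = {
    # name:            (current monthly cost in USD, annual driver growth)
    "web tier":        (20_000, 0.10),  # driven by active users
    "order pipeline":  (35_000, 0.25),  # driven by transactions
    "analytics":       (15_000, 0.60),  # driven by data volume
}
efficiency_gain = 0.08  # assumed annual unit-cost reduction from optimisation

# Naive model: scale everything by a single "10% business growth" figure.
naive_total = sum(cost for cost, _ in components.values()) * 1.10

# Driver-based model: grow each component on its own driver, net of efficiency.
modelled_total = sum(
    cost * (1 + growth) * (1 - efficiency_gain)
    for cost, growth in components.values()
)

print(f"Naive model:        ${naive_total:,.0f}/month next year")
print(f"Driver-based model: ${modelled_total:,.0f}/month next year")
```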

Time invested in these quick wins pays off in the long term. A scalable cost model means lower cloud costs on an ongoing basis and less time wasted on unproductive, reactive activity.

