

Are Virtualisation & Consolidation Effective Solutions?

The steady growth of distributed computing since 1980 has dramatically widened the ownership and use of information systems compared to the mainframe and minicomputer era. This diffusion of computing power has forced many companies to build significant data centres to house the growing number of non-mainframe servers within their IT estates. The trend toward decentralised computing has created new issues in accommodating large numbers of Unix or Windows servers, including:

  • the increased resource requirements for managing and maintaining the estate
  • the increased rack, floor-space and cabling requirements
  • the wide distribution of locally attached disks
  • the subsequent power delivery and cooling requirements, a growing environmental concern

A recent trend in data centres is the move towards Blade servers: multiple small rack-mounted servers housed within a common cabinet. Blades make it possible to provide large numbers of discrete physical servers in a small, more easily-managed footprint. The downside to this approach is the large amount of power required per cabinet, and the subsequent cooling required to dissipate the heat generated, as each server still has its own CPU, memory and so on. Blade computing is now pushing the boundaries of power delivery and the ability to cool a fully-populated cabinet, as the rough calculation below illustrates.
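To put indicative numbers on this, here is a minimal sketch of the arithmetic; the blade count and per-blade wattage are assumptions for illustration, not vendor figures.

```python
# Illustrative estimate of power and cooling for a fully-populated blade
# cabinet. The blade count and per-blade draw are assumed figures.

BLADES_PER_CABINET = 64     # assumed fully-populated cabinet
WATTS_PER_BLADE = 350       # assumed draw per blade (CPU, memory, I/O)
BTU_PER_WATT_HOUR = 3.412   # 1 W of IT load produces ~3.412 BTU/h of heat

power_kw = BLADES_PER_CABINET * WATTS_PER_BLADE / 1000
cooling_btu_h = BLADES_PER_CABINET * WATTS_PER_BLADE * BTU_PER_WATT_HOUR

print(f"Cabinet power draw: {power_kw:.1f} kW")           # -> 22.4 kW
print(f"Cooling required:   {cooling_btu_h:,.0f} BTU/h")  # -> 76,429 BTU/h
```

Over 20 kW in a single cabinet is far beyond what traditional data centre rows were designed to deliver and cool, which is the constraint described above.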


Virtualisation and Consolidation on Distributed Platforms

There are two common methods to mitigate these scalability issues without migrating to a mainframe platform. The first is server consolidation, whereby multiple applications are run on the same server under one common operating system; this is usually Unix-based due to some configuration limitations with Windows Server. The second is the use of virtualisation products, such as Microsoft Virtual Server or VMWare Server, which allow multiple guest operating systems to run under a single host operating system on a server, each thereby appearing as a separate, discrete server. These two methods are represented in Figure 1.

While virtualisation has been around on the mainframe platform for many decades, it has only relatively recently been used on distributed Operating Systems (OS) such as Unix variants and Microsoft Windows Server. On Unix platforms server consolidation has been popular due to the ability of Unix to run many applications seamlessly. However, consolidation using Microsoft Windows Server and its predecessors has traditionally been more difficult, as different applications are often incompatible due to differing shared libraries. Running more than one instance of an application on a single operating system image, even a Microsoft application such as Exchange, is often impossible.


Server Consolidation

Users of Unix variant operating systems have been able to run multiple server applications side-by-side for years, but Microsoft Windows Server has proven more difficult unless applications are specifically designed to coexist. For example, while it was common on Solaris to run multiple instances of Sybase SQL Server, this was impossible with the corresponding Microsoft SQL Server (derived from Sybase) on its respective NT platform, which required a separate server for each SQL Server instance. However, in recent years, with the increase of Microsoft Server products in the data centre and the increasing speed and scalability of Intel servers, it has become more necessary to conduct programmes of server consolidation, migrating multiple server-based applications onto single high-powered servers. The two major drawbacks of this approach have been the need to model and monitor workloads (see later in this article) and the ability of applications to coexist.

Traditionally the typical problem with application coexistence has been the use of shared libraries, such as Dynamic Link Libraries (DLLs). These are libraries of common function calls that many computer programs reuse. However, as these are frequently updated over time, they can often have backward compatibility issues. On Unix variant OSes it has always been relatively easy to work around this issue, as each program can have its own versions of the shared libraries installed in a different directory and linked to at run-time. On Microsoft-based server platforms this has always been more difficult due to the use of the registry as the centralised 'database' recording the libraries needed for all installed applications. Because this is centralised, installing two programs called 'SQLServer' would cause the second program's settings to overwrite the first program's settings in the registry; that is, if the common installer didn't assume the first program was being wrongly upgraded to the same version and prevent the installation altogether.
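As a hedged illustration of this registry clash, the following sketch checks whether a program called 'SQLServer' is already registered before an installer writes its own settings; the vendor key path and program name are hypothetical.

```python
# Hedged illustration of the registry clash described above. The vendor key
# path and the program name 'SQLServer' are hypothetical.

import winreg  # Windows-only module in the Python standard library

KEY_PATH = r"SOFTWARE\ExampleVendor\SQLServer"  # hypothetical key

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH):
        print("'SQLServer' is already registered; a second install "
              "would overwrite the first program's settings.")
except FileNotFoundError:
    print("No existing 'SQLServer' registration; settings can be written safely.")
```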

In summary, server consolidation can be very useful for optimising server assets, but this method does need considerable management and testing for safe use; for example, every time an application is upgraded it must be coexistence-tested against all other applications (if there are n applications and m program changes per year, this results in m(n-1) coexistence tests, as the sketch below shows). However, the advantage of this method is that there is little CPU overhead in running multiple applications on the same server, so it can be very efficient.
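The testing burden grows quickly with the size of the consolidated estate; this short sketch simply evaluates the m(n-1) formula above, with the example figures invented for illustration.

```python
# The coexistence-testing burden described above: with n applications on one
# server and m program changes per year, each change must be re-tested
# against the other n - 1 applications, giving m(n - 1) tests per year.

def coexistence_tests(n_applications: int, m_changes_per_year: int) -> int:
    """Annual coexistence tests for a consolidated server."""
    return m_changes_per_year * (n_applications - 1)

# e.g. 10 consolidated applications and 12 upgrades per year across the estate
print(coexistence_tests(10, 12))  # -> 108 tests per year
```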


Virtualisation

Mainframes have for decades been able to run several Logical PARtitions (LPARs) or 'images' on the same server hardware. This technique divides up the memory space and processors (or processor time-slices) into what effectively look like different servers, all running their own copy of the operating system, or even different operating system versions. This has enabled more efficient use of the high-speed, scalable hardware used for mainframes. An additional performance benefit is that when two LPARs communicate they do so inside the server (often in memory), rather than externally via a network, making inter-LPAR communication orders of magnitude faster than inter-server communication.

Until the current decade it was difficult to undertake virtualisation on distributed platforms due to hardware performance limitations. However, as the Unix and Windows server hardware platforms have improved dramatically in speed and scalability, it has become possible, and desirable, to use this extra capacity to emulate multiple hardware servers. Doing so has required the use of virtualisation or comparable hardware partitioning techniques on multiprocessor servers such as Sun Fire or Unisys ES7000 servers.

Virtualisation requires a special program to be loaded, either directly on the hardware itself, such as VMWare ESX Server, or on top of an installed operating system. Disk images containing 'guest' operating systems are then built, loaded from pre-built images or migrated from other virtualised servers. As well as bringing performance and capacity advantages, the use of a common virtualisation layer decouples the guest operating systems from the hardware, breaking the reliance on specific hardware drivers by replacing them with common virtual drivers. This enables any image to be moved to a different virtual server, or pre-built images containing popular software to be downloaded and installed in minutes (e.g. VMWare 'Appliances' include BEA WebLogic, IBM DB2 and Red Hat Enterprise Linux).
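As a sketch of these mechanics, the example below uses the open-source libvirt Python bindings to define a guest from an image description and boot it; libvirt is an assumption for illustration (the products named in this article have their own tooling), and the guest name, memory size and disk path are hypothetical.

```python
# Sketch: defining and booting a guest image under a virtualisation layer,
# using libvirt as an illustrative stand-in. Names and paths are hypothetical.

import libvirt

GUEST_XML = """
<domain type='kvm'>
  <name>guest01</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/guest01.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open('qemu:///system')  # connect to the local hypervisor
dom = conn.defineXML(GUEST_XML)        # register the guest from its description
dom.create()                           # boot it; the virtual (virtio) drivers
                                       # mean the image is not tied to hardware
print(dom.name(), "active:", bool(dom.isActive()))
conn.close()
```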

Using modern virtualisation technologies, customers have been able to migrate the workloads from many discrete servers onto a single virtualised server; quoted figures suggest the workload of between 10 and 18 separate servers can migrate comfortably onto one virtualised server. However, it must be remembered that the virtualisation layer and the multiple guest operating systems all use processor and memory resources (see Figure 1 below); this overhead is often quoted as between 2% and 20%. Additionally, at the time of writing, few non-mainframe virtualisation solutions inherently contain the ability to manage workloads independently, so servers could easily be overloaded (e.g. CPU utilisation > 70%), causing response-time performance problems. However, there are third-party solutions available to address this issue (3).
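A back-of-envelope sizing sketch shows how these quoted figures fit together; every input below is an assumption for illustration, not measured data.

```python
# Back-of-envelope consolidation sizing consistent with the figures quoted
# above. All inputs are assumptions, not measurements.

SOURCE_UTILISATION = 0.15        # assumed average CPU busy on each old server
HOST_SPEEDUP = 4.0               # assumed: host ~4x faster than a source server
VIRTUALISATION_OVERHEAD = 0.10   # mid-range of the 2%-20% quoted above
TARGET_UTILISATION = 0.70        # keep the host below 70% busy (see above)

# Host capacity left for guests, in 'source server' units, after the
# virtualisation layer takes its share and headroom is reserved
usable = HOST_SPEEDUP * (1 - VIRTUALISATION_OVERHEAD) * TARGET_UTILISATION

guests = int(usable / SOURCE_UTILISATION)
print(f"Source workloads per host: {guests}")  # -> 16, within the 10-18 quoted
```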

Choosing the Right Technique

While both of these methods are able to improve the overall cost efficiency of assets and optimise capacity, they do have drawbacks; primarily, they both require application workloads to coexist without significant impact on the end-user (1). Additionally, although server consolidation is more efficient (2), it does require that the applications can coexist on the same operating system without interoperability issues, as discussed previously. Figure 2 demonstrates the possible choice of solutions depending on whether the applications and workloads are coexistent. As with all IT decisions, the analysis of the current situation and future requirements should guide the choice; it should not be driven by a pre-existing technological preference or political viewpoint.
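The following minimal sketch is a plain-English reading of the decision logic that Figure 2 illustrates; the rules are reconstructed from this article, not a formal methodology.

```python
# Reconstruction of the Figure 2 decision logic from this article's prose.

def recommend(apps_coexist: bool, workloads_coexist: bool) -> str:
    if not workloads_coexist:
        # Coincident peaks would hurt response times (see footnote 1)
        return "Separate servers (or specifically oversized capacity)"
    if apps_coexist:
        # One OS image, many applications: the lowest-overhead option
        return "Server consolidation"
    # Workloads fit together but the applications clash on one OS image
    return "Virtualisation"

print(recommend(apps_coexist=False, workloads_coexist=True))  # -> Virtualisation
```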


Coexistent Workloads

For both virtualisation and consolidation the workloads should be measured and characterised to identify whether they can coexist (see our blog on 'Conducting Workload Characterisation'). Without some form of monitoring, modelling and analysis it may be impossible to safely situate business-critical workloads on a virtualised server; although it may be quick and easy to migrate server images and their workloads, such a haphazard approach risks poor service and should be avoided. It must be remembered that a service usually considered low-usage and low-priority, such as DNS, becomes business critical if its performance is impaired. Although it is tempting to locate DNS or other name services on a virtualised server, there are many issues, including performance and business continuity, that suggest this practice should be avoided.
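A minimal sketch of the characterisation check follows: sum the measured utilisation profiles of the candidate workloads and confirm the combined peak stays under the target threshold. The hourly profiles below are invented for illustration.

```python
# Check whether two measured workloads can safely share a server by testing
# their combined utilisation peak against a threshold. Profiles are invented.

TARGET = 0.70  # e.g. keep combined CPU utilisation below 70%

batch_job  = [0.05, 0.05, 0.60, 0.65, 0.10, 0.05, 0.05, 0.05]
online_app = [0.10, 0.15, 0.20, 0.25, 0.45, 0.50, 0.40, 0.20]

combined = [a + b for a, b in zip(batch_job, online_app)]
peak = max(combined)

if peak <= TARGET:
    print(f"Workloads can coexist: combined peak {peak:.0%}")
else:
    print(f"Do not co-locate: combined peak {peak:.0%} exceeds {TARGET:.0%}")
```

With these invented figures the combined peak is 90%, so the two workloads should not be co-located even though each looks harmless in isolation.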


Coexistent Applications

As mentioned previously, if server consolidation is being investigated it must first be ascertained whether the applications under consideration can coexist on the same server. Application coexistence affects only server consolidation; in a virtualised environment each guest operating system acts as a completely separate server, so these issues are not relevant. The only way to be sure that applications can coexist is to test them in a representative environment.


Summary

As an important part of best-practice capacity management, and to ease environmental considerations such as the carbon footprint of a business, Capacitas recommends a strategy of consolidation and virtualisation wherever possible. This can dramatically reduce the physical space and hardware maintenance required in a data centre. Although the reduction may be up to 18:1 for servers, it should be remembered that servers are not the only equipment in a data centre. While LAN equipment may reduce by a similar ratio (as there are fewer servers to connect), the external storage is unlikely to reduce as dramatically (4), although storage has undergone its own virtualisation revolution with Storage Area Networks (SANs) in recent years.

This potential reduction in servers, and the corresponding reduction in their power-hungry CPUs and dynamic memory, could significantly reduce the power and space requirements of a business at a time when both real estate and electricity are at a premium. However, careful analysis and modelling of workloads must be conducted and, if server consolidation is being considered, coexistence testing of applications must be undertaken. Adopting virtualisation and consolidation as an IT strategy would help most modern IT-centric businesses reduce typical data centre capacity constraints (e.g. power, space, cooling) and improve a business's environmental impact.


Footnotes

(1) Unless server capacity is specifically oversized to avoid this problem, workload peaks must not be coincident or they will have an adverse impact on application response times.
(2) This is due to only one operating system image running; virtualisation has a small but finite overhead for the virtualisation software and each guest operating system, reportedly between 2% and 20%.
(3) For example, Aurema for VMWare.
(4) This depends on configuration, of course - many distributed servers have plenty of spare capacity on the operating system partition, although they may be running out of space on their data partitions.