
12 Key Ways to Prepare for Performance Testing

Performance testing requires careful planning and preparation to ensure smooth running and minimise delays.

Some factors are beyond the performance tester's ability to influence, such as software delivery, environment build and deployment, software stability, and the availability of test harnesses and stubs. If the risk posed by these factors is high, contingency should be planned.

However, there are a number of decisions and performance preparation tasks that need to be resolved well in advance of the test execution phase. Some of these tasks are complex, involve external dependencies, and require budget provision and perhaps procurement. It is not uncommon for them to be overlooked or minimised by project managers.

12 Key Ways to Prepare for Performance Testing:

1) Performance testing strategy

The strategy provides a framework for conducting performance testing. It should be defined early in the development lifecycle, once the requirements and architectural outline of the project are known. The strategy should cover the following elements,

  • Performance testing objectives
  • Scope (in-scope, out-of-scope, and deferred items)
  • Testing methodology and process to be adopted (E.g. types of tests, frequency, regression baseline)
  • Analysis methodology to be applied
  • Tools to be utilised and procurement requirements
  • Measurement and instrumentation strategy
  • Integration with project processes (E.g. configuration management, defect management, etc.)

The performance test strategy is a high-level document and does not replace the more detailed performance test plan. It is worth considering whether the performance test strategy should be combined with the overall test strategy, owing to the synergies in operation, tooling, and other aspects (data, build availability, environments, etc.).


2) Performance risk assessment

A performance risk assessment is conducted to establish the dominant performance risks of the system being changed. Risk is defined at various levels, including infrastructure, system, service, component, and function. Each risk is rated in terms of its likelihood of occurrence and the potential impact of the change, and relevant mitigation activities are determined.

By applying this process it is possible to hone the performance testing scope to a manageable level, ensuring that testing costs are controlled and the performance strategy is targeted in the right way, at the correct time. Just as importantly, the assessment provides a basis for gaining consensus on scope.

The performance risk assessment provides input into the strategy and plan.
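
To make the rating tangible, below is a minimal sketch of how such an assessment might be recorded and scored; the item names, five-point scales, and scope threshold are purely illustrative assumptions.

```python
# Hypothetical risk-rating sketch: score = likelihood x impact on a 1-5 scale.
RISKS = [
    # (item, level, likelihood 1-5, impact 1-5)
    ("Checkout service", "service", 4, 5),
    ("Search database", "component", 3, 4),
    ("Static content CDN", "infrastructure", 1, 2),
]

def rate(likelihood: int, impact: int) -> int:
    """Simple multiplicative rating; real assessments may use richer scales."""
    return likelihood * impact

# Items scoring above the (arbitrary) threshold are candidates for test scope.
THRESHOLD = 12
in_scope = [(name, rate(l, i)) for name, _, l, i in RISKS if rate(l, i) >= THRESHOLD]
print(in_scope)  # [('Checkout service', 20), ('Search database', 12)]
```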

3) Performance Objectives and Goals

Performance objectives and goals provide realistic business and technical targets for gauging the outcome of performance test cases (i.e. pass/fail criteria). They should be linked back to the non-functional requirements and Service Level Agreements (SLAs).

Performance objectives should be set early on in the project lifecycle prior to, and ideally as input into, the design stage. They are typically expressed in terms of:

  1. Business or technical throughput (Transactions per second/hour)
  2. End-to-end or technical response time (seconds)
  3. Session concurrency
  4. Availability
  5. Resource consumption (E.g. CPU utilisation %, I/O, memory)
  6. Load balancing
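
As an illustration of how objectives can be turned into pass/fail criteria, the sketch below compares a set of invented objective values against equally invented measured results; none of the figures come from a real project.

```python
# Hypothetical performance objectives and a simple pass/fail evaluation.
objectives = {
    "throughput_tps": 200,           # minimum transactions per second
    "p95_response_s": 2.0,           # maximum 95th percentile response time
    "peak_cpu_utilisation_pct": 75,  # maximum CPU utilisation at peak
}

measured = {
    "throughput_tps": 215,
    "p95_response_s": 2.4,
    "peak_cpu_utilisation_pct": 68,
}

def evaluate(objectives: dict, measured: dict) -> dict:
    """Return a pass/fail verdict per objective (throughput is higher-is-better)."""
    return {
        "throughput_tps": measured["throughput_tps"] >= objectives["throughput_tps"],
        "p95_response_s": measured["p95_response_s"] <= objectives["p95_response_s"],
        "peak_cpu_utilisation_pct":
            measured["peak_cpu_utilisation_pct"] <= objectives["peak_cpu_utilisation_pct"],
    }

print(evaluate(objectives, measured))
# {'throughput_tps': True, 'p95_response_s': False, 'peak_cpu_utilisation_pct': True}
```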

4) Workload characterisation

Performance testing requires that a realistic business workload is applied to the System Under Test (SUT). If the test workload has no basis in reality, the test results will be meaningless. Workload characterisation defines the load and session concurrency profiles of the new production system. The output from the process comprises the following:

  • Normal and Peak Transaction load
  • Transaction mix profiles
  • Peak user concurrency

A workload model is built by leveraging the following information:

  • Non-functional requirements for performance, capacity, and availability
  • Performance objectives and goals
  • Service Level Agreements
  • System and Service usage measures of the pre-existing systems and services
  • Validated assumptions about aspects of the workload (E.g. Peak to average volume ratios)

In conducting workload characterisation it is useful to consider how existing and legacy systems and services are used. Measurement and analysis of legacy systems can provide invaluable data for building or validating the workload model. If the business model of usage is changing, this data is less helpful; in that situation, assumptions regarding usage may have to be made and tested at a later point.

Workload characterisation may take several days, sometimes weeks, and should be planned at an early point.
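
A minimal worked example of the calculation is sketched below; the daily volumes, transaction mix, peak-to-average ratio, and session duration are all assumptions chosen purely for illustration, and Little's Law is used as one common approximation for concurrency.

```python
# Hypothetical workload model: daily volumes, a transaction mix, and an assumed
# peak-to-average ratio are used to derive per-transaction peak hourly rates.
daily_transactions = 120_000   # assumed total business transactions per day
busy_hours = 10                # assumed hours over which load is spread
peak_to_average = 2.5          # assumed peak-hour to average-hour ratio

transaction_mix = {            # assumed share of each journey
    "search": 0.55,
    "add_to_basket": 0.30,
    "checkout": 0.15,
}

average_per_hour = daily_transactions / busy_hours
peak_per_hour = average_per_hour * peak_to_average

for name, share in transaction_mix.items():
    print(f"{name}: peak {peak_per_hour * share:,.0f} transactions/hour")

# One common approximation for concurrency (Little's Law):
# concurrent sessions = arrival rate x average session duration.
sessions_per_second = peak_per_hour / 3600
avg_session_duration_s = 300   # assumed 5-minute visit
print(f"peak concurrent sessions: {sessions_per_second * avg_session_duration_s:,.0f}")
```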

5) Test scenario design

Test scenarios are designed and linked with the workload model. The relevant performance objectives and goals are stated for each test scenario. This ensures that pass/fail criteria are understood. Scenarios are classified as follows:

  • Performance Baseline & Regression tests. Performance is measured and compared across future and past change sets (E.g. CPU service time, page size, response time, throughput, etc.).
  • Load tests. These tests measure the scalability characteristics of systems and services as transactional load is increased, and identify resource bottlenecks and transactional wait times. The tests can also be used for validating the predictions of queuing models.
  • Sustained load “soak” tests. These tests demonstrate how performance varies over time when a sustained load is applied.
  • Stress tests. These tests expose systems and services to an abnormally high level of load, and identify resource bottlenecks.
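
One simple way to record this linkage is a small scenario catalogue; the sketch below uses invented load profiles and goals purely to illustrate the idea.

```python
# Hypothetical scenario catalogue linking each test type to a load profile
# and the objectives used as pass/fail criteria.
scenarios = {
    "baseline": {"virtual_users": 50,   "duration_min": 30,  "goal_p95_s": 1.5},
    "load":     {"virtual_users": 500,  "duration_min": 60,  "goal_p95_s": 2.0},
    "soak":     {"virtual_users": 400,  "duration_min": 480, "goal_p95_s": 2.0},
    "stress":   {"virtual_users": 1200, "duration_min": 45,  "goal_p95_s": None},
}

for name, profile in scenarios.items():
    goal = profile["goal_p95_s"]
    print(f"{name:<9} {profile['virtual_users']:>5} users for "
          f"{profile['duration_min']} min, p95 goal: {goal if goal else 'n/a'}")
```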

6) Load generation capability

Performance testing requires a load generation capability in order to emulate a large workload. Whilst it is possible to orchestrate a large group of users to do the same thing, this approach has severe limitations. Ordinarily a load generation tool is used to capture and replay load. It is possible that your company has already made an investment in tooling. If not then other options will need to be considered. Decisions will need to be taken about your requirements. For example,

  • Virtual user numbers
  • Application protocols and their distribution across the virtual user population
  • Automation requirements
  • Scripting, execution, and analysis requirements
  • Available budget
  • Test execution window available

It is vital to examine the requirements at an early point as evaluation & procurement processes, if required, can be time-consuming.

Free load generation tools should not be ruled out. For example, Apache JMeter is a free tool with a credible track record. Capacitas have successfully deployed JMeter, and post regular blogs that describe how to get best value from JMeter testing.
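
For illustration only, the sketch below shows a minimal, tool-agnostic load driver: a handful of "virtual users" implemented as threads repeatedly request a placeholder URL and record response times. A real test would normally rely on a dedicated tool such as JMeter rather than hand-rolled code.

```python
# Minimal load generation sketch: concurrent "virtual users" request a URL
# and record response times. The target URL, user count, and iteration count
# are placeholders.
import threading, time, urllib.request

TARGET = "http://example.com/"   # placeholder system under test
VIRTUAL_USERS = 5
ITERATIONS = 10
results = []
lock = threading.Lock()

def virtual_user():
    for _ in range(ITERATIONS):
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET) as resp:
            resp.read()
        elapsed = time.perf_counter() - start
        with lock:
            results.append(elapsed)

threads = [threading.Thread(target=virtual_user) for _ in range(VIRTUAL_USERS)]
for t in threads: t.start()
for t in threads: t.join()

results.sort()
print(f"requests: {len(results)}, "
      f"median: {results[len(results)//2]:.3f}s, max: {results[-1]:.3f}s")
```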

7) Scripting proof-of-concept

Each application has unique interactions, and only once the application has been scripted can judgements be made on the level of complexity and the amount of additional script engineering time required.

For some projects it is prudent to plan a proof-of-concept as soon as early code deployments are available. The objective is not to create the final test scripts and test load, but rather to validate any planning assumptions already made and take action if further activities are required.

8) Script build and verification

The amount of scripting work is usually contingent on the number of scripts and the scripting complexity (journeys, protocols). It is also possible that external functions may need to be coded where the native scripting language does not provide them. Timely preparation is therefore important.

It may be possible to start scripting work at an early point if the application is functionally stable, the user interface is unlikely to change significantly, and an environment is available. Functional environments can be used if the scripts are likely to port successfully. A judgement call is required on the timing. The scripting proof-of-concept may give some guidance.

9) Utilities and facilities

It is probable that additional facilities will be needed to support performance testing. Some examples are listed below,

  • Data generation tools
  • Database and file backup and restore utilities
  • Test stubs will be used where third-party systems are otherwise unavailable; these may need to be built from scratch or potentially purchased. It is important to plan to host the stub on separate infrastructure located close to the SUT: the stub itself must not use any resource on the SUT, otherwise it may affect the outcome of the test (a minimal stub sketch follows this list).
  • Test harnesses are sometimes used to simplify performance testing. These work cooperatively with the load generator to impart a load on the SUT, acting as an intermediary layer that simplifies the scripting and execution process by calling application code directly, often presenting a simple interface to the generator. Test harnesses may have to be made available, and potentially built from scratch or at least adapted. Like stubs, they should not be sited on the SUT.
  • Utilities to provide test management such as results gatherers, summarisers, and automated analysis
  • Utilities to start/stop applications, middleware components, and databases
  • Deployment tools
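
As a minimal illustration of the stub idea referenced above, the sketch below stands in for an unavailable third-party HTTP service, returning a canned response after a configurable delay; the port, payload, and delay are assumptions.

```python
# Minimal test stub sketch: an HTTP endpoint standing in for an unavailable
# third-party system. Host this on separate infrastructure so the stub does
# not consume resources on the SUT.
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSE = b'{"status": "OK", "reference": "STUB-0001"}'
RESPONSE_DELAY_S = 0.2   # assumed latency of the real third-party service

class StubHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        self.rfile.read(int(self.headers.get("Content-Length", 0)))  # drain body
        time.sleep(RESPONSE_DELAY_S)                                 # emulate latency
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(CANNED_RESPONSE)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8099), StubHandler).serve_forever()
```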

Requirements for utilities and facilities should be developed and progressed early in the project lifecycle, preferably while the application code is being built.

10) Performance monitoring and instrumentation

Performance testing requires that the manifestation of performance on the SUT is quantitatively described in order to understand the relationship between the load drivers (E.g. user sessions, transactions, batch tasks) and the symptoms of performance. This is performance measurement.

Load testing tools are typically able to quantify and store test results that characterise the load drivers (transactions, users) in terms of their interaction with the SUT. Typically, response time and throughput (volume per unit time) measures are available, and a variety of statistics are presented to describe these interactions. Other useful measures are also available, including HTTP hits, concurrent processes, errors, and failures.

System, application, and database performance may be measured using a variety of tools. There is an arsenal of commercial and free options available to achieve this, and selection can be a time-consuming process, often taking several weeks. Some tools are quite demanding to install, and an allowance may be needed for this.

Bespoke application instrumentation can be built into customised application code to measure the performance of visible and internal application components. Commercial tools also provide this capability but are expensive. If the bespoke route is preferred then the development requirements need to be included in the project scope, and then designed, built and verified.
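
A minimal sketch of the bespoke route is shown below: a timing decorator that logs the elapsed time of any wrapped application function. The function name and log format are invented for illustration; real instrumentation would write to whatever log or metrics store the project has chosen.

```python
# Bespoke instrumentation sketch: record elapsed time for wrapped functions.
import functools, logging, time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def timed(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            logging.info("timing component=%s elapsed_ms=%.1f", func.__name__, elapsed_ms)
    return wrapper

@timed
def price_basket(items):   # hypothetical internal application component
    time.sleep(0.05)       # stand-in for real work
    return len(items) * 9.99

price_basket(["a", "b", "c"])
```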

11) Test data preparation

Performance testing is highly sensitive to the quantity, quality, and consistency of test data. Application data will be needed in sufficient volume, and with the correct attributes (seeding, indexing, coverage, etc.). If this data profile is invalid, the pattern of data access may be incorrect, giving meaningless and often over-optimistic results.

The process of generating test data may be time-consuming, requiring strong technical input on aspects of design, build, and testing. If projects are able to provide representative populations of data then this option is the logical choice. Sometimes data generation tools may be used.

Where data has to be created from scratch, this activity requires serious consideration and should be planned as a design process in its own right.
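
As a simple illustration of generated test data, the sketch below writes a CSV of synthetic customer records; the field names, volumes, and value ranges are assumptions and would need to reflect the real data profile.

```python
# Hypothetical data generation sketch: write synthetic customer records with
# enough variety to avoid unrealistically cache-friendly access patterns.
import csv, random, string

ROWS = 100_000   # scale to the volume required by the workload model

def random_name(length=8):
    return "".join(random.choices(string.ascii_lowercase, k=length))

with open("customers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["customer_id", "surname", "postcode_area", "account_balance"])
    for i in range(1, ROWS + 1):
        writer.writerow([
            i,
            random_name(),
            random.choice(["AB", "CF", "EH", "LS", "M", "SW"]),
            round(random.uniform(0, 5000), 2),
        ])
```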

12) Performance test environment preparation

The best advice suggests that performance test environments should be as “Like-Live” as possible, cost permitting. Where compromises are required on specifications or loading factors then these may necessitate modelling or extrapolation to ensure that a meaningful performance result is obtained. The process of modelling is not straightforward and needs to be highly abstracted to achieve the best outcome. If modelling is to be pursued then model development and validation time should be included in the project plan.
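
Purely as an illustration of why extrapolation needs care, the sketch below scales a throughput result from a cut-down environment by core count, first naively and then with an assumed scalability efficiency; all figures are invented, and real modelling must account for non-linear effects such as contention and queuing.

```python
# Highly simplified extrapolation sketch from a cut-down test environment
# to the production specification. Treat purely as an illustration.
test_env_cores = 8
prod_env_cores = 32
measured_throughput_tps = 180   # observed on the test environment

scaling_factor = prod_env_cores / test_env_cores
naive_estimate = measured_throughput_tps * scaling_factor

# Applying an assumed scalability efficiency is still crude, but less optimistic.
assumed_efficiency = 0.7
adjusted_estimate = measured_throughput_tps * (1 + (scaling_factor - 1) * assumed_efficiency)

print(f"naive linear estimate:  {naive_estimate:.0f} tps")
print(f"efficiency-adjusted:    {adjusted_estimate:.0f} tps")
```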

Where the environment does not already exist, the time to build it will be contingent on suppliers' schedules and on the internal resources available to build and configure it to meet the design. Once the environment is handed over to the performance testing team, residual configuration issues should be expected; this is a significant challenge in most performance testing projects. The best advice is to pursue this activity vigorously from an early point in order to avoid delays.
