Thursday, July 24, 2008

Performance Testing - An Overview

Performance Testing, Load Testing, Stress Testing, Volume Testing etc. are terms which are normally used interchangeably. Here is a humble attempt to explain what each of these exactly is and how to approach them! (Not sure how far I will succeed in this attempt...)

Performance Testing : Performance testing determines the speed or effectiveness of a computer, network, software program or application. The goal of performance testing is to identify the bottlenecks in your application's performance and tune/optimize them. Performance testing refers to the entire process of testing the performance of an application, which includes load testing, endurance testing, stress testing, etc.

Load Testing : A load test is usually conducted to understand how the application behaves under a specific load. Say, in the case of a web application, this load can be the number of concurrent users accessing the application.

Stress Testing : The intention of a stress test is to break the application and determine its robustness under extreme load. The main focus of stress testing is to analyze the failure and recovery of the application when the load is more than expected, say double the expected load.

Performance Testing can be done at the different levels listed below

  • At the application level: Performance Testing Engineers can use profilers to spot inefficiencies in the performance of code.
  • At the database level: Performance Testing Engineers can use database-specific profilers, JDBC request monitoring or query optimizers to identify performance leakage at the database level.
  • At the server level: Performance issues at this level can be identified by monitoring hardware resources such as CPU, memory, swap and disk I/O.
  • At the network level: This will mostly not be in the scope of the test team. Network engineers can use packet sniffers and network protocol analyzers such as Ethereal to identify performance issues at this level.
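As a tiny illustration of the first level, here is what application-level profiling looks like with Python's built-in cProfile module (the slow_lookup function is a made-up example standing in for real application code, not something from any particular application):

```python
import cProfile
import io
import pstats

# A deliberately inefficient function, standing in for real application code.
def slow_lookup(items, targets):
    # O(n*m) membership test on a list; using a set would make it O(n+m).
    return [t for t in targets if t in items]

profiler = cProfile.Profile()
profiler.enable()
slow_lookup(list(range(2000)), list(range(0, 4000, 2)))
profiler.disable()

# Print the hottest functions by cumulative time, as a profiler report.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The report lists each function with its call count and time spent, which is exactly the kind of data a profiler gives an engineer to spot the expensive code paths.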

Pre-requisites for Performance Testing : A stable build of the application is the first prerequisite for performance testing. It is advisable to have a separate test environment for performance testing, which must resemble the production/live environment as closely as possible.

Defining the Performance Goal and Objective :

A completely defined set of expectations is essential for meaningful performance testing. If you don't know where you want to be in terms of the performance of your application, it is hard to know in which direction to take the application with performance testing. So the first thing is to define the expectations, goals and objectives of performance testing. For example, the below could be some of the items that need to be well defined for a web application at this stage.

  • The maximum number of concurrent users or HTTP connections the system should support for accessing the pages of the application.
  • The maximum number of concurrent users performing the critical transactions at a time.
  • The acceptable (maximum) response time for loading normal pages in the application (in seconds).
  • The acceptable (maximum) response time for complex processes and report generation in the application (in seconds).
  • The expected time for processing a request with different volumes of data (say a search over 100/200/500/1000 employees).
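Once such expectations are written down, it helps to keep them in a form the test can check against automatically. A hypothetical sketch in Python (the goal names and numbers below are invented for illustration, not recommendations):

```python
# Hypothetical performance goals for a web application (invented numbers).
goals = {
    "normal_page_response_secs": 3.0,
    "report_page_response_secs": 10.0,
}

# Measured values from a (hypothetical) test run.
measured = {
    "normal_page_response_secs": 2.4,
    "report_page_response_secs": 11.2,
}

# Flag every goal that the measured results violate.
violations = [name for name, value in measured.items() if value > goals[name]]
print(violations)
```

Here only the report pages exceed their goal, so the list contains that single entry; this is the "where you want to be" that the rest of the testing effort is measured against.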

Once you know where you want to be, you can start on your way there…!

Performance Tool Evaluation:

There are tools capable of simulating the HTTP/HTTPS requests generated by hundreds or even thousands of simultaneous users, and thus allowing you to test the performance of an application under such situations. Some of the tools available are LoadRunner, SilkPerformer, WebLOAD, JMeter from Jakarta, OpenSTA from CYRANO, QEngine from AdventNet, WAPT from SoftLogica, etc. There are many attributes which have to be considered during performance testing tool evaluation, and some of them are...

  • Supported protocols (say HTTP/HTTPS etc.) and supported platforms
  • Maximum virtual users allowed
  • Record and playback feature
  • Supported scripting languages
  • Server performance/database performance monitoring
  • IP spoofing and proxy support of the tool
  • Support for distributed load testing
  • Performance results/reports provided by the tool
  • Support for real-world performance testing and configurable user think time

Performance Testing Approach:

A simple and generalized approach for doing performance testing with any tool can be detailed as below.

Identifying Key Scenarios : At this step, we identify all the application scenarios and transactions that are going to be performance/load/stress tested.

Identifying Work Load : Here, we identify the work load that we want to apply to the scenarios identified in the above step. This step also includes identifying how the load profile should be created for the identified scenarios.

Preparing the Scripts : Using the tool, we capture each scenario identified in the above step and generate the test scripts. The scripts recorded by the tool will require modifications according to the needs of the test. At this stage, we also add the required assertions, result-capturing listeners, controllers etc. to the recorded scripts.
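The assertions added to a recorded script typically check the response status and body, similar to JMeter's Response Assertion. A minimal hand-rolled sketch of the idea (the function and parameter names are invented for illustration):

```python
def assert_response(status, body, expected_status=200, must_contain=None):
    """Return a list of assertion failures for one captured response."""
    failures = []
    if status != expected_status:
        failures.append(f"expected status {expected_status}, got {status}")
    if must_contain is not None and must_contain not in body:
        failures.append(f"response body missing text: {must_contain!r}")
    return failures

# A passing and a failing check against hypothetical captured responses.
ok = assert_response(200, "<h1>Welcome</h1>", must_contain="Welcome")
bad = assert_response(500, "Internal Server Error", must_contain="Welcome")
print(ok, bad)
```

In a real tool you would configure this declaratively per request; the point is that a script without assertions only measures timing, while assertions also catch functional failures under load.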

Configuring the Test : Using the performance tool, we configure how the requests will be run against the server: the number of concurrent users, ramp-up period, loop count etc. In some cases it is good to run different profiles at a time, and also to add user think time, in order to make the test more realistic.
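The same configuration ideas (concurrent users, ramp-up, loop count, think time) can be sketched in plain Python with a thread pool; the fake_request function below is a stand-in for a real HTTP call, and all the numbers are invented for illustration:

```python
import time
from concurrent.futures import ThreadPoolExecutor

USERS = 5           # number of concurrent virtual users
RAMP_UP_SECS = 0.5  # spread user start times over this period
LOOPS = 3           # iterations per user (loop count)
THINK_TIME = 0.01   # pause between requests, simulating a real user

def fake_request():
    """Stand-in for an HTTP request; returns the elapsed time."""
    start = time.perf_counter()
    time.sleep(0.005)  # pretend the server took about 5 ms
    return time.perf_counter() - start

def virtual_user(user_index):
    # Stagger user start times to get a gradual ramp-up.
    time.sleep(RAMP_UP_SECS * user_index / USERS)
    timings = []
    for _ in range(LOOPS):
        timings.append(fake_request())
        time.sleep(THINK_TIME)
    return timings

with ThreadPoolExecutor(max_workers=USERS) as pool:
    all_timings = [t for user in pool.map(virtual_user, range(USERS)) for t in user]

print(f"{len(all_timings)} requests completed")
```

A real tool does the same scheduling at far larger scale and records every timing for the reporting phase, but the knobs it exposes map directly onto the constants above.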

Performance Test Execution : Once we have created all the test scripts and configured all the settings, we perform the testing in this phase and capture the results.

Reporting and Analysis : At the end of test execution, we gather all the test reports provided by the tool, including the total number of requests, requests per second and failures. There is also additional detail about each link, such as the average request time for each link. The basic result data that we look for in any performance test are

  • Throughput versus user load.
  • Response time versus user load.

Throughput: Throughput is the number of requests that can be served by our application per unit time. It can vary depending upon the load (number of users) and the type of user activity applied to the server. It can be obtained from the requests-per-second figure.

Response Time (Latency): Response time is the amount of time taken to respond to a request. The latency measured at the client includes the request queue time, the time taken by the server to complete execution of the request, and the network latency.
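Both figures can be derived directly from the raw per-request timings captured during the run. A sketch with invented sample data:

```python
# Invented sample data: (start_time_secs, duration_secs) for each request.
samples = [
    (0.0, 0.20), (0.1, 0.25), (0.3, 0.30),
    (0.5, 0.22), (0.8, 0.40), (1.2, 0.35),
]

# Test duration: from the first request start to the last request completion.
test_duration = max(start + dur for start, dur in samples) - samples[0][0]

throughput = len(samples) / test_duration                 # requests per second
avg_latency = sum(d for _, d in samples) / len(samples)   # mean response time
worst_latency = max(d for _, d in samples)                # slowest request

print(f"throughput: {throughput:.2f} req/s")
print(f"average response time: {avg_latency:.3f} s, worst: {worst_latency:.3f} s")
```

Plotting these two numbers against increasing user load is what produces the throughput-versus-load and response-time-versus-load curves mentioned above.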

Other than the above, there is much other result data you get from the tool. Once we capture the data and analyze it with respect to our application's performance objectives, the next level is performance tuning.

Performance Tuning :

When the results of the test indicate that the performance of the system does not meet the expected goals, it is time for performance tuning. Performance tuning should start with the application and the database. You want to make sure your code runs as efficiently as possible and your database is optimized for the given OS/hardware configuration.

After tuning the application and the database, if the system still doesn't meet its expected performance goals, a wide range of tuning procedures is available at all the levels discussed above. Below are some examples of things you can do to enhance the performance of a web application when the bottleneck is beyond the application code:

  • Scale the Web server farm horizontally via load balancing
  • Scale the database servers horizontally and split them into read/write servers and read-only servers, then load balance the read-only servers
  • Scale the Web and database servers vertically, by adding more hardware resources (CPU, RAM, disks)
  • Increase the available network bandwidth

I believe I have tried to put together all the points, at least at a high level. Thanks to Google for its excellent search engine; without it, it would have been highly difficult for me to collect/learn these details!

:)