Implementing the appropriate software load testing workload

Given an appropriate (for the project) software load testing workload specification, the next steps are to:
  • Create the load testing scripts that meet the workload specification.
  • Reproduce the defined workload.


Given a specification of what workload is required to satisfy the load testing project objectives, the workload behavior needs to be implemented in a load testing tool.

How the workload requirements map onto a given tool will vary depending on the load testing tool's architecture.

Here are some general workload characteristics, accompanied by a discussion of how various load testing tools implement each one. These characteristics have been selected to give examples of the types of issues, tradeoffs, and potential solutions encountered when implementing workload requirements in a given load testing tool.

User equivalents

This characteristic represents a user performing some activity against the software under test. It is also referred to as a Virtual User.

Typically, User Equivalents are implemented as concurrently running 'threads' within the load testing program. JMeter, by way of example, has a thread count setting, and the number of threads will approximate the number of required User Equivalents.
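As a minimal sketch (not tied to any particular tool, and using a hypothetical target URL), each thread below acts as one User Equivalent issuing requests against the software under test:

```python
import threading
import urllib.request

# Hypothetical target; substitute the system under test.
TARGET_URL = "http://localhost:8080/order"
USER_EQUIVALENTS = 10      # number of concurrent "virtual users"
REQUESTS_PER_USER = 5

def user_equivalent(user_id: int) -> None:
    """One thread = one User Equivalent issuing requests in a loop."""
    for i in range(REQUESTS_PER_USER):
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
                print(f"user {user_id}: request {i} -> {resp.status}")
        except Exception as exc:
            print(f"user {user_id}: request {i} failed ({exc})")

threads = [threading.Thread(target=user_equivalent, args=(n,))
           for n in range(USER_EQUIVALENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```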

Requests per Second (rate)

This characteristic represents the rate of requests made by a given User Equivalent.

Within the threaded model (e.g., JMeter), where a thread simulates a User Equivalent, it is common to control the requests per second by placing a wait within the load testing script. The wait effectively governs the rate at which requests are made to the software under test. To push the overall request rate beyond what a single user can achieve while waiting for each response, multiple concurrent User Equivalents (threads) are executed. Consider an example workload definition of a population of 100 concurrent Users placing orders at an overall rate of 10 requests per second. To achieve the 10 requests per second rate for the population of Users, each user would need to wait approximately 10 seconds between requests. The calculation is: individual rate (0.1 requests per second) * number of concurrent users (100), giving 0.1 * 100 = 10 requests per second for the population of users.

As can be seen, achieving a higher rate requires executing more concurrent users or reducing the individual wait times within the scripts.
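A minimal pacing sketch, assuming a hypothetical send_request() callable, shows one way a virtual user can hold a fixed individual rate: sleep for whatever remains of the pacing interval after each request.

```python
import time

def paced_user(individual_rate_rps: float, duration_s: float, send_request) -> None:
    """Issue requests at roughly `individual_rate_rps` by padding each
    iteration with a wait (pacing interval minus the response time)."""
    pacing_interval = 1.0 / individual_rate_rps   # e.g. 0.1 rps -> 10 s between requests
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        start = time.monotonic()
        send_request()                            # hypothetical request call
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, pacing_interval - elapsed))

# 100 concurrent users, each paced at 0.1 requests/second, give ~10 requests/second overall.
```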

A question that often comes up regarding the relationship between concurrent users (threads) and requests per second is:

I want to simulate 1,000 users, each with an average wait of 10 seconds, to achieve an overall rate of 100 requests per second (0.1 * 1,000). Can I use 500 users with a wait of just 5 seconds to achieve the same rate (0.2 * 500 = 100 requests per second)?

The answer to these types of questions is always 'it depends'. There are specific resource usage issues that will be encountered with more users, including cached data, security checks, TCP connections, etc. If the user population is specified in the workload requirements, then that number should be honored when implementing the workload in the given load testing tool.
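The arithmetic of the question, using the hypothetical figures above, can be checked with a one-line calculation; the two configurations match on aggregate rate but differ in concurrency:

```python
def aggregate_rps(concurrent_users: int, wait_seconds: float) -> float:
    """Aggregate request rate when each user issues one request per wait interval."""
    return concurrent_users * (1.0 / wait_seconds)

print(aggregate_rps(1000, 10))  # 100.0 requests/second
print(aggregate_rps(500, 5))    # 100.0 requests/second, but with half the concurrency
```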

Web user connection strategy

Having touched on both User Equivalents and Requests per Second, it is worth noting that, for simulating web server traffic, consideration should be given to how the load testing tool handles TCP connections when retrieving URLs via HTTP requests. By way of illustration, consider the Place Order transaction (web-based) as an example of a user function we want to subject to a load test. In the workload requirements we might have the following specified (captured as a small configuration sketch after the list):
  • Function: Place Order.
  • Concurrent Users: 700
  • Total User population: 7,000
  • Average time each user is logged on: 1 hour.
  • Rate (per User): 1 every 60 seconds (plus or minus 15 seconds).
  • Duration: 6 Hours
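As a minimal sketch, using hypothetical field names and the values from the list above, the specification can be recorded alongside the aggregate rate it implies:

```python
from dataclasses import dataclass

@dataclass
class WorkloadSpec:
    function: str
    concurrent_users: int
    total_user_population: int
    avg_session_hours: float
    seconds_between_requests: float   # per-user pacing interval
    pacing_jitter_seconds: float      # plus/minus variation on the interval
    duration_hours: float

place_order = WorkloadSpec(
    function="Place Order",
    concurrent_users=700,
    total_user_population=7_000,
    avg_session_hours=1.0,
    seconds_between_requests=60.0,
    pacing_jitter_seconds=15.0,
    duration_hours=6.0,
)

# Aggregate rate implied by the spec: 700 users * 1 request / 60 s ~= 11.7 requests/second.
aggregate_rps = place_order.concurrent_users / place_order.seconds_between_requests
print(f"{aggregate_rps:.1f} requests/second")
```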
If this is a web-based function, then we will most likely perform an HTTP GET on the web page where the blank Order form is presented, followed by a POST that places the actual order. Whenever an HTTP GET is issued, the web browser will initially retrieve the main page and then retrieve all the secondary URLs (GIFs, JavaScript files, CSS resources, etc.) so that the complete page can be presented to the user. The retrieval of these so-called embedded references is handled by other concurrent TCP connections.

Each load testing tool will have its own implementation for handling embedded references. Some may reuse the original TCP connection and retrieve these resources serially rather than in parallel over concurrent TCP connections. Other load testing tools might try to replicate the web browser's connection strategy for handling these secondary resources by opening additional TCP connections for each Virtual User.

The extra TCP connections potentially used for the embedded references will consume additional resident memory and draw on the server's TCP connection pool. The use of these additional resources might or might not be material to the objectives of the given load test. In any event, the web connection strategy of the given load testing tool should be understood by the test architect so that the workload requirements can be implemented as specified. In the case where the tool does not support opening multiple TCP connections while retrieving embedded references, supplementary load could be placed on the server (in parallel) to simulate these extra TCP connections and URL retrievals.
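As a minimal sketch, with hypothetical page and resource URLs, the following retrieves the main page and then fetches the embedded references in parallel, each fetch opening its own TCP connection, roughly as a browser would:

```python
import concurrent.futures
import urllib.request

# Hypothetical page and embedded references; substitute real URLs.
MAIN_PAGE = "http://localhost:8080/order-form"
EMBEDDED = [
    "http://localhost:8080/static/logo.gif",
    "http://localhost:8080/static/app.js",
    "http://localhost:8080/static/site.css",
]

def fetch(url: str) -> int:
    """Each urlopen call opens its own TCP connection, roughly mimicking
    a browser fetching embedded references over parallel connections."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.status

# Retrieve the main page first, then the embedded references concurrently.
fetch(MAIN_PAGE)
with concurrent.futures.ThreadPoolExecutor(max_workers=len(EMBEDDED)) as pool:
    statuses = list(pool.map(fetch, EMBEDDED))
print(statuses)
```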

Summary

Following the definition of the workload requirements that will satisfy a given load testing project's objectives, it is important to implement that workload as specified and not compromise its resource usage (impact) on the software under test. Satisfying the workload requirements involves a good understanding of the available load testing tools, as well as of how the client software (being simulated) interacts with the software under test. Examples of User Equivalents, Requests per Second, and web connection strategies have been given to promote an understanding of these issues, together with the tradeoffs and potential 'work-arounds' that could be considered when trying to satisfy a given set of workload requirements.