Dec 26, 2015

Should non-functional requirements testing be done only for peak load?

There are some who argue that testing of non-functional requirements (NFRs) should only be performed at peak loads, positing that testing them at less than peak load is a waste of time. The argument goes that if the solution cannot meet the peak load requirements, then it is unsuitable anyway, so what is the point of testing at lower loads? Implicit in the argument is that if the solution performs as required at peak load, it will also perform as required at normal load.

First, let’s investigate the assertion that if the NFRs do not meet their peak load specifications, then the solution is of no use to the organisation.

What does peak load mean?

To do this, we need to define what we mean by ‘peak load’. A peak, by definition, is the summit: the highest point, a singularity. In the stock market, the price of a share fluctuates. One of those prices is the highest for that day, and that is the peak (the ‘high’ for that day). There are also highs for the current year, year-on-year highs, and an all-time high.

Do we mean that testing should be performed only for the very highest possible transaction volume? If so, what is that volume? Is it the maximum that the solution is rated for? This could well be something that occurs as a 1-in-25-year event.

In addition, does peak load refer to the peak volume that the environment is expected to produce, or the peak volume that the solution is specified to handle? Suppose we are talking about concurrent users of a website. If the organisation has 20 million customers, a theoretical peak load might be all of those customers simultaneously accessing the website. Another theoretical peak might be the website being subjected to a denial-of-service attack.

The other peak load is the volume that the system is rated to handle. Suppose we decide that the maximum is 3 million concurrent users. Do we then test the system for 3 million concurrent users or for 20 million?

Peak loads can occur once every 10 years

Depending on which definition of peak load is used, it may occur so rarely, and for such a short time, that it hardly matters. Suppose the peak load occurs for only half an hour on the 1st of January each year. If we tested only for that, we would not be testing the behaviour of the system for 99.99% of its operating time.
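As a quick arithmetic check of that figure, using the half-hour-per-year example:

```python
# Sanity check: a half-hour peak once a year is a tiny slice of operating time.
hours_per_year = 365 * 24            # 8760 hours in a non-leap year
peak_hours = 0.5                     # half an hour on the 1st of January
non_peak_fraction = 1 - peak_hours / hours_per_year
# non_peak_fraction is roughly 0.99994, i.e. more than 99.99% of the time
# the system is running at something other than this peak.
```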

Perhaps by peak load we mean not a single point but a percentile: say, the top 5% of the expected load.
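A percentile-based ‘peak’ could be derived from observed load figures. This is only an illustrative sketch; the sample numbers and the nearest-rank method are assumptions, not drawn from any real system:

```python
import math

def percentile(samples, pct):
    """Return the value at the given percentile (nearest-rank method)."""
    ordered = sorted(samples)
    # Nearest rank: ceil(pct/100 * n), converted to a 0-based index.
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

# Hourly concurrent-user counts observed over some period (made-up numbers).
hourly_users = [1200, 950, 400, 3100, 2800, 5000, 700, 1500, 2200, 4100]

# Treat the 95th percentile of observed load as the 'peak' to test against,
# rather than a one-off all-time high.
peak_load = percentile(hourly_users, 95)
```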

What about the specifications for average loads?

Most requirements specifications dictate the performance required from the system at average loads as well as at heavier loads. Without testing at average loads, how would we verify that the solution has met the specification? And if we will not test at average loads, what is the point of specifying performance for them? The solution may be performing as sluggishly at average loads as at peak loads, and we would not know.
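Verifying the specification at each stated load level might be sketched as follows. The class, field names, user counts and latency thresholds here are all hypothetical, and the measured figures would come from a load-testing tool rather than a hard-coded dictionary:

```python
from dataclasses import dataclass

@dataclass
class ResponseTimeSpec:
    label: str
    concurrent_users: int
    max_p95_ms: float   # 95th-percentile response time the spec allows

# One spec per load level named in the requirements: average, heavy, peak.
specs = [
    ResponseTimeSpec("average", 50_000, 300.0),
    ResponseTimeSpec("heavy", 500_000, 800.0),
    ResponseTimeSpec("peak", 3_000_000, 2000.0),
]

def verify(spec, measured_p95_ms):
    """Pass only if the measured latency meets the spec at that load level."""
    return measured_p95_ms <= spec.max_p95_ms

# Illustrative measurements; a real run would take these from the test tool.
measurements = {"average": 250.0, "heavy": 750.0, "peak": 1900.0}
results = {s.label: verify(s, measurements[s.label]) for s in specs}
```

The point of the structure is that a failure at the average level is reported just as visibly as a failure at peak.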

NFRs are not just about volume

Many NFRs have nothing to do with transaction volume but still have an analogue of the peak concept. Consider usability. A usability requirement might cater to extremes, such as being usable by a colour-blind person or by someone who cannot use a mouse. Shall we test only those extremes, on the argument that if the solution works well there, it works well for the average case? Of course not. Think also about the extremes for maintainability, accessibility, availability, security, portability, and so on.

Conclusion

NFRs should be tested against all of their specifications. If we specified performance at average levels, the requirement should be verified at average levels.
