Risky business

Are software errors ever acceptable? A recent study attributes responsibility for live errors in rolled-out applications to poor test management. The study, from a large management consultancy, carries an underlying message: test more. But does it always make sense to test more? Quality at any price is not always the way forward. When testing goes hand in hand with systematic risk management, it can sometimes make sense to go live with untested features... although the risk must be clearly understood.

Automotive IT covered the study under the headline: “Study: deficient testing halts or delays projects”. Well, the story doesn’t tell us anything new - after all, everyone makes errors. The key messages, however, are quite different: “deadline pressure inevitably leads the majority of IT departments to software errors. Commonly, extensive testing means missing the deadline”; and “every second software error, which is discovered after a change has been adopted, [is a result] of insufficient test management.”

Since the study comes from one of the 10 largest providers of consultancy services, it is hardly surprising that the underlying message is: if errors emerge, then there wasn’t enough testing. So: test more. And: of course, give us a ring!

Two fundamental questions therefore spring to mind. First: does it always make sense to test a product until it can be deemed error-free? And second: why is test management responsible for only 50% of the errors occurring in production? Where do the other 50% come from?

Let’s tackle the first point: more testing at any price? Even if in an ideal world we all expect error-free software, are we willing to pay higher prices for it? For as long as saving money is still on-trend, I have my doubts. An error in a piece of software is often acceptable if other factors carry more weight. I still remember a 2008 study which found that the deciding criterion for purchase was not functionality but usability. What matters most to users is ease of use, not the occasional error.

Weighing up the consequences

So, testing does not always have to go to the bitter (perhaps almost error-free) end. But what is absolutely essential, of course, is transparency about the risks of not testing. Good test management lays out a list of all the test objects at the beginning of the project; these are prioritised and the impact of possible errors is benchmarked. The results of these analyses are generally communicated to the project manager via risk management. In the extreme case where an application malfunction has no (or only minor) negative effects, it makes sense for the project manager to accept the risk and not test it. It can therefore be deliberate for an application to exhibit errors after it goes live, perhaps because other advantages are more important (for example, being first to market). Sure, I’m happy to take the risk - if I know what the risk is to start with.
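
To make this concrete, here is a minimal sketch of what such a risk-based prioritisation could look like. The scoring model (likelihood × impact on a 1-5 scale), the test objects and all figures are illustrative assumptions of mine, not something taken from the study:

```python
from dataclasses import dataclass

@dataclass
class TestObject:
    name: str
    likelihood: int  # how likely an error is (1 = rare, 5 = almost certain)
    impact: int      # damage if an error reaches production (1 = cosmetic, 5 = critical)

    @property
    def risk(self) -> int:
        # Classic risk matrix: exposure = likelihood x impact
        return self.likelihood * self.impact

# Hypothetical test objects listed at the start of the project
backlog = [
    TestObject("payment processing", likelihood=3, impact=5),
    TestObject("report layout", likelihood=4, impact=1),
    TestObject("user login", likelihood=2, impact=5),
    TestObject("help pages", likelihood=1, impact=1),
]

# Prioritise: highest exposure first, so scarce test effort goes where it matters
for obj in sorted(backlog, key=lambda o: o.risk, reverse=True):
    print(f"{obj.name:20} risk = {obj.risk}")
```

Sorting by exposure is exactly the transparency the project manager needs: the list shows where an accepted risk is cheap and where it would be reckless.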

Does this mean test at any price? No. Only test as much as your business can support. A test management system should, at every stage, be able to show the risks of not testing individual test objects, or of testing them only ad hoc. Testing here is closely connected to risk management. A project manager makes the decision - an informed decision on the basis of maximum transparency - and it is absolutely legitimate to accept a risk. Transparency about qualitative risks comes from the test manager. What isn’t acceptable (because it’s uncontrollable) is going live without this kind of risk examination - because this could affect the entire business.
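
Continuing the hypothetical sketch above, the transparency a test management system owes the project manager could be as simple as the following report: a (purely assumed) effort budget caps how much is tested, and the residual risk of everything left out is stated explicitly. Again, all figures and the simple budget rule are my own illustration:

```python
# Each entry: (test object, risk exposure, estimated test effort in person-days).
# Exposure values carried over from the sketch above; effort figures are invented.
backlog = [
    ("payment processing", 15, 8),
    ("user login", 10, 5),
    ("report layout", 4, 2),
    ("help pages", 1, 1),
]
budget = 10  # person-days the business is willing to spend on testing

tested, accepted, spent = [], [], 0
for name, risk, cost in sorted(backlog, key=lambda e: e[1], reverse=True):
    if spent + cost <= budget:
        tested.append(name)
        spent += cost
    else:
        accepted.append((name, risk))  # consciously left untested

print(f"Tested within budget ({spent}/{budget} person-days): {tested}")
print("Residual risks for the project manager to accept:")
for name, risk in accepted:
    print(f"  {name}: exposure {risk}")
```

The point is not the deliberately naive budget rule but the last lines of output: the untested objects and their exposure are on the table before go-live, so accepting them is a decision rather than an accident.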

And so, this brings us to the second point from the study: “Every second software error, which is discovered after a change has been adopted, [is a result] of insufficient test management”. The direct question to ask yourself is: how can I - if I want to (see above) - get above the 50% mark? With a test management system, this objective should also be achievable: almost 100% error-free!

Why only ‘almost’? Because at every step of test management, assumptions must be made that harbour risks: test objects are restricted, for example, to the installed systems, on the assumption that the operating system is working. It’s assumed that tasks only need to be carried out on the four most common browsers. It’s assumed that the hardware doesn’t fail. Naturally, these areas could also be brought into test management, but since testing them would take years, people prefer instead to carry out risk assessments and accept the small residual risks.

But 50%? Hardly likely… I think we could help out there. But then again I would hardly distinguish myself from the others if I simply asked you to… ‘give us a ring’!

Posted by Frank Simon, Head of Research, SQS.