Thursday, 23 May 2013

No easy answers from ISO/IEC 29119

The new ‘definitive’ software testing standard may not meet the needs of systems architects.

The value of standards seems to be that they help you to ask for simple answers instead of complicated ones. For example, I don't have to ask for a detailed report on the combustion properties of the fuel in the petrol pump. If I really want to know that it’s the right kind of fuel, I just need to look on the pump for a label mentioning the standard ‘BS EN 228:2004’. Similarly, as an information systems architect, I don't have to ask for a detailed explanation of the remote management interface on a home router: for my purposes it's often enough to ask whether it supports the standard ‘TR-069’.

Sometimes standards allow you to ask questions that are so simple that they are binary: the 'does it comply' questions. Sometimes standards leave you asking for a little more than that: for answers about which options or levels within a standard have been complied with. But always, a standard reduces the complexity that its users have to deal with. It enables its users to engage at a superficial level, and leaves the complexity to the people who have to implement it. It’s difficult to imagine how a standard could help its users at all, otherwise.

This week I went to listen to Dr Stuart Reid speaking about ISO/IEC 29119, the forthcoming suite of software testing standards, at a BCS event. It was a really super talk. Software testing is perhaps not the most obviously exciting of topics, but Stuart’s explanation of the standards was marvellously lucid, and his tales of the process of standardization were mildly hair-raising. He should know, because he chaired the standardization committee.

I was at the talk because I’m a systems architect. My interest in software testing is unashamedly superficial. I don't want to know a lot of detail about how software has been tested. If I'm assessing the suitability of a piece of software for a given use, I'll be interested in what it does, what interfaces it supports, how well it's been developed and how well it's been tested.

Getting a simple answer to 'how well it's been tested' has always been difficult. Testers' answers are so often uninformative. Either they say too little, as in 'Yes Sir, it's been tested very well!', or they swamp the questioner with a load of technical detail that fails to directly address the question.

I went along to Dr Reid’s talk, full of naïve enthusiasm. If a new testing standard were to do one useful thing, surely that would be to enable a tester to tell me 'I've tested it to ISO/IEC 29119’, and for that statement to give me a useful indication of how well it's been tested. But no, that's not what ISO/IEC 29119 does. It defines process models and common terminology, but nowhere (if I've understood Dr Reid correctly) does it define standards for how well-tested a piece of software is.

Of course, to do that would be enormously difficult. Testing is complex and multifaceted, and to define any simple scale of degrees of testing would be very challenging. But that is exactly the sort of challenge that standards take on. BS EN 228:2004 cuts through the complexity of fuel quality definition to give me a single point of reference; TR-069 does the same for the complex and multifaceted world of device management.

I’m left with the feeling that ISO/IEC 29119 will help the testers, by giving them some helpful processes to follow; it will help the consultants and the trainers, by giving them something new to be expert about; but it won’t help the end users, the people who want to know how well-tested some software is. Either I’ve missed the point, or the standard has.
