Centre of excellence. What a phrase. The NASA Space Center has truly been a centre of excellence, making its reputation not only from its successes but also from the manner in which it reacts to failure.
It has learned lessons from each failure and then striven to ensure not only that such failures were not repeated, but that the lessons were applied to future successful missions.
Applying such an attitude to the humbler discipline of application quality creates an enticing prospect. Let’s gather our finest minds to define best practice, create new working methods and implement a set of tools that deliver that best practice in the most efficient manner, achieving the greatest and fastest ROI on the resources employed.
So it’s hard to argue with the creation of Testing Centres of Excellence. Or is it? Let’s consider three reasons driving the creation of these centres.
Economies of scale: the global economy may be recovering, but few businesses believe that the land of milk and honey is here again, so costs, and particularly headcount, remain tightly controlled. Yet the business is equally hungry for new IT systems to hone its competitive edge and underpin its recovery. Doing more with the same finite resources is a challenge, and by centralising testing skills, a more efficient allocation of those resources may be achieved.
Focus: newer development methodologies are replacing or co-existing with traditional waterfall developments. Agile, Kanban and the like have radically altered the relationship between development, quality assurance and user acceptance testing, so perhaps it is wise to absorb the quality challenge into a dedicated group who can figure out the best way forward.
Skills: and besides, it is complex stuff, with the tools themselves often ill-suited to the challenge. What chance does a regular QA team have of successfully executing an agile development using legacy tools from the likes of HP, IBM or Borland? Perhaps a specialised team in a ‘centre of excellence’ can make these tools work even where they have historically failed.
The reality is different.
Good waterfall, agile and other developments are all based on excellent communication between everyone involved. Agile teaches us that, ideally, developers, testers and end users should all be permanently in the same room to ensure perfect alignment between the need and the delivered application. Quite how this can be achieved when one group is mentally, physically or organisationally partitioned away is anyone’s guess.
If the tool is hard to use or ill-suited to the task in hand, it is simply the wrong tool. Man up. Tools developed for the challenges of the late 20th century are unlikely to solve the problems we face twenty years later. If you’re spending a chunk of your time developing the tool rather than focusing on the quality task at hand, it is by definition unfit for purpose.
So are TCOEs a bad thing? It all depends on why they have been created.
If the goal is to bring the quality leaders together and continually evolve best practice, then there is real benefit.
If the goal is to select the most suitable testing technology and map it to the agreed best practice then this will form the basis of enhanced communication and productivity across all projects.
If the goal is to disseminate knowledge by placing a quality leader in each development project to train their cohorts and to communicate the lessons learned, then this will enhance quality while keeping developments aligned to the business need.
However, if the reality is that TCOEs are the result of throwing labour at QA to compensate for outdated testing technology, the result will be a growing gap in the ability of the business to meet the fast-evolving demands of its customers. Corporates need to get test smart, and quickly.
Posted by George Wilson, a founding director at Original Software