A common inquiry to Forrester is a request for quality benchmarks. Testing groups are struggling to figure out how well they’re doing and whether the processes they’re fighting for are making a difference.
QA value can be hard to define and to prove if development and project teams don’t regularly collect metrics that measure productivity or cost improvements. Because of this, QA managers are often left struggling to justify their existence.
A familiar story to any QA manager (and I was one once myself) is the one where, during a planning meeting, an IT executive turns to him or her and says, “You guys sure are expensive and yet we still have bugs. Why should I invest any more?” That sends the manager scurrying to find the best way to measure the team’s effectiveness.
Without a baseline or a ready understanding of current measures, one of the most visible options is to compare your organisation to industry benchmarks.
Typical requests include:
- Defect density
- Average cost of defects
- Average cost to repair defects
- Defect removal efficiency
- Defect detection by phase
- Defect origin
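For the first two or three items on that list, the commonly cited definitions boil down to simple ratios. A minimal sketch of the arithmetic, using hypothetical counts (the variable names and sample numbers below are illustrative, not benchmark data):

```python
# Hypothetical project numbers -- illustrative only, not benchmark data.
defects_found_in_test = 180   # defects caught before release
defects_found_in_prod = 20    # defects reported after release
size_kloc = 50                # application size in thousands of lines of code

# Defect density: defects per unit of size (KLOC here;
# function points are the other common size unit).
total_defects = defects_found_in_test + defects_found_in_prod
defect_density = total_defects / size_kloc

# Defect removal efficiency (DRE): share of total defects
# removed before release.
dre = defects_found_in_test / total_defects * 100

print(f"Defect density: {defect_density:.1f} defects/KLOC")  # 4.0
print(f"DRE: {dre:.0f}%")                                    # 90%
```

The denominator is the point the rest of this post turns on: the same raw defect counts yield very different densities depending on whether size is measured in KLOC, function points or something else, which is exactly why cross-industry comparisons are slippery.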
There are a number of studies out there that provide “benchmark” data: the average number of defects in a project (though not the scope of the projects surveyed), the average defect density and the average defects by phase.
All of this data is interesting and it provides good bullets on a slide when communicating the importance of testing and quality assurance practices, but what it doesn’t tell us is just as important, if not more so.
Industry benchmarks look across a wide number of projects, but can they be compared to yours? Many benchmarks, especially those published by Capers Jones, provide great information, but they are based on function points.
Agreed, function points provide a very measurable way to determine the scope and size of your applications, but how many organisations have standardised on function points to benchmark their applications?
Industry benchmarks should not be looked at for direct comparison, but as a guide, a way to provide a framework for looking at how you should measure your effectiveness.
Truly measuring effectiveness is not an overnight process, and you must measure what matters to your organisation: are fewer defects being released? Are your customers or stakeholders happy with the quality?
Are you cutting down on rework? Are you finding defects earlier in the cycle? Those are the measures that should build your baseline.
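One way to put “are you finding defects earlier?” on a baseline is to track, release over release, the share of defects caught in early phases versus those that escape to production. A rough sketch, with hypothetical release names, phase labels and counts:

```python
# Hypothetical per-release defect counts by detection phase
# (release names, phases and numbers are illustrative).
releases = {
    "R1": {"requirements": 5, "design": 8, "test": 40, "production": 12},
    "R2": {"requirements": 9, "design": 12, "test": 35, "production": 7},
    "R3": {"requirements": 14, "design": 15, "test": 30, "production": 4},
}

for name, phases in releases.items():
    total = sum(phases.values())
    early = phases["requirements"] + phases["design"]
    escaped = phases["production"]
    print(f"{name}: {early / total:.0%} found early, "
          f"{escaped / total:.0%} escaped to production")
```

A rising “found early” share and a falling “escaped” share over successive releases is a trend measured entirely against your own history, with no industry sizing assumptions required.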
Then you can start applying context by aligning cost to defect detection and repair. Benchmarking against the industry is helpful, but you have to be realistic that it’s an apples-to-oranges comparison. Benchmarking against yourself is the most effective approach.