In any development project there are three key metrics to measure and control: productivity, quality and duration.

When it comes to software application development, it is crucial at every stage of the project, from planning to production, to know whether it is tracking against your time and budget projections.

In order to measure this progress, there are three key performance indicators (KPIs) to watch. This triad of software benchmarking criteria is Productivity, Quality and Duration (PQD).

The KPI Trio

Productivity can be analysed by looking at how many units of code are produced each hour, day or other specified unit of time. The Quality of a project is essentially measured by looking at the percentage of defects removed during the development process and how many remain at the product’s final release.

Duration is calculated by looking at the number of staff required to do a task within a unit of time, say an hour or a week. Theoretically, the more people dedicated to a project, the less time it should take. However, this equation only works up to a point. There is a critical point beyond which a task suffers diminishing returns.

That is, no matter how many people are thrown at a project, there comes a point where the result is not going to get any better or be done any faster; it will just cost more money.
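
As a rough illustration of how the three KPIs might be computed, the following sketch uses invented figures and simple formulae: function points per person-month for productivity, defect removal efficiency for quality, and a naive effort-divided-by-headcount model for duration that ignores the diminishing returns described above.

    # Hypothetical PQD calculation; all figures are invented for illustration
    function_points = 600      # size of the delivered application
    person_months = 40         # total effort expended
    defects_removed = 180      # defects found and fixed before release
    defects_remaining = 20     # defects still present at final release
    team_size = 8              # full-time staff on the project

    productivity = function_points / person_months                      # FP per person-month
    quality = defects_removed / (defects_removed + defects_remaining)   # defect removal efficiency
    duration = person_months / team_size                                # calendar months, assuming perfect parallelism

    print(f"Productivity: {productivity:.1f} FP per person-month")
    print(f"Quality (defect removal): {quality:.0%}")
    print(f"Duration: {duration:.1f} months (naive model, ignores diminishing returns)")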

Normative benchmarks

Having obtained Productivity, Quality and Duration metrics for your software project, the next step is to assess how well these KPIs stack up against the industry norm, or median value. For example, a normative benchmark for productivity might state that this type of project, or component, can be completed at an average rate of X software function points (application elements) per person per month. If your results surpass this, does it mean that your project is better than average in at least one of the KPIs?
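
By way of a sketch, comparing a project’s measured productivity against such a norm is a one-line calculation; both the benchmark and the project figure below are invented for illustration.

    # Compare measured productivity against a hypothetical normative benchmark
    industry_norm = 10.0   # FP per person-month for this type of project (illustrative)
    project_rate = 12.5    # your measured productivity

    ratio = project_rate / industry_norm
    print(f"Project runs at {ratio:.2f}x the industry norm "
          f"({'above' if ratio > 1 else 'at or below'} average productivity)")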

How ‘normative’ is the norm?

How universal are the norms against which you are measuring your PQD data?

This depends on who is doing the benchmarking and whether they are using a truly universal framework for evaluating software development organisations (e.g. America’s CMMI), or are measuring the project/provider against a particular subset: say, all financial services software produced in India in 2007.

Or, is the scope of the comparison limited to all of the other projects done by the company undertaking the benchmarking? Clearly the results will vary.

Defining your parameters

Normative benchmarking is only as good as the parameters being used for comparison. It is very important, therefore, to define the universe against which you are comparing your software development provider or project.

What is the anatomy of your benchmark criteria, and how do you decide what it should be? Chances are it will be different for every job.

At Metri and SPR we specialise in providing PQD industry averages that are cross-sectioned according to market, type and location of project, or any other benchmarking subset a client may require, such as a specific set of peers. That said, it is an ongoing challenge to find absolute market norms, and for this reason it is often said that industry averages are ‘in the eye of the beholder’.

Objective vs normative benchmarks

The only way to combat this relativity trap is to look for objective, rather than normative, metrics. How do we find these? By more narrowly defining the parameters and getting more specific.

For example, if your aim is to work with, or to be, the highest-quality producer of ERP software within the financial services industry, then you only need to compare your product against the universe of ERP software providers serving that sector.

But you can drill down even further, because you probably only want to be measured against those who are successful. Remember, normative means normal, so normative benchmarks only reveal the average. Organisations or suppliers that want to be best-in-class must be able to identify the top 5 or 10%.
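
One way to find that top slice, sketched here with an invented peer group, is to take a high percentile of the peers’ productivity figures and treat it as the best-in-class threshold.

    # Identify the top 10% of a peer group by productivity (figures are invented)
    peer_productivity = [6.2, 7.5, 8.1, 9.0, 9.4, 10.3, 11.0, 12.8, 14.5, 16.2]

    peer_productivity.sort()
    cutoff_index = int(len(peer_productivity) * 0.9)   # start of the top 10%
    best_in_class = peer_productivity[cutoff_index:]

    print(f"Top-10% threshold: {peer_productivity[cutoff_index]} FP per person-month")
    print(f"Best-in-class peers: {best_in_class}")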

Perfection may not be the goal

There is, however, a trap here, because few organisations can realistically attain best-in-class status within the confines of their resources and budget, and for some the extra investment required may not serve the business objectives. For others, as long as they are above average and comply with the norm, they are satisfied.

Therefore, when it comes to benchmarking software development programmes, whether you are the customer or the supplier, PQD benchmarks may not reveal that you are a gold medal winner, but they may well show that your project is on course to finish the marathon intact. Which, in itself, is no small achievement!

Michael Bragen is Senior Consultant at Software Productivity Research. Paul Michaels is Director of Consulting at Metri.