When it comes to raising the IT pulse rate, there’s no HR-related topic quite so effective as the subject of the performance rating “bell curve”. Why can no more than about 10% of us ever be regarded as turning in “outstanding” performances? Why must we always have a few percent who are “significantly under-performing”? What’s the effect on morale when people know they are competing against their colleagues for higher ratings?

These are excellent questions. They are best answered by imagining a world where no attempt was made to check consistency across the organisation. We all know managers who, given the freedom, would rate most of their team as “outstanding”. We also know managers who rate a world-beating performance as “meets expected standards”.

Some readers, like me, may have observed companies where people migrate from “tough” managers to “soft” managers, partly because they know they have a better chance of being promoted.

In short, having a target bell curve of some kind is the lesser of two considerable evils. Leaders, assisted by HR departments, have a responsibility to ensure consistency across their teams. Checking how any performance rating scheme is actually being applied is part of that.

When we researched this some time ago, we realised that a big part of the problem arose from inappropriate application of the laws of statistics. We built a computer simulation that generated typical ratings distributions for various team sizes, drawing from a theoretical bell curve.

We found that for teams of four, the ratings distribution bore no resemblance to the theoretical one. Double the team size to eight and the most we could say was that there should be at least a few “meets expected standards” ratings in the team.

Double the team size again, to 16, and we could be only slightly more prescriptive: about seven or more staff should be rated as “meets expected standards”, and it would be surprising if no-one was rated better than that.
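
For readers who want to try this for themselves, here is a minimal sketch in Python of the kind of simulation described above. The five rating bands and the percentages attached to them are illustrative assumptions, not the figures from our study; the point is simply to see how far small teams drift from the theoretical curve.

```python
import random
from collections import Counter

# Assumed theoretical bell curve over five rating bands (illustrative
# percentages only): A = outstanding, B = exceeds expectations,
# C = meets expected standards, D = below expectations,
# E = significantly under-performing.
BANDS = ["A", "B", "C", "D", "E"]
WEIGHTS = [0.10, 0.20, 0.50, 0.15, 0.05]

def simulate_team(size):
    """Draw one team's ratings independently from the theoretical curve."""
    return Counter(random.choices(BANDS, weights=WEIGHTS, k=size))

def summarise(size, trials=10_000):
    """Estimate how often a team of this size even roughly resembles the curve."""
    at_least_one_a = 0
    half_or_more_c = 0
    for _ in range(trials):
        counts = simulate_team(size)
        if counts["A"] >= 1:
            at_least_one_a += 1
        if counts["C"] >= size / 2:
            half_or_more_c += 1
    print(f"team of {size:2d}: "
          f"P(at least one A) = {at_least_one_a / trials:.2f}, "
          f"P(half or more C) = {half_or_more_c / trials:.2f}")

if __name__ == "__main__":
    for team_size in (4, 8, 16):
        summarise(team_size)
```

Running it for teams of 4, 8 and 16 shows the variability shrinking as team size grows, which is the effect described above: small teams simply cannot be expected to mirror the theoretical distribution.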

So it is unscientific to expect a high degree of consistency where the teams are not large. In fact, finding a high degree of consistency across small teams is evidence either that the ratings have been fixed, or that there is an implausibly efficient resource allocation process that systematically spreads good and bad performers across teams.

The conclusion from our study was that HR departments need to do more than just publish the theoretical bell curve. They must issue guidance saying roughly to what extent different team sizes can reasonably be expected to conform to that bell curve. Most IT people will buy into this rational approach.

Another important aspect of getting consistency is to use the right process. Good employers get their managers together with a list of proposed ratings and cross-check between teams to ensure standards are consistent. Does Joe from Desktop really rate an “A” when Jill from Operations only gets a “B”? Should Desktop have two A ratings out of four people when Application Development only gets five As out of 50 people?

There may be good answers to such questions – if so, all your managers should know what they are. Ratings are highly credible if they represent an informed consensus among all managers.

This can be an arduous and wearying process, especially in complex companies. One bank uses levelling reviews covering all IT staff in a particular location. Then each location’s proposed ratings are put together and another round of levelling takes place to ensure that a Singapore A-rating stacks up fairly against a London A-rating and so on.

Then some random global cross-checks are carried out to ensure that all the project managers’ or enterprise architects’ ratings are defensible when stacked against each other.

This can seem excessive, even to the HR people in the bank concerned. But there is nothing more demotivating than unfairness, and nothing more likely to trigger a resignation. As companies go global there are more and more comparisons to be drawn. It is no coincidence that the bank concerned has an excellent reputation for the quality of its people and its ability to hold on to them.

In conclusion, the bell curve in many companies needs to be applied more intelligently, and more effort needs to go into ensuring that the A and B ratings go to the right people.

Good IT leaders invest quality time in this. There are beneficial side-effects, too: if you are to understand and foster talent, you need to know how much talent there is in your organisation – and what is happening to it.