One of the first things I try to do in any new security program is measure success. To do that, I first need to determine our current state and quantify it. That gives me the ability to set quantifiable goals, such as "get 90% of our antivirus signatures up to date" or "patch 90% of our systems with all critical security updates."

It also gives me a baseline I can use to show progress, so when the CIO asks me what I've been doing all year, I can give a clear, quantitative answer. That's one way to approach return on investment - one of the hardest things to do in security is to show what value the company is getting for its security spending, and numbers help with that.

So, I've been finding things to measure. One of those things is the state of our patching practices.

I consider patching one of the most important things a company can do to protect itself against viruses and malicious exploits. We've had some annoying virus outbreaks lately, such as the Conficker infection, and their impact could have been significantly reduced, possibly even eliminated, if our company's computers had had all the latest security fixes installed. A really good way to get the right attention on the patching problem is to show how many of our systems are behind, and by how much - which is where statistics come in.
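Statistics like these take very little tooling to produce. As a minimal sketch (the inventory fields, host names, and numbers below are hypothetical examples of mine, not our actual data), suppose you have an export listing each system's count of missing critical patches and the age of its oldest missing patch; two summary numbers - the percentage of fully patched systems and the worst lag in days - fall out directly:

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    missing_critical: int      # count of missing critical patches
    oldest_missing_days: int   # age in days of the oldest missing patch (0 if none)

def patch_stats(hosts):
    """Summarize patch compliance: percent fully patched, and lag among stragglers."""
    total = len(hosts)
    behind = [h for h in hosts if h.missing_critical > 0]
    pct = 100.0 * (total - len(behind)) / total if total else 100.0
    worst = max((h.oldest_missing_days for h in behind), default=0)
    return {
        "total": total,
        "fully_patched_pct": round(pct, 1),
        "hosts_behind": len(behind),
        "worst_lag_days": worst,
    }

# Hypothetical inventory export
fleet = [
    Host("web01", 0, 0),
    Host("db01", 3, 45),
    Host("file02", 1, 12),
    Host("mail01", 0, 0),
]
print(patch_stats(fleet))
# → {'total': 4, 'fully_patched_pct': 50.0, 'hosts_behind': 2, 'worst_lag_days': 45}
```

Numbers in this shape map straight onto goals like "patch 90% of our systems": track `fully_patched_pct` over time and the trend tells the story.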

The good news is that our system administrators are not at all resistant to patching - a nice change of pace from other companies I've worked at.

In fact, my experiences in other companies had led me to be perhaps a little too cautious about approaching the subject. I met with our CIO last week and showed him the patching statistics I've gathered.

To my surprise, he didn't give me the response I expected - the usual pushback about how patching reduces system stability, or takes too many resources, or something like that. Instead, the CIO asked me why the problem hadn't been solved yet!

He figures patching is an elementary job for system administrators, and there isn't any good reason for us to be behind, other than negligence. I wasn't at all prepared for that response. I was ready to defend the idea of a solid patching program, and instead found myself on the spot to explain the lack of one.

So I realised that there is a balance between analysis and action: too much data collection and number-crunching can slow things down in an environment where the numbers aren't needed to convince people to do the right thing.

Not enough data can be equally bad, so my goal should be to strike a balance somewhere in between.

Now I need to focus my efforts on getting our patch situation improved.

On the plus side, I have plenty of good data to show how things will improve now that we're working on the problem. "Measure twice, cut once" is an old rule of thumb most of us have heard, but after measuring, it's time to make the cut.

This week's journal is written by a real security manager, "J.F. Rice," whose name and employer have been disguised for obvious reasons. Contact him at [email protected]