At its simplest, managing risk is a matter of balancing the cost of doing something against the loss that could be incurred if you don't. For network managers, that can mean deciding whether to put resources into patching a new vulnerability just in case it gets exploited, or to focus them instead on more pressing problems.

The fact that many choose the latter path is often criticised, but software patches sometimes cause more problems than they fix, so there is risk in patching too.

But what if you didn't need to patch - and you knew it? A trivial example might be a worm that attacks over a specific port - if that behaviour is blocked at the firewall, and if you have endpoint security to prevent infected laptops from bringing it inside, then why patch all the servers against it?

It's a contentious argument, but it is exactly what's proposed by Edward Cooper, the worldwide marketing VP of security assessment company Skybox. He says that instead of trying to patch for everything, admins would do much better to work out which patches are necessary and which are not.

"Companies tend to get exposed because they focus on tight vulnerability scanning and high-priority patches," he says. "We find that 95 percent of your labour is wasted because those vulnerabilities are already blocked by your policies. Most high priority vulnerabilities will already be blocked by your firewall, your security policies, and so on."

The problem, of course, is knowing which vulnerabilities are already blocked. As Cooper admits, modern networks are simply too big and complex for any normal human to get their head around all the possible openings and options.

"The biggest problem is the lack of visibility. People have made investments in security but they do not know how at-risk they are or where the risk is coming from," he says. "A typical network might have three-quarters of a million rules on it. If you understood those, then a patch could be done better by tweaking an ACL or a security rule in the firewall."

His answer, not surprisingly, is to automate the process, and that is what Skybox has done with its Skybox Assure appliance, which is designed to automatically discover everything that's in a network and then create a model of it that can be used for vulnerability testing.

Network modelling
"We give you a virtual model of your infrastructure and through attack simulations we can find which vulnerability is really going to give you the greatest problem, and which patches should and shouldn't be done," he says. "Our customers see a 50 percent reduction in the number of patches needed.

"We have a threat and vulnerability dictionary, when a new vulnerability or worm appears we create a behaviour for it and import it to the dictionary. Within four hours we will have updated the customer model with it."

But is it practical to model a network, with all its attendant switches, servers, firewalls and other variables, especially given how complex each of those systems is in its own right?

It's not as difficult as it might seem, Cooper argues: "Modelling requires algorithms and expertise - but compared to modelling an airliner, modelling an IT network is child's play! People have modelled networks for performance or bottlenecks, but no-one had applied the idea to security risk analysis until we started in 2002."

He adds that the Skybox system includes information collectors, which periodically connect to other systems such as VPNs, firewalls, routers, switches and servers, pulling out data such as a switch's routing and ACL (access control list) tables. Real-time collection is also possible, for example to warn when a newly connected system opens up a new risk.
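The collectors themselves are proprietary, but the principle is straightforward: log in to a device, pull its configuration, and parse out the parts the model needs. The sketch below assumes a device that accepts a Cisco-style 'show access-lists' command over SSH with key-based login; it is an illustration of the idea, not Skybox's code:

```python
# Rough sketch of an "information collector": fetch a device's ACL table and
# turn it into structured rules. The device name, command and ACL format are
# assumptions for illustration only.
import subprocess

def fetch_acl_text(device: str) -> str:
    """Run a show command on the device over SSH and return the raw output."""
    result = subprocess.run(["ssh", device, "show access-lists"],
                            capture_output=True, text=True, check=True)
    return result.stdout

def parse_acl(text: str) -> list:
    """Very naive parse of lines like 'permit tcp any host 10.0.0.1 eq 445'."""
    rules = []
    for line in text.splitlines():
        tokens = line.split()
        if tokens and tokens[0] in ("permit", "deny") and "eq" in tokens[:-1]:
            rules.append({"action": tokens[0],
                          "protocol": tokens[1],
                          "port": int(tokens[tokens.index("eq") + 1])})
    return rules

# A scheduler - cron, or the collector's own polling loop - would run this for
# every firewall, router and switch, feeding the results into the model.
```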

That information is then compiled into a network model, along with knowledge of which vulnerabilities exist on each device and what the traffic rules are within the network; the model can then be used to test attacks.

"An attack needs both a vulnerability and a route to get there," says Cooper. "Customers typically run attack scenarios overnight, based on the day's changes. The information comes from the network configuration and asset classifications - we have an engine to do that, plus our threat origin engine."

He adds that compared to electrical or mechanical engineering models, for which a set input will give a set output, network models involve probability analysis.

"A 100,000 node network might take an hour to analyse on a Windows PC," he says. "It needs a lot of memory - 1GB minimum."

The next problem, says Cooper, is that the network modelling approach itself requires considerable expertise and expense, which is why Skybox recently signed a deal with Verisign to provide Skybox Assure as a managed service.

Who's ready?
So how can an organisation tell whether it is ready to take on network modelling for itself?

"Number one is if the company is scanning for vulnerabilities already," Cooper says. "Number two is they already have a patch management system. Three, they are trying to quantify risk for regulatory needs or for business managers. Four, they have risk assessment processes already, such as a quarterly manual risk assessment, with a team collecting information on site and trying to analyse it."

But who would dare run without applying all the latest patches, if the resources to do it were there? The instinctive answer is 'no-one', and yet the risks inherent in software patching - the possibility of application incompatibilities or even newly introduced vulnerabilities - mean that no-one wants to patch before they are sure the patch is safe.

So something like Skybox could well be useful even if you plan to apply all the patches eventually, if only to tell you which to do now, and which can safely be left until they've been fully debugged by others.

"Symantec is a customer, so is Cisco - we had to prove to them that our analysis is valid," Cooper adds. "But lots of companies aren't yet at the stage where they can leverage technology like this - that's where our relationship with Verisign comes in."