Vulnerability management tools do more than scan networks. Here's how to use them to detect and mitigate risk across the enterprise infrastructure.
Security-smart organisations have gone well beyond thinking just in terms of assessing and addressing vulnerabilities; vulnerability management is now a cornerstone of their corporate security, risk and compliance programs. As views of IT risk have matured, so have vulnerability management tools, which now support a continuous enterprisewide lifecycle of vulnerability discovery, remediation and reporting.
The scope of available products has also expanded as regulatory compliance requirements increase and companies seek better-defined and more strongly enforced change control. Vendors have also responded to the expanded threat landscape, in which network vulnerability scanning is still table stakes, but application-layer and even database security assessment and remediation have become essential.
The process of selecting a vulnerability management product is far more complicated than answering the question, "Who makes the best vulnerability assessment scanner?"
A full-featured vulnerability management product or suite of products must be able to support, at minimum, a repeatable lifecycle of asset discovery and enumeration, vulnerability detection, risk assessment, configuration compliance assessment, change management and remediation, verification, and auditing and reporting.
"The entire cycle of things you need to do is always ongoing," says a security manager at a major financial institution that uses McAfee vulnerability management products. "The tail end is that once you are done with remediation, you have to continue to repeat the process. You have to do it consistently and on a regular basis."
Most of the major players have been in this market for years, as product vendors, service providers or a combination of the two, giving you a broad range of companies with deep expertise and long track records to choose from. Vendors include Beyond Security, Critical Watch, eEye, GFI, IBM, Lumension Security, McAfee, nCircle, Perimeter e-Security, Qualys, Rapid7 and Tenable Network Security (creators of the formerly open source Nessus scanner).
The essentials of vulnerability management
At their foundation, vulnerability management tools perform two basic tasks: They help you discover the assets across your networks and they detect vulnerabilities, typically in the operating systems and key applications.
The discovery phase is worthwhile in itself, though organisations often overlook its importance, settling for working from what they know, or more accurately what they think they know, about what's on their network.
"You may think you have solid network inventory," says the security manager at the financial institution. "A company says they don't have wireless. How do they know if they aren't monitoring with wireless intrusion detection systems? They will be dinged by an auditor that asks that kind of question."
Vulnerability management tools scan the network for hosts, enumerate network services and use a variety of techniques to determine possible vulnerabilities, including obtaining banner information (such as OS and web server version) and determining port status, protocol compliance and service behavior. When an open port is discovered, they check the responding service against their database of vulnerability signatures.
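As a simplified illustration of the signature-matching step, a scanner that has grabbed a service banner might compare it against its database of known-vulnerable versions. The banner patterns and finding names below are hypothetical examples, not real advisories or any vendor's actual signature format:

```python
import re

# Hypothetical signature database: (banner pattern, vulnerable-version test,
# finding note). Real scanners ship many thousands of signatures.
SIGNATURES = [
    (re.compile(r"Apache/2\.2\.(\d+)"), lambda m: int(m.group(1)) < 15,
     "example-apache-flaw (fixed in 2.2.15)"),
    (re.compile(r"OpenSSH_4\."), lambda m: True,
     "example-ssh-flaw (affects all 4.x releases)"),
]

def match_banner(banner):
    """Return the finding notes whose signature matches the banner."""
    findings = []
    for pattern, is_vulnerable, note in SIGNATURES:
        m = pattern.search(banner)
        if m and is_vulnerable(m):
            findings.append(note)
    return findings

print(match_banner("Apache/2.2.8 (Unix)"))
# A patched banner such as "Apache/2.2.22" produces no findings.
```

In practice banner matching alone is prone to false positives (banners can be suppressed or faked), which is one reason authenticated scans and protocol-behavior checks matter.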
Passive scans, which rely on monitoring network traffic, are particularly useful for basic asset discovery and enumeration because they are quick and completely unobtrusive. They can also be used to supplement active scanning, in particular when you don't have the time or authorisation to perform an aggressive scan on a live production network.
"You can learn, for example, where all your email and DNS servers are, you'd be surprised how hard that is in a large organisation," says Ron Gula, CEO and CTO of Tenable Security. "A valid DNS server behind a [network address translation]-enabled firewall may be accepting DNS queries, but not from a scanner. Being able to passively find all that information is very useful."
Active scans may be performed with or without authentication credentials. Credentials allow scanners to log in to a server or other asset with administrative rights and gain extensive, detailed information. These scans are very accurate; many vulnerabilities cannot be verified without authenticated access, and credentials are necessary when the scanner is gathering configuration information. Authenticated scans are also more time-consuming and potentially disruptive, and they add security overhead because you must safely manage those credentials.
Vulnerability management tools typically support both passive and active scans.
Agent versus agentless vulnerability scanning
The religious wars among vendors and end users, however, revolve around agent-based versus agentless scanning.
On the plus side, agents:
- Are always on, allowing you to obtain information without launching a scan.
- Provide highly detailed configuration, OS, service and application information and reduce the chance of false positives.
- Are not disruptive.
- Are not susceptible to the occasional failures of active scans (firewall interference, dropped packets and so on).
On the down side, agents:
- Have to be managed, which requires organisations to take on the overhead of handling yet another agent on their machines.
- Can cause conflicts, depending on what else is running.
- May be prohibited by regulation or policy on certain machines.
- Cannot be used on devices that do not have interfaces that support them, including network devices such as routers and switches.
- Can only be placed on known managed devices, so you still need scanning, at a minimum, for asset discovery.
"Vulnerability-assessment technology needs to be able to deploy an agent whenever possible and support agentless where it is not possible," says Chenxi Wang, a principal analyst for Forrester Research. "Agents give a much more in-depth view of the configuration of a device and the services running on it. There's no comparison in the information you are able to see."
An in-between approach is to use temporary agents, which can be placed on a target device to gather information in the absence of a scan, then deleted when the job is done.
Patching and beyond
Supporting the vulnerability management lifecycle: The core use case is still security patching. You find out which servers, routers and other devices on your network are running OSes or apps with particular vulnerabilities so you can apply the correct security patches. Then you re-scan to determine whether the patches were applied successfully.
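The patch-then-re-scan loop amounts to diffing two sets of scan results. A minimal sketch, with made-up host names and finding IDs, might look like this:

```python
def verify_remediation(before, after):
    """Compare two scan results, each a dict of host -> set of finding IDs.

    Returns (fixed, outstanding): findings closed since the first scan,
    and findings that survived the remediation pass.
    """
    fixed, outstanding = {}, {}
    for host, findings in before.items():
        remaining = findings & after.get(host, set())
        closed = findings - remaining
        if closed:
            fixed[host] = closed
        if remaining:
            outstanding[host] = remaining
    return fixed, outstanding

before = {"web01": {"ms08-067", "weak-ssl-cipher"}, "db01": {"default-password"}}
after = {"web01": {"weak-ssl-cipher"}, "db01": set()}
fixed, outstanding = verify_remediation(before, after)
# fixed        -> {"web01": {"ms08-067"}, "db01": {"default-password"}}
# outstanding  -> {"web01": {"weak-ssl-cipher"}}
```

Anything left in the outstanding set is exactly what the change control process described below should keep a work ticket open for.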
However, patching is likely to be inefficient and somewhat hit-or-miss without a formal vulnerability management program with well-defined policies, business risk assessment and change control processes. Mature enterprise-grade vulnerability management tools provide:
- Asset discovery, giving organisations as complete a list as possible of authorised and rogue devices and applications on their networks.
- Vulnerability detection, via agent-based or agentless technology.
- Risk assessment, based on a combination of the severity of known vulnerabilities, the likelihood of exploit and the value the organisation assigns to the vulnerable asset. This enables the prioritisation of remediation.
- Change control. This is where the rubber meets the road. The tool should send a work ticket through whatever system the company uses to the appropriate operations personnel for remediation, with a priority based on risk assessment. The change control process should be able to verify that remediation has been performed and verified by a re-scan before the ticket can be closed.
- Auditing for regulatory compliance and internal reporting. The tool should provide both out-of-the-box and customisable reporting that can map vulnerability management data to applicable regulatory and policy requirements.
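The risk-assessment step above, combining severity, exploit likelihood and asset value, can be sketched as a simple scoring model. The multiplicative weighting and the sample findings are illustrative assumptions, not any vendor's actual algorithm:

```python
def risk_score(severity, exploit_likelihood, asset_value):
    """Toy risk model: multiply the three factors.

    severity:            0-10 (e.g. a CVSS-style base score)
    exploit_likelihood:  0.0-1.0
    asset_value:         1-5, business value assigned by the organisation
    """
    return severity * exploit_likelihood * asset_value

findings = [
    {"host": "hr-db", "vuln": "default-password", "severity": 9.0,
     "likelihood": 0.9, "value": 5},
    {"host": "kiosk", "vuln": "old-browser", "severity": 7.0,
     "likelihood": 0.6, "value": 1},
]

# Remediate highest-risk findings first.
queue = sorted(findings,
               key=lambda f: risk_score(f["severity"], f["likelihood"], f["value"]),
               reverse=True)
# A severe flaw on a high-value HR database outranks a slightly less
# severe flaw on a low-value kiosk, so "hr-db" heads the queue.
```

The point of any such model is the same: remediation priority should be driven by business risk, not just by raw vulnerability counts.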
One can debate whether the term "vulnerability" should be applied only to exploitable weaknesses in code, or if it should be extended to configuration errors or failures to follow configuration best practices. Nevertheless, failing to securely configure devices, OSes and even applications in compliance with regulatory controls and corporate policy can leave them vulnerable to attack.
"Vulnerabilities come and go. You're going to have them," says Renaud Deraison, Tenable's chief research officer and the creator of Nessus. "Configuration auditing is making sure security strategy is heading in the right direction, whereas patching is more day-to-day."
Regulations are a key driver, requiring companies to follow standard secure configuration practices for everything, including default administrative passwords, enabled or disabled services and appropriate cipher strength. Most vulnerability management vendors have therefore expanded their capabilities to encompass configuration detection.
Asset configuration has to be determined by authenticated scanning or agents and compared to regulatory guidelines, policy and usually a gold standard for the device (for example, an Apache web server or a Cisco router). It's a tedious business without an automated tool.
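At its simplest, that comparison is a diff of observed settings against the gold standard. The baseline settings below are illustrative, not a real hardening benchmark:

```python
def audit_config(actual, gold_standard):
    """Compare a device's observed settings against a gold-standard baseline.

    Both arguments map setting name -> value. Returns the deviations as
    {setting: (expected, observed)}.
    """
    deviations = {}
    for setting, expected in gold_standard.items():
        observed = actual.get(setting, "<missing>")
        if observed != expected:
            deviations[setting] = (expected, observed)
    return deviations

baseline = {"telnet": "disabled",
            "ssl_min_version": "TLSv1",
            "default_admin_password": "changed"}
router = {"telnet": "enabled",
          "ssl_min_version": "TLSv1",
          "default_admin_password": "changed"}

print(audit_config(router, baseline))
# {'telnet': ('disabled', 'enabled')}
```

An automated tool does exactly this, at scale and continuously, against every managed device.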
"In previous companies that I worked for, I actually would go manually and check certain configurations," says Shaheen Abdul Jabbar, a security consultant and author of the Snajsoft.com blog. "If I'd had an automated tool, it would have saved my day."
For configuration issues, the detection-remediation-audit cycle is essentially the same as for vulnerabilities. The core difference is that you assess against an established baseline secure configuration rather than just detecting a coding flaw.
"Configuration checks against a baseline, a minimum target to meet, are critically important," says the financial institution's security manager. "I go to great pains for the standard implementation I want. If there's a major deviation, I want to know right away."
Scanning applications and databases
Many vulnerability management products have further expanded their range of capabilities to include web application and, in some cases, database vulnerability scanning. While network scanning is still the core function, leading vendors are adopting the attitude that a vulnerability is a vulnerability is a vulnerability.
"I should be able to come to one tool and say, 'I want to scan this box for everything: applications, network, OS patches' because I care about vulnerabilities, period," says the security manager.
Nevertheless, application vulnerability testing is complicated and difficult. Dynamic testing of web applications, for example, requires crawling sites and testing a wide and complex assortment of potential weak spots, such as myriad combinations of input possibilities, to determine if there is an exploitable vulnerability.
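At its simplest, one such dynamic test injects a marker payload into an input and checks whether the application reflects it back unencoded, the classic reflected cross-site scripting check. In this toy sketch the two handler functions stand in for real HTTP requests to a live application:

```python
import html

def vulnerable_app(params):
    # Echoes the search parameter into HTML without encoding it.
    return "<p>Results for: %s</p>" % params.get("q", "")

def hardened_app(params):
    # Encodes user input before rendering it.
    return "<p>Results for: %s</p>" % html.escape(params.get("q", ""))

PROBE = "<script>alert(1)</script>"

def is_reflected(app):
    """Send a marker payload and check whether it comes back unencoded."""
    return PROBE in app({"q": PROBE})

print(is_reflected(vulnerable_app))  # True
print(is_reflected(hardened_app))    # False
```

A real scanner must do this across every crawled page, every parameter and many payload variants, which is why dynamic application testing is so much more expensive than a network scan.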
The majority of vulnerabilities are found at the application layer, and that's where attackers focus most of their energy. But organisations have been very slow to respond to this threat vector. As businesses turn up the heat to bring new web applications online, developers are under intense time and budget pressure to focus on feature functionality, not security. The result is that large organisations have thousands of applications that are rife with security flaws, and few enterprises have anything resembling secure software development lifecycles.
Application vulnerabilities are difficult to detect and take a lot of time and money to remediate, especially when compared to the ease of patching a Windows vulnerability or correcting a configuration error.
Moreover, the application security market has long been the province of highly specialised product and service providers who have the application expertise and experience that remains in short supply elsewhere. So why look for application scanning in a vulnerability management tool?
Compliance is a big factor. Enterprises can at least take a first cut at adhering to regulations that now require application security measures.
"Because of compliance, the CSO has to make sure that these applications are deployed securely in production environments," says Amer Deeba, chief marketing officer at Qualys. "Vulnerability management is converging to bring it all together in one centralised view." So, he says, network administrators can deal with network vulnerability scan results, and application owners can work with developers to remediate vulnerabilities found by the application scanner.
"But the security organisation and the CISO is responsible for the security for it all," he adds.
Database vulnerability scanning is somewhat less in demand. Databases are generally relatively insulated on the back end of corporate IT infrastructure and are exposed to attack primarily through the public-facing applications they interact with. For that reason, the prime security focus is, correctly, on how to keep attackers from accessing the database through the applications.
Database security, and the regulatory controls governing it, have focused primarily on controlling privileged insiders through separation of duties, strong access control and activity monitoring.