Antivirus software is frequently tested for performance, so picking a top product should be straightforward: Select the number-one vendor whose software kills off all of the evil things circulating on the Internet. You're good to go then, right? Not necessarily.
The increasing complexity of security software has vendors griping that current evaluations do not adequately test the other protective technologies built into their products.
Relations between vendors and testing organizations are generally cordial but occasionally tense when a product fails a test. Representatives in both camps agree that the testing regimes need to be overhauled to give consumers a more accurate view of how different products compare.
"I don't think anyone believes the tests as they are run now ... are an accurate reflection of how one product relates to the other," said Mark Kennedy, an antivirus engineer with Symantec.
Representatives of Symantec, F-Secure and Panda Software agreed last month at the International Antivirus Testing Workshop in Reykjavik, Iceland, to design a new testing plan that would better reflect the capabilities of competing products. They hope all security vendors will agree on a new test that can be applied industrywide, Kennedy said.
A preliminary plan should be drawn up by September, Kennedy said.
One of the most common tests involves running a set of malicious software samples through a product's antivirus engine. The antivirus engine contains indicators, called signatures, that enable it to identify harmful software.
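In rough terms, signature matching means checking a file against a database of byte patterns known to appear in specific malware families. The sketch below illustrates only that core idea; the signature names and patterns are entirely made up, and real engines use far more sophisticated techniques (hashes, wildcards, code emulation).

```python
# Illustrative sketch of signature-based detection.
# The signature database below is hypothetical, not from any real product.
SIGNATURES = {
    "Fake.Malware.A": b"\xde\xad\xbe\xef\x00payload",
    "Fake.Malware.B": b"DROP TABLE users",
}

def scan(data: bytes) -> list[str]:
    """Return the names of any signatures whose byte pattern appears in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

# A "sample" containing one of the known patterns is flagged;
# a file with no known pattern passes the scan.
infected = b"some header " + b"\xde\xad\xbe\xef\x00payload" + b" trailer"
print(scan(infected))   # -> ['Fake.Malware.A']
print(scan(b"clean"))   # -> []
```

The weakness the article goes on to describe follows directly from this design: a scanner like this can only flag patterns already in its database, so brand-new or slightly altered malware slips through until a matching signature is added.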
But antivirus products have changed over the last couple of years, and "now many products have other ways of detecting and blocking malware," said Toralv Dirron, security lead system engineer for McAfee.
Signature-based detection is important, but an explosion in the number of unique malicious software programs created by hackers is threatening its effectiveness. As a result, vendors have added overlapping defenses to catch malware.