Security vendor Carbon Black has issued a report warning businesses not to place too much trust in machine learning-based security products.

The company surveyed 400 non-vendor security professionals, who overwhelmingly agreed that AI-equipped security technology is still in its nascent phase, and that organisations must therefore proceed with caution when adopting such products.

Although AI-based technologies do have their place, it would be a mistake for businesses to buy into vendor hype or over-rely on these systems, the report said.

The findings echo a recent report from ABI Research that dedicated a significant section to warning against vendors peddling machine learning as 'snake oil'.

Security professionals cited high false positive rates and the ease with which machine learning-based technologies can be bypassed – at present – as the most serious barriers to adoption.

Respondents also said the high false positive rate could have knock-on effects on operations, such as considerable slowdowns when a team of researchers has to sift through and verify each alert.

Of course, the other side is that plenty of customers will find machine learning-enabled security invaluable – especially smaller organisations where the security team is also the IT team, and automated processes matter most.

See also: Church of England puts a stop to ransomware with Darktrace

But Carbon Black says that at present, machine learning and artificial intelligence technologies should be seen as a way to augment processes rather than as a wholesale solution.

According to the report: "AI technology can be useful in helping humans parse through significant amounts of data. What once took days or weeks can be done by AI in a matter of minutes or hours. That’s certainly a good thing.

"A key element of AI to consider, though, is that it is programmed and trained by humans and, much like humans, can be defeated. AI-driven security will only work as well as it’s been taught to."

Speaking at a roundtable event in central London, Rick McElroy, security strategist for Carbon Black, said: "The community has said the biggest benefits are this: it augments human decision making. I 100 percent agree with that, it should absolutely allow you to make better decisions. And it learns your company’s security preferences. But here’s the biggest risk – it’s easy to bypass, so people are relying on things that are easy to bypass.

"False positives could cause you and your team hundreds of hours to go and figure out a false positive, only to end up with: ‘oh, we just wasted a week’s work on a false positive that never existed’."

According to the research, 70 percent of respondents felt that attackers are able to get past machine learning-driven security products, and a third of respondents claimed it was "easy" to do so.

See also: Machine learning in cybersecurity: what is it and what do you need to know?

Carbon Black recommends that security teams looking into using machine learning tools make sure they have the existing data to properly train the technology with. That includes a "massive body of baseline data, a torrent of detonation data, and statistics and comparisons among behaviours for validation" to generate the best patterns of malicious behaviour.

"I think the important thing to remember with AI is this," said McElroy. "It is a thing we’re all going to start using and will eventually put me out of a job. How far on the horizon that is, I have no idea. But today if you’re solely dependent on AI to make your security decisions you’re going to be in a bad way."

The report also found a dramatic increase in non-malware attacks since the start of 2016. Carbon Black noticed that almost every one of its customers had been targeted by a non-malware attack throughout 2016, which was part of the reasoning behind commissioning the report.

A non-malware attack is one that doesn't place executables on the target endpoint but instead uses existing software, applications or authorised protocols to carry out the attack. PowerShell, a system administration tool present on every Windows box, is a good example.

"About five or six years ago at Black Hat some researchers said PowerShell is going to be the thing and they wrote a tool to leverage PowerShell attacks," McElroy said.

In 2016, these attacks evolved into PowerShell-based ransomware such as Powerware. The Squiblydoo attack was similarly built to slip past application whitelisting by abusing existing, trusted system tools to run unapproved scripts.
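Attacks like these share a telltale trait: because no new executable lands on disk, the malicious activity is often visible only in the command line of a legitimate binary. As a minimal illustrative sketch (the indicator patterns and function name here are hypothetical examples, not taken from the Carbon Black report), a defender might flag suspicious invocations of trusted tools like this:

```python
import re

# Illustrative (far from exhaustive) command-line indicators of
# "living off the land" activity: PowerShell encoded commands and
# download cradles, plus the Squiblydoo pattern of regsvr32 being
# pointed at a remote scriptlet via scrobj.dll.
SUSPICIOUS_PATTERNS = [
    re.compile(r"powershell.*-enc(odedcommand)?\s", re.IGNORECASE),
    re.compile(r"powershell.*downloadstring", re.IGNORECASE),
    re.compile(r"regsvr32.*/i:https?://.*scrobj\.dll", re.IGNORECASE),
]

def flag_command_line(cmdline: str) -> bool:
    """Return True if a process command line matches a known suspicious pattern."""
    return any(p.search(cmdline) for p in SUSPICIOUS_PATTERNS)
```

Real endpoint products combine signals like these with behavioural context, since simple string matching is exactly the kind of check attackers find easy to bypass.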

Respondents told Carbon Black that they had seen some other particularly creative non-malware attacks, including efforts to affect a satellite transmission, impersonating the CSO while trying to access corporate intellectual property, and spoofing login systems so login information was immediately made available to the attacker.

"Spoofing logins to appear authentic – we call that living off the land," said McElroy. "The best thing I want to do as an attacker is look exactly like your system administrator, and if I can get that level of access I can do what I want for years and you’ll never detect me."

Efforts to address non-malware attacks included employee awareness training, turning to next-generation antivirus, a greater focus on patching, and locking down personal device usage where appropriate.