When the attack dust settles - impact analysis



Look, we all know the bad guys are out there: maybe it’s a cyber mafia, a nation state with a cyber army, or just some disillusioned ideologue with mad hacking skillz.

Sooner or later we’re all going to be attacked, and many of us will suffer a compromise. No security protection method is perfect, and sometimes the black hats win. What has me scratching my head is that, in the wake of all the publicly announced compromises, there is never a definitive impact statement.

It’s great that there’s legislation mandating that organisations inform consumers about compromises, but disclosure alone isn’t enough for me to make a decision in the wake of an incident.

What should I do when Google or RSA issues a release saying it detected an APT? Is it time to stop using their products, or should I go forth blindly and trust that the integrity of their offerings is still intact and I’m not at risk? I understand that most companies are reticent (euphemism) to reveal the extent of a successful exploit, but there are cases where withholding information, if it’s available, causes grave harm to their customers.

Their consumers, including other companies, rely on the technology and security controls they provide. In the case of Google, customer data may be directly exposed to the attacker, which calls into question our trust in cloud services.

In the case of technology providers, the risk is transitive: customers install the technology and trust its integrity to run their systems, safeguard their transactions, and even provide the infrastructure and security of their enterprises.

As I’ve speculated before, who were the other 20+ companies that were compromised in Operation Aurora in addition to Google and Adobe? And should we continue to trust their software to run our corporate operations, and even our nation? What if one of those companies was Microsoft? Given the pedigree of the other targets, it’s not far-fetched.

Imagine that the attacker wasn’t there to steal data but to plant a piece of code to monitor all systems that run Windows and report back to an enemy state. Add bot ability, controlled by that same enemy, and you have the one situation no one wants to believe can happen: a pervasive capability to launch a concerted attack on multiple targets.

We know the energy grid isn’t a monolithic entity that’s easily attacked at one or a few points, nor is it easily susceptible to a cascading failure triggered by attacks on certain key points; but imagine a coordinated attack on virtually all utilities that use Windows to run their SCADA systems.

The point is we, the consumers, have a right to know the impact of a compromise. And as companies, we have the responsibility to implement technology and processes to assess that impact if we’re compromised. Security intelligence provides the early detection to take the P out of APTs, and provides the security visibility that’s critical to assess the scope of an attack and compromise.

Centralised logging is mandated by practically every security regulation, framework, and contractual obligation such as PCI, so why isn’t it ubiquitous, or well enough implemented, to provide total visibility post-exploit? There are a few reasons:

1. Logging is not enough. The first thing attackers do once they compromise an asset is disable logging, the cyber equivalent of spray painting the lens of the security camera black. That’s why real-time network activity monitoring is critical: attackers need connectivity to conduct their operations, whether the exploit is automated or controlled by an organic master. Gathering network activity provides total visibility: nowhere to hide. And it’s virtually free: any enterprise network infrastructure has at least NetFlow capability (a minimal collector sketch appears after this list). (And note EMC’s announcement today that it’s acquiring NetWitness to begin to address this, years behind Q1 Labs. Sorry, had to say it.)

2. The tool is inadequate. Even if the data is all there, if the log management application or SIEM doesn’t provide easy pivoting of the data, getting a good sense of what occurred can be as frustrating as digging through a dumpster (a small pivoting example also follows this list). Or perhaps the tool forces selective log collection as a limitation of its collection speed or storage capacity. Not having access to application data from network monitoring can also hamper forensic investigation.

3. The solution isn’t fully deployed. First-generation SIEM has earned a reputation for being difficult to implement and tune, and has suffered the fate of perpetual deployment. Sometimes it’s because the reason for buying SIEM or log management was only tactical, and the deployment ends at centralised collection to satisfy compliance mandates. Without a fully planned, deployed, and tested solution, a SIEM may be like a smoke detector with dead batteries.
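To make the first point concrete, here’s a minimal sketch (in Python) of the kind of network activity collection I mean. It assumes a device exporting NetFlow v5 to UDP port 2055; the port choice and the way the records are printed are purely illustrative, not a product recommendation. A NetFlow v5 packet is a 24-byte header followed by 48-byte flow records.

import socket
import struct
from ipaddress import IPv4Address

# NetFlow v5 layout: a 24-byte header, then `count` records of 48 bytes each.
HEADER_FMT = "!HHIIIIBBH"              # version, count, uptime, secs, nsecs, sequence, engine type/id, sampling
RECORD_FMT = "!IIIHHIIIIHHBBBBHHBBH"   # src/dst/nexthop, ifaces, pkts, octets, first/last, ports, flags, proto, ...
HEADER_LEN = struct.calcsize(HEADER_FMT)   # 24
RECORD_LEN = struct.calcsize(RECORD_FMT)   # 48

def handle_packet(data):
    version, count = struct.unpack("!HH", data[:4])
    if version != 5 or len(data) < HEADER_LEN + count * RECORD_LEN:
        return  # this sketch only understands well-formed NetFlow v5
    for i in range(count):
        off = HEADER_LEN + i * RECORD_LEN
        rec = struct.unpack(RECORD_FMT, data[off:off + RECORD_LEN])
        src, dst = IPv4Address(rec[0]), IPv4Address(rec[1])
        pkts, octets, sport, dport, proto = rec[5], rec[6], rec[9], rec[10], rec[13]
        print(f"{src}:{sport} -> {dst}:{dport} proto={proto} pkts={pkts} bytes={octets}")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2055))   # point your exporter here; 2055 is just a common choice
while True:
    data, _addr = sock.recvfrom(65535)
    if len(data) >= HEADER_LEN:
        handle_packet(data)

Even flow summaries this simple, shipped off-box in real time, survive an attacker disabling local logging, which is the whole point of the argument above.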
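And to illustrate the second point, "pivoting" just means re-aggregating the same events along whichever dimension the investigation needs next. The records below are made up and the layout is hypothetical; a real SIEM would run these queries against its own store, ideally in seconds.

from collections import Counter

# Hypothetical, already-parsed flow records (e.g. the output of the collector above).
flows = [
    {"src": "10.1.4.22", "dst": "203.0.113.9",  "dport": 443,  "bytes": 58120},
    {"src": "10.1.4.22", "dst": "198.51.100.7", "dport": 6667, "bytes": 2400},
    {"src": "10.2.8.15", "dst": "198.51.100.7", "dport": 6667, "bytes": 3900},
    {"src": "10.1.4.22", "dst": "203.0.113.9",  "dport": 443,  "bytes": 10040},
]

# Pivot 1: bytes by destination port - where is the data actually going?
by_port = Counter()
for f in flows:
    by_port[f["dport"]] += f["bytes"]
print("bytes by dport:", by_port.most_common())

# Pivot 2: drill into one suspicious port and list the internal hosts involved.
suspect_port = 6667
print("hosts talking on port", suspect_port, ":",
      sorted({f["src"] for f in flows if f["dport"] == suspect_port}))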

Doesn’t it make more sense for companies that have suffered a compromise to come out and reveal how they detected the attack, boast about how prepared they were to track down the malefactor’s activities, and reassure us that we as consumers are still safe?

So when they’re circumspect about the impact, I have to assume that the best-case scenario is that they have inadequate security visibility, meaning zero security intelligence. I’ll leave it to you to figure out what the worst-case alternative is.

Post by Chris Poulin, CTO, Q1 Labs
