Words of wisdom: connect to public wireless access points at your own risk, change your passwords every couple of months, and avoid downloading software of unknown origin.
These are reasonable rules of thumb for online safety. Of course, most people would snicker at such tedious tips. Still, when was the last time you changed your Twitter password?
On the other hand, when "connected to work" we are held to a pretty high standard. Our computer activities are watched by our employers' device and network firewalls. Business-owned applications are guarded by strong passwords, and security monitoring is reinforced by a bevy of analytic algorithms that try to spot negative trends in user behaviour against a baseline "normal".
Everything seems to be under control in a Fortune 1000 organisation. Right?
So why is malicious software still seeping into corporate networks? Why is there a mad dash by organisations to prevent "data loss"? And why are mobile devices the new hotbed of cyber insurgency?
Part of the answer lies in so-called Advanced Persistent Threats, or APTs for short. Here the source (the 'bad guy') is geographically dispersed. The actors: state and non-state. The attacks: professional and deliberate. The result is a far superior adversary who makes it hard for us (the 'good guys') to predict the planning, execution and escape of an attack, as well as the consequences when attacks go unmitigated.
Richard Bejtlich, Director of Incident Response at General Electric, neatly classifies an APT by the perpetrator's skills: "They do quality control on their code and have deep teams rich in expertise who are patient and determined to exfiltrate intellectual property and trade secrets.
Despite this level of organisation, their means of initial compromise are sometimes less sophisticated." Picture this: a memory stick is left in the lobby of an office building. A naive employee finds the device and considers himself lucky to be getting something for free. The memory stick is plugged into a laptop and unleashes its exploit code, sending valuable and sensitive documents to a command-and-control centre. The US DoD only recently lifted its ban on USB devices, imposed after a serious breach back in November 2008.
No matter the security controls in place, it's a sure thing that an APT will find a way in. Want to read about a gnarly and subversive APT? Look no further than the latest WikiLeaks reports about a vendor called HBGary and its plans for spy software that cannot be identified because it has no file name, no process and no computer-readable structure that can be detected by scanning alone. It even has a made-for-Hollywood name: the 12 Monkeys rootkit.
Not all is lost, however. If we look at history we can find examples where, regardless of the passage of time or the introduction of new types of threats, a security paradigm was found that (more or less) keeps the peace. Take cell phones. With over 4 billion mobile users worldwide, cell phone hijacking is pretty much a thing of the past -- at least in the GSM industry, which developed an international standard to positively identify each handset: the SIM card. Cable boxes are another example of a new attack surface that was tamed: today, each box has a unique serial number that shuts out pirate services.
In both these examples, the security paradigm incorporated "base protection". The idea is to create a highly defended perimeter around sensitive hardware and "must work" software functions.
The trusted base is designed to reduce the attack surface to the point where you can literally measure the confidence you have that certain functions will work as advertised. A primary trait of an APT is to piggy-back on vulnerabilities that no one else knows about. But even if a 0-day exploit were lying around (one that attacks vulnerabilities unknown to everyone, including the software's own developer), the attacker could not take advantage of it, because any local changes to the trusted base are forbidden.
Organisations can then decide to allow only "known" computers and software to connect to a sensitive network. There would be a means of ascertaining that nothing in the trusted base has been tampered with, be it the chipsets, network cards, operating system or applications.
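To make the idea of "ascertaining that nothing has been tampered with" concrete, here is a minimal sketch of the hash-chained measurement technique used by trusted platform hardware: each boot component is folded into a running digest, so a verifier who knows the expected components can recompute the final value and spot any change. The function names and component labels are illustrative, not any vendor's API.

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """TPM-style 'extend': new value = SHA-256(old value || SHA-256(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def measure_boot_chain(components: list[bytes]) -> bytes:
    """Fold each boot component into a single tamper-evident measurement."""
    pcr = b"\x00" * 32  # measurement registers start zeroed at reset
    for component in components:
        pcr = extend(pcr, component)
    return pcr

# A verifier that knows the expected firmware, bootloader and kernel images
# recomputes the "golden" measurement; any modified component changes it.
golden = measure_boot_chain([b"firmware-v1", b"bootloader-v1", b"kernel-v1"])
tampered = measure_boot_chain([b"firmware-v1", b"bootloader-EVIL", b"kernel-v1"])
assert golden != tampered
```

Because the digest depends on every component and on their order, an attacker cannot swap in a modified bootloader without the final measurement, and therefore the attestation, changing.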
Microsoft is pushing for a broader programme it nicknames a "global Internet health model", which includes a provision that any device known to be a danger is quarantined. A clean computer would live up to that status by being 'unchangeable' by default. Any trace of exploit software trying to implant itself on the target computer would not be tolerated -- not unlike an immune system, the trusted base would react violently.
If you could ensure the integrity of the foundation, an organisation could put in place credible process-isolation strategies for sensitive applications. Think of a house where you know "for sure" a thief cannot tunnel underneath.
Security administrators can deploy compartmentalisation policies on an activity-by-activity basis. For example, web browsing would be compartmentalised from all business work, and any software that is downloaded would be trapped in a quarantined area, or sandbox. Any attempt to move software that has not been a priori classified as "white-listed" would be filtered by the trusted computing base. Users can, of course, still be conned by fraudulent web sites that remotely capture credentials. So while helpful, let's just say trusted computing is not a panacea.
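The white-list filter described above can be sketched in a few lines: a downloaded artefact is released from the sandbox only if its cryptographic hash matches one the security team has approved beforehand. This is a simplified illustration, not a production control; the vetted artefact and function names are hypothetical, and in practice the digests would be distributed by the security team rather than computed on the endpoint.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Content-addressed identity for an artefact: its SHA-256 digest."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical white-list of artefacts vetted a priori by the security team.
WHITELIST = {sha256_hex(b"approved-tool-v2.1")}

def release_from_sandbox(payload: bytes) -> bool:
    """Let a downloaded artefact out of quarantine only if it is white-listed."""
    return sha256_hex(payload) in WHITELIST

assert release_from_sandbox(b"approved-tool-v2.1")       # known-good: allowed
assert not release_from_sandbox(b"approved-tool-EVIL")   # anything else: blocked
```

Hashing the content rather than trusting the file name is the point: a trojaned binary keeping the original name still produces a different digest and stays in quarantine.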
The principles behind trusted computing have been talked about for decades and encompass a variety of techniques: hardware security modules, integrity measurement, software white-listing and code obfuscation.
The trusted computing base (TCB) of a computer system is central to this: it comprises all the components that are critical to the system's security -- the hardware, firmware and software. Bugs inside the TCB can jeopardise the security properties of the entire system. Incidentally, the smaller the TCB the better, although there are ways to stretch the TCB far and wide to cover an entire system.
The techniques of trusted computing design are not yet mainstream in computer and security architecture, and it is going to be up to computer and security architects to incorporate key elements of trusted computing into next-generation product and system designs. The challenge will be to arrive at designs that cost-effectively "lower the value to the would-be evil-doer" and "raise the cost to the would-be evil-doer".
Walid Negm, Director Cloud and Cyber Security Offerings, Accenture.