Backdoors in computing equipment are the stuff of legend. A decade ago a security expert informed me, with absolute certainty, that a prominent non-US networking company had designed them into its products for years as a matter of course, as if nobody much cared about the fact. Long before the average citizen had heard the letters NSA, it struck me as an extraordinary suggestion. It was almost as if the deliberate compromise of an important piece of network equipment was a harmless novelty.
One assumed he was exaggerating, but by the time Edward Snowden became a household name such stories suddenly sounded plausible, even obvious. Of course company X had inserted backdoors into its kit – everyone had.
Today, only a handful of documented backdoors are genuinely designed rather than simply careless security flaws, but the fear of them seems to matter even more. With every passing year, more are being documented in a slow-motion scandal that looks as if it has a way to run. As paranoia runs free, the more backdoors there are, the more backdoors are needed to counter them. There have even been backdoors in the backdoors; indeed, planting one is probably the first response of some spooks to finding one where it is not wanted.
The 7 security backdoors that helped kill faith in security - But what is a backdoor?
For a start, a true backdoor is not something put there after the fact: it should be – deliberately or accidentally – a consequence of a system's design. That ostensibly rules out malware such as Trojans, which open backdoors by exploiting vulnerabilities, but includes serious design flaws that were discovered after a system shipped. Some will complain that is a grey area and they're right. The line between deliberate and convenient is not always clear. Any undocumented function that grants power over a system becomes a backdoor by definition.
Backdoors don’t even have to be a secret - witness recent US and UK Government demands that they be placed in systems to allow what is described as legitimate police and intelligence access to counter criminality. It’s also far from a new argument. In the early 1990s, the NSA proposed something called the Clipper Chip, a hardwired backdoor based on storing encryption keys in an escrow accessible to the Agency. The idea eventually went nowhere for the simple reason that almost nobody would want to use a system whose security depended on the honesty of the US Government or, for that matter, any government.
This is the informal first law of backdoors: once a backdoor becomes known, its power leaks away pretty rapidly. A backdoor only works as long as its existence is unknown or a matter of plausible deniability.
What can be inferred from the trickle of revelations such as Edward Snowden's is that noteworthy – BIG – backdoors are not only more common than was previously thought but might even be in almost everything. To assume otherwise requires quite a leap of faith. Many of these are not used to spy on the general public and are part of the geo-politics of spying between governments, but the implication is a world in which absolute security has become moot.
In the 1950s, Americans worried about a world in which J Edgar Hoover was on the wiretap. He merely kicked off a world that is still forming.
NSA Clipper Chip, 1993
The most reviled backdoor in history, the NSA's infamous Clipper chip, endorsed by the Clinton administration, still gets people's backs up more than two decades on from its heyday. In 1993, encryption was new and strange: few beyond the experts used it, but Government spooks could imagine a world in which many might. Their answer was to neuter the possibility of unbreakable security with an escrow-based system built around the Clipper chip, which would cache keys. Assuming anyone had agreed to use it, the NSA would have had a ready means to decrypt any content.
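The escrow mechanism can be sketched as a toy model. This is purely illustrative: the real chip used the classified Skipjack cipher and a Law Enforcement Access Field, whereas here a repeating-XOR cipher stands in for real symmetric encryption, and all the names are invented for the example.

```python
import os

# Toy model of Clipper-style key escrow (illustrative only; the real chip
# used the classified Skipjack cipher, not the XOR stand-in below).

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Stand-in cipher: XOR data with a repeating key (it is its own inverse)."""
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

ESCROW_KEY = os.urandom(16)   # held by the escrow agency, not the users

def make_escrow_field(session_key: bytes) -> bytes:
    # Every conversation ships a copy of its session key encrypted under
    # the escrow key, so the agency can always recover it on demand.
    return xor_cipher(ESCROW_KEY, session_key)

def agency_recover(field: bytes) -> bytes:
    # The agency decrypts the escrow field to obtain the session key.
    return xor_cipher(ESCROW_KEY, field)

session_key = os.urandom(16)
assert agency_recover(make_escrow_field(session_key)) == session_key
```

In code terms, Diffie's objection below is that `ESCROW_KEY` is a deliberate single point of failure: anyone who obtains it gains exactly the same power as the agency.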
As Whitfield Diffie, creator of the famous Diffie-Hellman key exchange protocol, observed at the time, the problem with building in backdoors is that they are deliberate weaknesses. Should a third party find them, they become less a backdoor than an open one.
Borland InterBase backdoor, 2001
This weakness in the firm's InterBase database was essentially a secret backdoor account that allowed anyone with knowledge of it access to data. Turning the serious comic, the username and password in question were 'politically' and 'correct'. At the time, the assessment was that, while deliberate, the hole was probably put there by one or a small number of programmers as a convenience. But we've included it because the fact that perhaps only one person knew about it doesn't mitigate its seriousness for the seven years until it was discovered.
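The shape of the flaw can be sketched in a few lines. Only the hard-coded pair 'politically'/'correct' comes from the published reports; the account table and function names below are hypothetical, for illustration.

```python
# Illustrative sketch of an InterBase-style flaw: an authentication routine
# with a second, undocumented credential pair baked into the code itself.
# The ACCOUNTS table is hypothetical; only the hard-coded pair
# ('politically' / 'correct') comes from the published reports.

ACCOUNTS = {"sysdba": "masterkey"}   # legitimate, administrator-managed accounts

def authenticate(user: str, password: str) -> bool:
    # Hidden branch: these credentials work on every installation,
    # cannot be changed, and appear in no documentation.
    if user == "politically" and password == "correct":
        return True
    return ACCOUNTS.get(user) == password

assert authenticate("politically", "correct")   # backdoor always succeeds
assert not authenticate("sysdba", "guess")      # normal accounts still checked
```

The danger is visible in the structure: the backdoor lives outside the administrator-managed account store, so no amount of password rotation or auditing of `ACCOUNTS` would ever reveal or close it.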
Huawei v the US, 2011
The huge Chinese equipment maker spent millions trying to reform its image after being accused of building backdoors into its telecoms equipment. In 2012 a US Congressional investigation concluded that the firm (and mobile vendor ZTE) should be banned from the world’s largest market over state surveillance worries. In the UK BT had been installing Huawei equipment since 2007 so it was all too late to do much about it beyond GCHQ setting up a special unit to monitor its systems in cooperation with the company itself.
Irony of all ironies, a Snowden leak then suggested that the NSA's Tailored Access Operations (TAO) unit had set up an operation to spy on Huawei to work out how far any collusion went.
The modern (i.e. post-Aurora and Stuxnet) era of backdoor scandal began here.
Cisco et al, 2013
Dragged out of Snowden's famous cache by a German newspaper, this concerned unpublished security flaws in the networking equipment of a group of vendors headed by Cisco but including Juniper and Samsung among others. These weren't classic backdoors except in the sense that they allegedly offered a huge amount of surveillance control over the equipment. Very unusually, Cisco's CSO John Stewart issued a statement denying any knowledge of the compromise.
“As we have stated prior, and communicated to Der Spiegel, we do not work with any government to weaken our products for exploitation, nor to implement any so-called security ‘back doors’ in our products,” he stated. The fact he was even having to say this was a sign of changed times.
More recently, in 2015, a backdoor compromise called SYNful Knock was discovered on Cisco equipment. Described by security firm FireEye as a Cisco router implant, it made clear that the simple idea of intelligence engineers building in massive holes from day one of a product's life was probably out of date. Why build them in when juicy ones could be found later on?
Juniper ScreenOS, 2015
Discovered just before Christmas 2015, this looked like a biggie in Juniper's NetScreen ScreenOS from the off. The company finally admitted to suspicious researchers that the Dual_EC_DRBG random number generator contained a backdoor that would allow anyone with knowledge of it to eavesdrop on secure VPN connections. This flaw might or might not have been deliberately put there by the NSA, which was the source of the RNG, but it was exploited at some point, possibly by a third-party government. A backdoor in a backdoor or just weak coding?
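Why a rigged random number generator is such a potent backdoor can be shown with a toy analogue. This is not the real Dual_EC_DRBG construction, which relies on elliptic-curve points; here a secret XOR mask plays the role of the trapdoor, but the consequence it illustrates is the same: each output quietly leaks the generator's internal state to whoever holds the secret, so one observed value predicts all future "randomness".

```python
import hashlib
import os

# Toy analogue of a trapdoored RNG (hypothetical sketch; the real
# Dual_EC_DRBG backdoor involves elliptic-curve points P = d*Q, not
# the XOR mask used here).

TRAPDOOR = hashlib.sha256(b"designer-secret").digest()[:16]

def step(state: bytes) -> bytes:
    """Advance the internal state (a simple hash chain)."""
    return hashlib.sha256(state).digest()[:16]

class BackdooredRNG:
    def __init__(self, seed: bytes):
        self.state = seed

    def next_bytes(self) -> bytes:
        self.state = step(self.state)
        # Output is the fresh state masked with the trapdoor value:
        # it looks random to anyone without TRAPDOOR.
        return bytes(a ^ b for a, b in zip(self.state, TRAPDOOR))

rng = BackdooredRNG(os.urandom(16))   # victim's generator
leaked = rng.next_bytes()             # one output an eavesdropper observes

# The attacker unmasks the internal state and clones the generator:
stolen_state = bytes(a ^ b for a, b in zip(leaked, TRAPDOOR))
clone = BackdooredRNG(stolen_state)
clone_out, victim_out = clone.next_bytes(), rng.next_bytes()
assert clone_out == victim_out        # all future output is now predictable
```

If the predicted output feeds VPN key generation, the eavesdropper can derive the same session keys as the endpoints, which is exactly the eavesdropping scenario Juniper described.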
Fortinet FortiOS, 2016
Hard-coded passwords are an absolute no-go for any system these days, so it was disconcerting to discover that Fortinet appeared to have one in an SSH interface accessing its FortiOS firewall platform. Researchers looked on this as a backdoor, although Fortinet strenuously denied that interpretation. In fairness, the company was probably correct, although the lack of transparency still bothers some.
CESG's MIKEY-SAKKE, 2016
Was this protocol, promoted by the UK's CESG for end-to-end encryption in VoIP phone calls, a real backdoor or simply part of the spec? According to Dr Steven Murdoch of University College London, the escrow architecture used with MIKEY-SAKKE simply has not been fully explained. Was this a way to spy on conversations without anyone knowing? According to GCHQ, quite the opposite: as an enterprise product, escrow was perfectly appropriate, and organisations deploying this technology needed a system of oversight.
In fairness to MIKEY-SAKKE, setting up end-to-end encryption without some form of backdoor is now unthinkable for large enterprises that need control over their encryption infrastructure. Fears that this compromises the system in a wider sense seem overblown, assuming the architecture has been correctly documented.