How a Screwdriver teaches us something fundamental about Security

If you were paying attention last week you should have been reading "Reducing Systemic Cybersecurity Risk" by Ian Brown (not him) at OII and Peter Sommer at LSE. This 1.5MB, 136-page epic PDF got splashed somewhat, mostly for its defanging of the military cyber mythos. The paper benefits from careful reading - several sections feature sidebars and final paragraphs which feel as if they were bolted on in the editing phase; they hang slightly disconnected, making points which I wish had been made more fully, higher up, in greater depth and in a couple of cases in bold text.

The paragraphs on Problems of Definition and Problems of Estimating Loss are must-reads for everyone connected to security; however, from my perspective the paper was not last week's most educational security story. Instead, I offer:

  • Step 1: Apple adopt obscure new pentalobe screw
  • Step 2: Price of pentalobe screwdrivers drops like a stone, free workarounds are published
  • Step 3: Loud guffaws

Lacking any mechanical benefit, surely the only reason for adopting pentalobe could have been sheer bloody-mindedness on the part of Apple? There is surely no point in such a manoeuvre - especially today, with e-commerce and the communication-flattening of the Web, anyone who wants to get into their iPhone or MacBook will do so. There will be no stopping them, because they won't be doing this casually, and the global marketplace destroys the necessary unavailability of case-cracking tools - and with it, the point of this nonstandardness.

Presumably someone at Apple was hoping that the obscurity of the pentalobe driver would add to the service lock-in and physical integrity of the platform; this would be bad security thinking. The philosophy is called Security Through Obscurity (STO). For years I've told people that STO does not work, but this is wrong - it does work, but only weakly, and the mindset behind it is really dangerous in the wrong context.

The way that any defence mechanism works is by imposing extra cost upon an attacker - extra time, money, or effort is needed for the attacker to achieve his goal. Means to achieve this extra cost lie upon a spectrum of possibility, from the stereotypically easy ("guess that the house keys are under the doormat") to the near-impossible ("guess the 256-bit AES key that was used to encrypt this particular document and no other").

The Wikipedia entry has a long philosophical diatribe about what constitutes STO, but in some sense it's actually all the same: in the former case the means of access (the door key) is accessible to anyone who can find or stumble across it; ditto the latter (the crypto key). There's a needle-in-a-haystack quality to both problems, the only difference being that we can be absolutely certain that a crypto key exists amongst a field of around 116 thousand million million million million million million million million million million million million potentials, whereas the door key may be nowhere at all.
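For the pedantic: that unwieldy figure is simply 2^256, the size of the AES-256 keyspace, and a couple of lines of Python (my addition, purely to check the spelling-out) confirm the arithmetic:

```python
# Sanity check: the AES-256 keyspace holds 2**256 possible keys.
keyspace = 2 ** 256
print(keyspace)           # 11579208923731619542...129639936 (78 digits)
print(f"{keyspace:.3e}")  # 1.158e+77 - i.e. "116 thousand" followed by twelve "million"s
```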

Uncertainty as to whether a key exists at all is possible with STO, but then uncertainty is the only cost-elevating defence that STO provides; whereas (e.g.) cryptography vastly, enormously elevates the computational cost required to achieve a most definite and certain "break".

Having established that it does work, if only weakly, you can see why Security Through Obscurity is so appealing in its various guises:

  • ...nobody will know that the service runs on this port number...
  • ...you can't reverse engineer this cookie, it has a secret checksum...
  • ...the database password is embedded in the PHP script...
  • ...it's closed source code, nobody can find weaknesses in it.

...because it's cheap! If you've created something that you feel is enough of a nightmare to unravel, your hubris will lead you to feel that nobody else will be as clever as you are, and that thereby you will be safer.
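Consider how cheaply the first of those excuses falls: a trivial TCP connect scan will enumerate every listening service, obscure port number or not. A minimal sketch in Python - the target host and port range are illustrative, my addition rather than anything from the story above:

```python
import socket

def find_open_ports(host, ports, timeout=0.2):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # A hypothetical sweep of the privileged ports on the local machine;
    # extending it to all 65,535 ports merely takes a little longer.
    print(find_open_ports("127.0.0.1", range(1, 1025)))
```

Single-threaded this dawdles; with a thread pool it finishes in minutes. Either way, the "obscurity" of the port number imposes a cost on the attacker of approximately nothing.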

But when you really care about it, from which end of the spectrum would you prefer to take your security technologies: supposed unobviousness, or an infeasible challenge to the attacker? And what happens when this is magnified by "need to know" thinking, such as "we've hardcoded an administration password into the network switches we sell, but only the tier-3 support staff will be told it"?
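That hardcoded password enjoys only doormat-grade protection: anyone holding the firmware image can sweep it for runs of printable text, in the manner of Unix strings(1). Another minimal sketch, assuming a purely hypothetical firmware file:

```python
import re

def printable_strings(path, min_len=6):
    """Yield runs of printable ASCII at least `min_len` bytes long from a file."""
    with open(path, "rb") as f:
        data = f.read()
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

for s in printable_strings("switch-firmware.bin"):   # hypothetical image
    if "pass" in s.lower() or "admin" in s.lower():  # crude keyword filter
        print(s)
```

Tier-3 support may be told the password, but so, in effect, is everyone with a copy of the firmware.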

That's when things get really worrying. Maybe the next post should be on the myth of the Trusted Computing Base?

Follow me as @alecmuffett on Twitter and this blog via the RSS feed.
