Unscrewing Security

Alec Muffett

Alec Muffett is a veteran security geek who believes strongly in common sense, full disclosure, defence in depth, privacy, integrity, simplicity and open source. He is an independent consultant, writer, and speaker specialising in security education.

Certificate Authorities and SSL: building on cracked foundations

A hierarchical model of trust requires trustworthiness. Oops.

SSL (more strictly, SSL/TLS) is the encryption layer commonly used in HTTPS, IMAP, Instant Messaging and other common Internet protocols, and is supposed to provide at least three benefits to developers and users.

In approximate order of importance, these benefits are:

  1. Server identity, providing some surety that you're talking to the correct server.
  2. Transport privacy, so that data moving over the network is safe from eavesdropping.
  3. Client identity, optionally providing the server with surety that you are a known client.

Perhaps it seems curious to place the benefits in this order - transport privacy is usually at the top of people's lists - but privacy is pretty irrelevant if you've been tricked into talking to the wrong person, which is why server identity is the more critical benefit. Conversely, client identity is usually achieved through authentication - usernames and passwords - rather than through SSL, so it is the least critical.
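
To make benefits 1 and 2 concrete, here is a minimal sketch - my own illustration using modern Python's ssl module, not anything from the original article - of a client connection in which the library both encrypts the transport and checks that the certificate presented really belongs to the hostname requested:

```python
import socket
import ssl

HOSTNAME = "blogs.computerworlduk.com"   # any HTTPS-capable site will do

# create_default_context() loads the platform's well-known root certificates,
# the ubiquitously-trusted "ancestors" described later in this article
context = ssl.create_default_context()
context.check_hostname = True            # benefit 1: server identity
context.verify_mode = ssl.CERT_REQUIRED

with socket.create_connection((HOSTNAME, 443)) as raw_sock:
    # the handshake raises an ssl error if the chain of trust or the
    # hostname check does not hold
    with context.wrap_socket(raw_sock, server_hostname=HOSTNAME) as tls_sock:
        print("protocol negotiated:", tls_sock.version())   # benefit 2: privacy
        print("server certificate subject:", tls_sock.getpeercert()["subject"])
```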

There are numerous usability and security issues with SSL, not least because it's not so much a single technology as a bag of cryptographic features, functions and ciphers deployed in a framework that is meant to automate the job of securing a given network connection. Some vulnerabilities are inherent to one version of SSL or another; others arise on websites where the user connects over insecure HTTP and only upgrades to SSL when logging in or entering credit-card details for a purchase, which lets a fraudster located close to the victim deploy an immensely plausible, apparently SSL-authenticated fake Amazon/EBay/whatever website.

The problem with benefits 1) and 3) above is that they require some concept of identity, and of the many ways that could be implemented the one upon which the web has mostly been built is the X.509 Digital Certificate.

Like a government-issued identity card, a certificate exists to prove that the server to which you are talking really is the genuine blogs.computerworlduk.com - but unlike an identity card, certificates operate on a chain of trust, where your certificate is endorsed by someone else whose own certificate is endorsed by a yet higher authority... yielding a family tree of trust relationships that leads back to, and is rooted amongst, a small group of well-known, ubiquitously-trusted ancestors.

So you visit a website and it gives you certificate A, endorsed by B; you can look up B's certificate and see that it's endorsed by C, with C's certificate endorsed by D... and eventually you get up to Z, a copy of whose certificate is actually embedded in your browser along with a message saying "You can trust anything that has been endorsed by Z".
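
To see that A-to-Z chain for yourself, the following sketch - assuming the third-party Python cryptography package (version 39 or newer) and a hypothetical chain.pem file holding the certificates a server presented - prints each "endorsed by" link in turn:

```python
from cryptography import x509   # third-party "cryptography" package

# chain.pem is a hypothetical file holding the PEM certificates a server sent,
# e.g. captured separately with `openssl s_client -showcerts`
with open("chain.pem", "rb") as f:
    certs = x509.load_pem_x509_certificates(f.read())

for cert in certs:
    print("certificate for:", cert.subject.rfc4514_string())
    print("    endorsed by:", cert.issuer.rfc4514_string())

# the final issuer should correspond to one of the well-known roots ("Z")
# embedded in your browser or operating system trust store
```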

And it all works fine... except when it goes wrong somewhere in the bureaucratic middle layers, where perhaps K has endorsed a certificate without checking its underlying details and claims quite thoroughly enough. 

And that's exactly what's happened:

On March 15th, an HTTPS/TLS Certificate Authority was tricked into issuing fraudulent certificates that posed a dire risk to Internet security. Based on currently available information, the incident got close to - but was not quite - an Internet-wide security meltdown. As this post will explain, these events show why we urgently need to start reinforcing the system that is currently used to authenticate and identify secure websites and email systems.

Some Iranian hackers managed to trick the Comodo Certificate Authority into issuing bogus certificates for the following domains:

  • www.google.com
  • mail.google.com
  • login.yahoo.com
  • addons.mozilla.org - the site that provides plugins for Firefox
  • login.live.com

Possessing such certificates, anyone can create websites which SSL will vouch for as trustworthy - yes, you really are talking to mail.google.com - so users would willingly surrender their usernames and passwords and could then (for example) be silently redirected to the real GMail, their credentials later being reused for criminal activity or government monitoring.
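
Note that such fraudulently issued certificates pass ordinary chain-of-trust validation, because a genuine authority endorsed them. One defence is certificate "pinning": comparing the certificate the server actually presents against a known-good fingerprint obtained out of band. The sketch below is illustrative only - the pin value is a placeholder, not any real fingerprint:

```python
import hashlib
import socket
import ssl

HOSTNAME = "mail.google.com"
# placeholder pin: in reality you would record the SHA-256 fingerprint of the
# genuine certificate ahead of time, via some out-of-band channel
EXPECTED_SHA256 = "00" * 32

context = ssl.create_default_context()
with socket.create_connection((HOSTNAME, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOSTNAME) as tls_sock:
        der_cert = tls_sock.getpeercert(binary_form=True)   # raw DER bytes

fingerprint = hashlib.sha256(der_cert).hexdigest()
if fingerprint != EXPECTED_SHA256:
    raise SystemExit("presented certificate does not match the pinned fingerprint")
print("pinned certificate verified:", fingerprint)
```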

But now that it has happened, the question is: what do we do about it?

If there are several bogus certificates J1, J2, J3, J4... which have all been endorsed by certificate K, then there are two obvious remedies:

  • issue multiple blacklist notices for J1, J2, J3, J4... etc, or...
  • issue a blacklist notice for K

This is the argument of the moment: blacklists (properly called Certificate Revocation Lists or CRLs, with an equivalent realtime lookup facility called OCSP) do exist, and are used to break the A-Z chain of trust that is described above.
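
For the curious, the pointers to that blacklist machinery live inside the certificates themselves, as X.509 extensions. The sketch below - again assuming the third-party Python cryptography package and a hypothetical cert.pem file - digs out a certificate's CRL distribution points and OCSP responder address:

```python
from cryptography import x509                       # third-party package
from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID

# cert.pem is a hypothetical file containing a single PEM certificate
with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

try:
    crl = cert.extensions.get_extension_for_oid(ExtensionOID.CRL_DISTRIBUTION_POINTS)
    for point in crl.value:
        if point.full_name:
            print("CRL distribution point:", [name.value for name in point.full_name])
except x509.ExtensionNotFound:
    print("no CRL distribution points listed")

try:
    aia = cert.extensions.get_extension_for_oid(ExtensionOID.AUTHORITY_INFORMATION_ACCESS)
    for desc in aia.value:
        if desc.access_method == AuthorityInformationAccessOID.OCSP:
            print("OCSP responder:", desc.access_location.value)
except x509.ExtensionNotFound:
    print("no authority information access extension")
```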

But the open questions include:

  1. Not all applications / tools check the CRLs. What do we do about them? Shorten certificate lifetimes?
  2. For applications which do check CRLs, what should they do when the CRL server is down? Warn the user? Abort? (See the sketch after this list.)
  3. Do you blacklist J1, J2, etc; or do you blacklist K for being untrustworthy and risk revoking valid certificates?
  4. Perhaps the whole model is too unwieldy for real trust?
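
The policy choice in question 2 can be spelled out in a few lines. The sketch below is purely illustrative: check_revocation is a hypothetical stand-in for a real CRL download or OCSP query, and here it always fails, to simulate the CA being down:

```python
from enum import Enum

class Policy(Enum):
    FAIL_OPEN = "warn and carry on"       # availability wins (what browsers mostly do)
    FAIL_CLOSED = "abort the connection"  # security wins

def check_revocation(serial: int) -> bool:
    """Hypothetical stand-in for a real CRL download or OCSP query."""
    raise TimeoutError("CRL/OCSP server unreachable")  # simulate the CA being down

def permit_connection(serial: int, policy: Policy) -> bool:
    try:
        return not check_revocation(serial)
    except TimeoutError:
        if policy is Policy.FAIL_OPEN:
            print("warning: revocation status unknown, continuing anyway")
            return True
        print("revocation status unknown, refusing to connect")
        return False

print(permit_connection(0x1234, Policy.FAIL_OPEN))    # True  - security silently degrades
print(permit_connection(0x1234, Policy.FAIL_CLOSED))  # False - availability suffers
```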

Jacob Appelbaum has posted an extended analysis at the Tor Project blog - a must-read item for anyone with a desire for complete understanding - and he also links heavily to Adam Langley's research into how browsers behave when they can't check CRLs.

To quote Langley:

But both methods rely on the CA being available to answer CRL or OCSP queries. If a CA went down then it could take out huge sections of the web. Because of this, clients (and I'm thinking mainly of browsers) have historically been forgiving of an unavailable CA. [...] Imagine if Verisign corrupted their revocation database and were down for six hours while they rebuilt it. [A] global outage of large parts of the HTTPS web like that would seriously damage the image of web security to the point where sites would think twice about using HTTPS at all.

He has a point. At the start of this posting I listed, in plausible order, three desirable benefits of using SSL. But all of them are subservient to the ability to communicate at all. To my mind this is a policy decision for each and every web user, and for each and every web server, to decide and enforce individually:

Is it more important to communicate at all, than to communicate securely?

Nobody can or should attempt to answer that question for everyone; of course there's always vanilla HTTP for "insecure" traffic, but since switching between HTTP and HTTPS has its problems, as outlined above, perhaps HTTPS should be the ubiquitous standard? Not to mention the other protocols...
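
If HTTPS were the ubiquitous standard, the plain-HTTP side of a site would shrink to little more than a redirect. The sketch below is one illustrative way to do that in Python's standard library; it is my own addition, not something from the original post:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectToHTTPS(BaseHTTPRequestHandler):
    """Answer every plain-HTTP request with a redirect to its HTTPS equivalent."""

    def do_GET(self):
        host = self.headers.get("Host", "example.org")   # hypothetical fallback host
        self.send_response(301)
        self.send_header("Location", "https://" + host + self.path)
        self.end_headers()
        # the HTTPS side of the site would typically also send a
        # Strict-Transport-Security header, telling browsers to skip
        # plain HTTP for this site entirely in future

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RedirectToHTTPS).serve_forever()
```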

Thus SSL, and its chains of certificate trust, need fixing.

PS: if you're a Firefox user and have reason to be paranoid, see Langley's posting regarding how to enforce certificate checking in your browser.

Follow me as @alecmuffett on Twitter and this blog via the RSS feed.

[1806h update, changed the last paragraph]
