After the breach - how secure is RSA's SecurID?

The recent breach announced by RSA affecting their SecurID tokens raises stark questions about this authentication system. We have not been told many details so far, but let's look at what could be affected.

Each type of RSA SecurID hardware token is identical in manufacture, apart from the unique printed serial number. It is then initialised with a secret ‘seed’ value, and a cryptographically protected copy of that seed value is sent to the token purchaser to install into their authentication server. An algorithm (based on AES in new devices) uses that seed value combined with the internal clock to generate the numbers displayed. Normally customers buy a large batch of tokens at one time, and receive a file containing that batch of seed values.
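As an illustration only (RSA's proprietary algorithm is not public, so this is just a sketch of the principle), a time-based code generator can be modelled as: use the seed as an AES key, encrypt the current time interval, and take a few decimal digits from the result. The parameter choices and the third-party `cryptography` package here are my own assumptions.

```python
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def token_code(seed: bytes, interval: int = 60, digits: int = 6) -> str:
    """Illustrative time-based one-time code: AES-encrypt the current time
    step under the shared seed and truncate to a few decimal digits.
    NOT the actual SecurID algorithm - only a sketch of the idea."""
    step = int(time.time()) // interval          # current time window
    block = step.to_bytes(16, "big")             # pad the counter to one AES block
    encryptor = Cipher(algorithms.AES(seed), modes.ECB()).encryptor()
    ciphertext = encryptor.update(block) + encryptor.finalize()
    value = int.from_bytes(ciphertext[:4], "big")  # take 32 bits of the output
    return str(value % 10**digits).zfill(digits)

# The token and the authentication server hold the same seed, so both
# compute the same code for the same time window.
seed = bytes(range(16))                          # example 128-bit seed (illustrative only)
print(token_code(seed))
```

The key point is that anyone who obtains a copy of the seed file can compute exactly the same codes as the token itself.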

Software tokens are similar, except that one copy of the secret seed is installed into the software token, and another into the authentication server. The main difference from the hardware tokens is that the cryptography makes it very difficult for the customer to generate their own seeds, protecting RSA's revenue.

In both cases, once installed on the authentication server, most of the cryptographic protection of the seed values could be removed by anyone with sufficient time and effort to reverse engineer the code, and in fact the previous secret 64-bit algorithm was revealed about 10 years ago through such reverse engineering.

This means that, in principle, RSA has a copy of all the seed values linked to the token serial numbers. That means they could choose to offer a service to re-send seed values. It also means that anyone in the supply chain potentially has a chance to take a copy of those seed values. Naturally RSA has implemented a very secure handling process to protect those secrets, from the generation of the secret seed values to their transmission and storage. Presumably some point in this process has been compromised - maybe the seed values themselves, or maybe some other information.

How bad is this?

My blog on visibility describes a measure of how many people can see a piece of information.
In a perfect shared-secret system such as SecurID, the Visibility index would be 0.3: only two entities can see the secret seed information, namely the hardware token and the authentication server. If we allow seed recovery from the vendor, that corresponds to V = 0.5 (three entities) in the best case. Adding four system administrators and two backup systems may raise potential visibility to V = 0.95. Similarly, at RSA's end, at least as many people and systems might be involved in the production and transmission of seeds, say a potential visibility of V = 1.2.
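The figures above are consistent with taking the visibility index to be the base-10 logarithm of the number of entities that can see the secret; I am assuming that definition here purely to check the arithmetic:

```python
import math

def visibility(entities: int) -> float:
    """Assumed definition: V = log10(number of entities able to see the secret)."""
    return math.log10(entities)

print(round(visibility(2), 2))   # token + authentication server        -> 0.3
print(round(visibility(3), 2))   # plus vendor seed recovery            -> ~0.5
print(round(visibility(9), 2))   # plus 4 admins and 2 backup systems   -> 0.95
print(round(visibility(16), 2))  # e.g. ~16 entities at RSA's end       -> 1.2
```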

That was before the breach. If the seed values themselves have been compromised, potential visibility could easily go up to 3 to 5, or greater if they are published more widely. If, however, the cryptographic protection or details of the algorithms used had been compromised, then visibility is not directly increased unless there is a leak of further information - and the fix for that is to tighten up customer security practices.

My inbox has been filling up with unsolicited sales information from RSA's competitors - and I suspect some of the systems suggested are still weaker than a fully compromised SecurID. If you are thinking of switching suppliers, you really need some objective way of comparing the security provided. I'd suggest two metrics - the visibility of the secrets in the whole process, as described above, and the ease of duplicating the secrets.

Passwords can be quite easily duplicated by shoulder-surfing or by keyloggers. SecurID-style hardware tokens cannot be duplicated so easily, though software tokens ought to be easier to copy. Secure hardware such as TPMs or some cryptographic smartcards can be very difficult to duplicate cheaply, especially if the secrets used are self-generated by a good internal random number generator. And in the latter case, visibility may be improved by use of asymmetric (public key) crypto, as then there's no need to share the secret itself, just the public key - so V = 0; hard to beat that!
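To illustrate that last point, here is a rough sketch of challenge-response authentication with an asymmetric key pair (using Ed25519 from the `cryptography` package as one possible choice, not any particular vendor's scheme): the private key is generated on the device and never leaves it, so only one entity ever sees the secret.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# On the device: generate the key pair locally; the private key never leaves.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()   # only this is registered with the server

# Server side: issue a random challenge for the device to sign.
challenge = os.urandom(32)

# Device side: prove possession of the private key without revealing it.
signature = private_key.sign(challenge)

# Server side: verify using only the public key.
try:
    public_key.verify(signature, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")
```

Since only the public key is ever shared, there is no seed file to leak in the first place.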

Andrew Yeomans, Jericho Forum board member