Revelations from documents obtained by whistleblower Edward Snowden that GCHQ essentially downloads the entire Internet as it enters and leaves the UK, and stores big chunks of it, were bad enough. But last week we learned that the NSA has intentionally weakened just about every aspect of online encryption:
The agencies, the documents reveal, have adopted a battery of methods in their systematic and ongoing assault on what they see as one of the biggest threats to their ability to access huge swathes of internet traffic - "the use of ubiquitous encryption across the internet".
Those methods include covert measures to ensure NSA control over setting of international encryption standards, the use of supercomputers to break encryption with "brute force", and – the most closely guarded secret of all – collaboration with technology companies and internet service providers themselves.
It's that last point that I want to focus on here – the fact that computer companies are complicit in undermining the security we thought we were using to protect our privacy. I've already written about the way that Microsoft has been doing this through providing zero-day exploits to the NSA for it to use to break into corporate and government systems. Those are probably only short-term opportunities, since Microsoft does then go on to fix the bugs.
What we are dealing with now is much more serious. Providing zero-day vulnerabilities might be called sins of omission – failing to warn users that their systems are vulnerable. The latest information shows that companies are also committing sins of commission: allowing flaws to be built into the purportedly secure products they sell. Here's what the Guardian article from last week goes on to say on this:
Among other things, the program is designed to "insert vulnerabilities into commercial encryption systems". These would be known to the NSA, but to no one else, including ordinary customers, who are tellingly referred to in the document as "adversaries".
"These design changes make the systems in question exploitable through Sigint collection with foreknowledge of the modification. To the consumer and other adversaries, however, the systems' security remains intact."
As the following paragraph makes clear, this close relationship is very much the jewel in the NSA's snooping crown:
A more general NSA classification guide reveals more detail on the agency's deep partnerships with industry, and its ability to modify products. It cautions analysts that two facts must remain top secret: that NSA makes modifications to commercial encryption software and devices "to make them exploitable", and that NSA "obtains cryptographic details of commercial cryptographic information security systems through industry relationships".
That is, the assumption must now be that every US security product has been undermined in this way (and probably many from other countries too, since the US is unlikely to be the only nation to engage in this practice). That reinforces what I've written before: companies are really being negligent if they depend on commercial software, since the scope for industrial espionage is huge. Indeed, in the last couple of days we have discovered that the NSA itself is engaged in such espionage. What's troubling is that by placing backdoors in key security systems the NSA has also greatly increased the likelihood that other actors – both national and criminal – are exploiting them to access confidential corporate information.
So, the obvious question then becomes: what about open source? Does it suffer from the same problems as closed-source software, or does its open collaborative development process prevent such backdoors from being hidden? It's obviously far too early to answer that question definitively, but here are a couple of cases that give us a sense of the issues involved.
First, a post from Theodore Ts'o, one of the most senior Linux hackers, who is still making good decisions like this:
I am so glad I resisted pressure from Intel engineers to let /dev/random rely only on the RDRAND instruction. To quote from the article below [one of the recent reports on NSA's weakening of Internet security at www.nytimes.com/2013/09/06/us/nsa-foils-much-internet-encryption.html]:
"By this year, the Sigint Enabling Project had found ways inside some of the encryption chips that scramble information for businesses and governments, either by working with chipmakers to insert back doors...."
Relying solely on the hardware random number generator which is using an implementation sealed inside a chip which is impossible to audit is a BAD idea.
There then follows a fascinating and illuminating discussion thread about the dangers of closed source, hardware implementations, backdoors and the issues raised for open source – I recommend reading at least some of it to get an idea of the issues here.
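The principle Ts'o is defending can be sketched in a few lines. Rather than handing hardware RNG output straight to applications, the kernel mixes it into an entropy pool alongside other sources, so that even a fully backdoored chip cannot dictate the final output. The following is a minimal illustration of that mixing idea in Python (the function name and the stand-in values are mine, not the kernel's actual implementation, which does something considerably more sophisticated):

```python
import hashlib
import os

def mix_entropy(hw_bytes: bytes, pool_bytes: bytes) -> bytes:
    """Combine hardware RNG output with independent pool entropy.

    Because SHA-256 mixes both inputs, the result is no weaker than
    pool_bytes alone - even if hw_bytes is maliciously predictable.
    """
    return hashlib.sha256(hw_bytes + pool_bytes).digest()

# Worst case: a hardware source (standing in for RDRAND) that is
# completely compromised and returns all zeros.
hw = bytes(32)
# Independent entropy gathered by the operating system.
pool = os.urandom(32)

key = mix_entropy(hw, pool)  # still unpredictable to the chip's designer
```

The design point is the one Ts'o makes: trusting the sealed chip *exclusively* is the mistake; using it as one input among several costs nothing and removes it as a single point of failure.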
On the other hand, here's an example where the open source process doesn't seem to have helped much. The story starts with Red Hat's implementation of elliptic curve cryptography (ECC) in the widely-used OpenSSL package. Back in 2007, a Bugzilla ticket was opened about certain features that had been disabled because of possible patent issues. That's bad enough, but as Dietrich Schmitz discovered as he explored the Red Hat story, it turns out that it's not just Red Hat's implementation that has problems. A post by the Google engineer Mike Hearn explains:
today I learned (via Gregory Maxwell) that the process for selecting the "random" curve parameters [for ECC] appears on the surface to be completely implausible. The parameters are the output of SHA1, which should be good if the seed was selected in a reproducible manner. But they were not. The seeds are extremely large constants with no explanations of where they came from. That smells very strongly of something that might be hacked.
It gets better. It turns out that these constants are not only unexplainable but were actually generated by an employee of the NSA. And it turns out that the IEEE working group that worked on standards for ECC was actually holding its meetings on the NSA campus and its membership therefore had to be approved by the NSA as well.
In other words, the NSA has been able to set a couple of key constants that almost certainly make it far easier for it to access data encrypted using this particular technique. What's worrying here is that nobody in the open source world seems to have worried about this in the way that Ts'o did – to the point of refusing to follow a suggestion he was not happy with.
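Hearn's complaint is easy to make concrete. In the standard procedure, a curve parameter is derived by hashing a seed with SHA-1, which is only reassuring if the seed itself is explainable – a so-called "nothing up my sleeve" number. Here is a rough sketch of the difference (the derivation function is a simplification of the real X9.62 procedure, and the opaque constant quoted is the published seed for NIST's P-256 curve, reproduced here purely as an illustration):

```python
import hashlib

def derive_parameter(seed: bytes) -> int:
    # Simplified stand-in for the standard's procedure: the candidate
    # parameter is a deterministic function of the seed, so anyone can
    # re-run the derivation and check it.
    return int.from_bytes(hashlib.sha1(seed).digest(), "big")

# An *explainable* seed: nobody could have searched for a weak curve,
# because the seed's origin is obvious and leaves no room to maneuver.
explained = derive_parameter(b"pi digits 3141592653589793")

# By contrast, the published seed for the P-256 curve is an unexplained
# 160-bit constant - verifiable, but with no story about where it came from.
opaque_seed = bytes.fromhex("c49d360886e704936a6678e1139d26b7819f7e90")
opaque = derive_parameter(opaque_seed)
```

The hash step proves that the parameters follow from the seed; it proves nothing about how the seed was chosen. That is precisely the gap Hearn says "smells very strongly of something that might be hacked".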
This reminds us that open source's much-vaunted (not least by me) transparency is all very well, but it depends on sceptical engineers using that openness to check and challenge design decisions. Now, it's true that the NSA's subversion took place within the standards working group, not during the implementation. But the moral is that henceforth the basic standards should be suspect, and that open source cannot depend even on nominally respected standards bodies to produce designs that do not contain backdoors.
Indeed, John Gilmore, co-founder of the EFF, has just written a fascinating post about some of the things he saw happening in the IETF committee drawing up the IPsec standard:
NSA employees participated throughout, and occupied leadership roles in the committee and among the editors of the documents
Every once in a while, someone not an NSA employee, but who had longstanding ties to NSA, would make a suggestion that reduced privacy or security, but which seemed to make sense when viewed by people who didn't know much about crypto.
The resulting standard was incredibly complicated — so complex that every real cryptographer who tried to analyze it threw up their hands and said, "We can't even begin to evaluate its security unless you simplify it radically".
So the good news is that open source is certainly better than closed-source code when it comes to backdoors, because it's much easier to find them – provided somebody is actually looking for them. The bad news is that the NSA has poisoned the security well at an even deeper level than we feared, corrupting the security standards themselves.
The implications of these new leaks are many. First, all open source projects that implement security features should check their code to see whether it has been tampered with in some way. That's unlikely, but by no means impossible, and surely something the NSA has tried.
Then, they should check the code history in case there have been any "helpful" contributions from the NSA or people who might be linked to them. Even in the absence of such "help", any unusual decisions should be queried. Checking the underlying standards is a larger task that will require open source engineers from different projects to pool their skills, but it must be done. Finally, open source projects need to draw up guidelines for future code development that will help avoid these deep problems in the first place.
I am aware that taken all together, this represents an immense task. But that reflects the level of the betrayal we have discovered here. The NSA has consciously ruined the entire architecture of online trust, purely out of a selfish desire to make its job of spying on everyone, everywhere, all the time, easier.
Whatever political fallout the revelations of that massive surveillance may have, and whatever constraints are imposed and promises made, the simple fact is that the open source community can never trust standards bodies or government participation in projects again. Instead, the free software world must depend only on its own community of engineers, and even then must question everything that they produce.
If that sounds like too much of a burden, bear in mind this: given the complicity of commercial software companies in the NSA's global spying system, free software is the only hope we have of regaining a little privacy and security online. That may be a massive, intimidating responsibility for a ragtag group of volunteers to bear, but it's not one that can be avoided by anyone who cares about digital freedom.