The security expert Bruce Schneier is not one much given to hyperbole. So when he writes in connection with the newly-discovered Heartbleed flaw:
“Catastrophic” is the right word. On the scale of 1 to 10, this is an 11.
Then you know it’s serious.
Mark McLoughlin has put together a good round-up of key Heartbleed links, while Mathew Ingram has a more discursive introduction. It was discovered independently by an engineer from Google, Neel Mehta, and the Finnish security firm Codenomicon: there’s an interesting write-up of how the latter did it. And there’s an even more intriguing interview with the poor chap who introduced the bug. He’s adamant it was just a (rather serious) oversight:
Mr Seggelmann, of Münster in Germany, said the bug which introduced the flaw was “unfortunately” missed by him and a reviewer when it was introduced into the open source OpenSSL encryption protocol over two years ago.
“I was working on improving OpenSSL and submitted numerous bug fixes and added new features,” he said.
“In one of the new features, unfortunately, I missed validating a variable containing a length.”
After he submitted the code, a reviewer “apparently also didn’t notice the missing validation”, Mr Seggelmann said, “so the error made its way from the development branch into the released version.” Logs show that reviewer was Dr Stephen Henson.
Mr Seggelmann said the error he introduced was “quite trivial”, but acknowledged that its impact was “severe”.
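To see why a “quite trivial” oversight could have such a severe impact, it helps to sketch the pattern Seggelmann describes. The following is a minimal, hypothetical illustration in C, not OpenSSL’s actual code: a heartbeat reply echoes back as many bytes as the sender *claims* to have supplied, and without the length check the `memcpy` reads past the real payload into adjacent memory, leaking whatever happens to be there.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical, simplified sketch of the Heartbleed pattern.
 * `record` is the payload the peer actually sent (record_len bytes);
 * `claimed_len` is the length field the peer supplied in the message.
 * Returns 0 on success, -1 if the claimed length exceeds what the
 * record really contains -- the validation the original code omitted. */
int echo_heartbeat(const uint8_t *record, size_t record_len,
                   uint16_t claimed_len, uint8_t *out)
{
    /* The missing check: does the record actually hold that many bytes?
     * Without it, memcpy would read (claimed_len - record_len) bytes
     * of whatever lies beyond the payload in memory. */
    if ((size_t)claimed_len > record_len)
        return -1;  /* discard the malformed message */

    memcpy(out, record, claimed_len);
    return 0;
}
```

A peer that sends a 4-byte payload but claims 65,535 bytes is exactly the attack: with the check, the request is rejected; without it, up to 64 KB of process memory is echoed back.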
The NSA, meanwhile, was moved to issue its own denial:

Statement: NSA was not aware of the recently identified Heartbleed vulnerability until it was made public.
Although I might give Mr Seggelmann the benefit of the doubt, the NSA’s track record for veracity in the wake of Edward Snowden’s astonishing leaks has not been the best, and I am not inclined to extend the same courtesy to the agency. But that’s another article. Here I want to concentrate on what is perhaps the most interesting facet of this story for readers of this column: the fact that the OpenSSL code suffering from Heartbleed is open source.
Sam Tuke has a useful blog post entitled “What Heartbleed means for Free Software” where he makes a number of good points about how open source actually did quite well here:
Because Free Software makes its source code available to independent audit and review, such bugs are more likely to be found in important apps like OpenSSL. And because high profile Free Software projects are more apt to use automated code testing tools and bug detecting equipment, such bugs are more likely to be blocked from introduction in the first place.
Heartbleed proves that software has bugs, and that Free Software is no exception.
That’s pretty standard, as are the next two points:
The Codenomicon Security researchers who discovered the bug notified the OpenSSL team days before making the vulnerability public. The problem was fixed, with updates available for the most important Gnu/Linux servers, before the news even broke, as is the custom with security critical issues. Therefore the fix was extremely fast. Compare that to track records of leading proprietary software companies.
Heartbleed’s discovery took place during review of source code that wouldn’t have been possible had OpenSSL been proprietary. Vulnerabilities can be found with or without source code access, but the chances that they’ll be identified by good guys and reported, and not by bad guys who’ll exploit them, are higher when independent auditing of the code is made possible.
Heartbleed demonstrates that Free Software encourages independent review that gets problems fixed.
His most provocative point is the following:
Despite the understandable consternation surrounding heartbleed’s discovery, its impact is actually very encouraging. The shock resulting from the flaw reflects how widely OpenSSL is relied upon for mission critical security, and how well it serves us the rest of the time. 66% of all webservers have OpenSSL installed via Apache or Nginx, according to Netcraft. The list of top shelf security apps using OpenSSL in the back-end is a long one, including PHP (running on 39% of all servers). The fact that heartbleed has become front page news is a good thing for raising public awareness of the ubiquity of Free Software crypto across business sectors, and for reminding us how much we take its silent and steady protection for granted.
Heartbleed exposes Free Software’s importance and historical trustworthiness to a global audience.
That’s a nice “there’s no such thing as bad publicity” angle: the magnitude of the problem is a stunning demonstration of just how successful open source has become. However, leaving aside that positive spin, we do need to confront the fact that open source has introduced that “11 on a scale of 1 to 10” serious flaw into large swathes of the Internet. Does that mean that the much-touted open source methodology – that “given enough eyeballs, all bugs are shallow” – is simply not true? The inimitable Eric Raymond responds thus:
I actually chuckled when I read rumors that the few anti-open-source advocates still standing were crowing about the Heartbleed bug, because I’ve seen this movie before after every serious security flap in an open-source tool. The script, which includes a bunch of people indignantly exclaiming that many-eyeballs is useless because bug X lurked in a dusty corner for Y months, is so predictable that I can anticipate a lot of the lines.
The mistake being made here is a classic example of Frederic Bastiat’s “things seen versus things unseen”. Critics of Linus’s Law overweight the bug they can see and underweight the high probability that equivalently positioned closed-source security flaws they can’t see are actually far worse, just so far undiscovered.
That’s how it seems to go whenever we get a hint of the defect rate inside closed-source blobs, anyway. As a very pertinent example, in the last couple months I’ve learned some things about the security-defect density in proprietary firmware on residential and small business Internet routers that would absolutely curl your hair. It’s far, far worse than most people understand out there.
A great answer, but it still leaves us with the question: how did this particular bug slip through, even if many others were caught thanks to those many eyeballs, and even if closed source is probably far worse? The answer comes from Steve Marquess, a key member of the OpenSSL Software Foundation (OSF):
Lacking any other significant source of revenue, we get most of ours the hard way: we earn it via commercial “work-for-hire” contracts. The customer wants something related to OpenSSL, realizes that the people who wrote it are highly qualified to do it, and hires one or more of us to make it happen. For the OpenSSL team members not having any other employment or day job such contract work is their only non-trivial source of income.
At the moment OSF has about a hundred grand in open contracts — these are executed contracts with purchase orders, not just contracts in discussion or negotiation — that aren’t being worked because no one in this very small “workforce” of qualified OpenSSL developers is available to work on them. Even though they could make good money moonlighting they tend to their other responsibilities first: day job, family, OpenSSL itself. I’ve had prospective clients call me and beg for Stephen Henson to look at their problem. I have standing instructions from one client to please let them know if Andy Polyakov ever has any free time. I’ve had clients ask “would more money help”? Some queries I just turn down right away with “sorry, we’re unable to help”.
Even when we can staff a commercial contract, it can’t be rushed or skimped; these guys are just too used to taking pride in their work no matter what it is. Having worked for decades in industry and government I know that “good enough” and “quick and dirty” are the norm, so for some of the contract work I’ve tried encouraging a pragmatic “get ‘er done” attitude. They won’t do it; nothing less than the very best work they are capable of will do.
So part of the problem is really the hacker ethic of pride in your work, and never producing anything less than the best. But the other side of things is undoubtedly a question of resources, as John Naughton explains:
huge online companies, instead of developing their own SSL code, simply lifted the OpenSSL code and just bundled it into their web-service software. They are perfectly entitled to do this, provided that they adhere to the terms of open-source licensing. But in behaving as they did they have in effect been free-riding on the public domain.
Most open-source software – and OpenSSL is no exception – is produced voluntarily by people who are not paid for creating it. They do it for love, professional pride or as a way of demonstrating technical virtuosity. And mostly they do it in their spare time. Responsible corporate use of open-source software should therefore involve some measure of reciprocity: a corporation that benefits hugely from such software ought to put something back, either in the form of financial support for a particular open-source project, or – better still – by encouraging its own software people to contribute to the project.
The problem is not that people can free-ride on open source – that’s been happening right from the start, and is quite acceptable – but that people are free-riding on open source for code that is playing a uniquely-important role in the running of the secure Internet. What is irresponsible is that large, wealthy companies not only took the OpenSSL code to use for their own purposes, but that they didn’t even think about supporting it for their own good.
As we have just seen, one apparently tiny error is causing massive problems for just about everyone, including all those rich companies. If they had simply helped the OpenSSL project with a degree of support commensurate to the importance of the program, and commensurate with their own dependence on it, we would probably have avoided the current issue.
Since it is evident that even nominally intelligent people running these companies can’t or won’t see that, and have preferred just to take and hope, we need to find a way to encourage them – forcefully – to accept their fair share of the burden. But how might we do that? Tomorrow I’ll outline an intriguing solution proposed by someone who knows what he is talking about here (not me, obviously...).