I still regularly hear people asking about open source and security. The usual question goes along the lines of "surely if the source is out in the open bad guys can do bad things". The implication is that keeping the source code secret aids security and having it public degrades security.
Now, I'd not suggest that open source possesses some form of magic that always delivers more security - Alec Muffett recently debunked that idea here on CWUK. But I have two stories I watched unfold that support the assertion that it can help make security better, in the context of a properly-functioning community.
The first dates back a few years. It's told in technical detail by one of its central figures, Sun engineer Alan Hargreaves. In 2007, a really nasty exploit showed up in Solaris. It turned out that, under amazingly simple conditions, simply logging in to a Solaris system using Telnet - enabled by default - gave the intruder root access and thus the ability to do whatever evil was wanted.
The bug had existed since at least 2002, when a change to existing code made a small programming error (that had probably been there for ever) exploitable. Once the exploit was known, the fact the channels of communication were all open and public allowed a variety of experts to collaborate and create a fix for the defect in record time. Whereas a closed, proprietary environment would have ensured customers, hobbyists, field engineers and head-office programmers were siloed away from each other, open source did its job and the defect was patched rapidly.
In this case, many eyes did not detect the defect before an exploit was devised - indeed, who knows how many black-hats had been using it for years to gain access to Solaris. But the transparency and egalitarian ethos of the community got the problem resolved almost before anyone in the wider world knew there was a problem.
The second dates back longer, but is currently more topical. Recent discussion of a PHP denial-of-service attack, involving a logic error in converting a text string into a double-precision floating-point number, uncovered an identical error in the Java libraries handling the Double class. The defect allowed an attacker to hang a Java-based web server and thus mount a denial-of-service attack. As it turns out, the defect had been reported in a bug report in 2000, back in the days when the source code to Java was not open, but it had never been fixed - perhaps because it seemed esoteric without an accompanying exploit.
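To make the mechanism concrete, here is a minimal sketch. The much-publicised trigger was the decimal string 2.2250738585072011e-308, a value sitting just below the smallest normal double, right on a rounding boundary that the decimal-to-binary conversion loop could not settle on. On a patched runtime the parse below completes instantly; on unpatched runtimes of the era the same call never returned. (This is an illustration of the defect's shape, not the original exploit code; the Accept-Language header mentioned in the comment is one commonly cited delivery route, not something taken from this article.)

```java
public class HostileDouble {
    public static void main(String[] args) {
        // The boundary value at the heart of both the PHP and Java bugs:
        // a decimal string just below the smallest normal double, on a
        // rounding boundary the conversion loop could not converge on.
        String hostile = "2.2250738585072011e-308";

        // On unpatched runtimes this call looped forever, so any server
        // that parsed an attacker-supplied number (for example a quality
        // value in an HTTP Accept-Language header) could have a worker
        // thread hung indefinitely - a denial of service from one request.
        double d = Double.parseDouble(hostile);
        System.out.println(d); // completes normally on patched runtimes
    }
}
```

The fix was in the runtime library itself, which is why no amount of careful application code could defend against it short of filtering the string before parsing.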
Once Java was open source, the defect was found again and reported, but for reasons that have never been properly explained it was left unresolved both in proprietary Java and in its counterpart OpenJDK, which is maintained mainly at arm's length from the community by Sun/Oracle engineers. Once the problem was out in the open, members of the OpenJDK community examined it and supplied a fix for the defect. That fix is now being rolled into OpenJDK 6 and OpenJDK 7, the versions of Java most GNU/Linux distributions use.
Many eyes did in fact detect the defect before an exploit was devised, but their input was neglected by the narrow channel through which their voices had to penetrate. The second time around, though, the transparency of the community got the problem resolved almost before anyone in the wider world knew there was a problem.
Security and Community
Two old defects, both capable of damaging exploits. The first was fixed in-community as soon as it was noticed; the second took the community two attempts to persuade the corporate gatekeepers to attend to it. In both cases, having an open community and heeding it delivered dividends.
These are just examples. The world of open source is full of cases where openness of information and process allow properly-functioning open-by-rule communities to address security issues fast. This is the real meaning of the idea that open source is good for security; no magic, just symbiosis.