Proprietary software vs security


It's been a long time since I've had to install a piece of proprietary software, because generally my needs are met entirely by Debian's packages or at the very least by tools distributed as source.

Recently though I needed to temporarily install something for interoperability reasons in order to extract some information from an opaque blob of data.

The world has come on a long way since I last did this, and the vendor in question had a Linux version of their software. So far so good. Unfortunately the install instructions for this piece of software were, to paraphrase, "download this binary and run it with root privileges, following the on-screen instructions."

I beg your pardon? You expect me to take some piece of code with no explanation of what it is going to do and run it, not just with access to my own files but complete unrestricted access to my entire system? Are you mad?

Or more to the point, do you think I'm mad?

Now I'm not really one for grand conspiracy theories. Yes, there's some possibility that the company's website has been compromised, that there is malware in this binary designed to steal my data or bandwidth and send them back to some cackling madman sitting in a converted volcano. It's a possibility, but it's rather unlikely [1].

To be honest, what bothers me more is a simple question of engineering competence. I've spent too much time cleaning up after pieces of software that think they know best, making low-level alterations to systems that then cause problems.

I'm afraid I don't buy the "Trust us – we're professional programmers" line. If you want complete and unfettered access to my computer and data, then I expect to be told exactly what you're going to do. I want some meaningful assurance that you know what you're doing and that unbiased, clued-up people agree. That's just common sense, isn't it?

But this "You don't need to know what it does, just do as you're told and trust us" approach is the standard model for software delivery in the proprietary world. Microsoft themselves are fighting a losing battle trying to gain some sort of reputation for reliability and security, because by design every piece of software you install has to have complete freedom to do goodness knows what to the internals of your system, and no-one but the upstream developer has any idea what that is or how well designed it was.

I'll say that again in case it wasn't clear: a large part of Microsoft's reputation for reliability problems is not the direct fault of Microsoft's programmers, but rather the direct result of the proprietary business model. Even if they had the best programmers in the world working on Windows, they would still suffer from these problems.

Why? Because their software model makes it common practice – necessary, even – for people to run arbitrary programs with full access to their computer and no visibility of what they're actually doing.

I don't expect every user to read the source code of everything they use, but in the spirit of "sunlight is the best disinfectant" I have some level of confidence that a widely used system with publicly available source code will have attracted the kind of scrutiny that closed, binary-only software prevents. To me, that's the very least I'd expect before handing over the keys to my personal data.

[1] Of course these unpleasant bits of software do exist, albeit on a smaller scale. There are even various well-known programs out there which, somewhere in the depths of the small print, explain to you in obfuscated language that they're going to use your network bandwidth for their own purposes, or send details of your browsing activities back to them for "quality assurance purposes."
