As you may have noticed, a lot of software has a lot of bugs. Even open source code has them, but the main damage tends to come from certain well-known, widely-used proprietary programs - not forgetting well-known, widely-used open source programs with proprietary layers like Android. In fact, some estimates put the annual damage caused by serious software flaws in the hundreds of billions of pounds range, which probably means that many trillions of pounds' value has been destroyed thanks to buggy, flawed software over the years.
And yet nothing happens. People go on using this stuff, people go on losing large amounts of money, time, personal data - everything - but they go on using the stuff. It's bizarre that most people accept blue screens of death and a menagerie of serious malware as if they were up there with death and taxes in terms of inevitability and unavoidability. There is one important consequence of this: the companies that make the software with these huge flaws go on making it with huge flaws. Because there is almost no downside.
There's little reputational downside, because ill-informed media outlets - notably the BBC - for years simply omitted to mention that we're generally talking about Windows programs here, as if the problems were inherent in all computers. But even nowadays, when there may be the odd reference to what platform the malware is feeding off - although, oddly enough, it's often Android that is mentioned here, rather than Windows - there is still no financial downside to causing this mayhem.
This has led many people to suggest that we need a liability law for software, so that when some code is responsible for financial losses, whether directly through money being siphoned off somehow, or indirectly through the costs of compensating irate customers and cleaning up the mess, the software company responsible can be sued for that loss. The logic is that once the big software companies have been sued for a few million - or even billion - once or twice, this will concentrate their minds wonderfully, and they will start to make their software as secure as it can be, unlike at present when "good enough" is, well, good enough.
But, of course, there's a problem here: where does this leave open source? As I noted above, open source has bugs too, and probably leads to financial losses from time to time. What happens when users start suing open source companies, which are generally smaller than proprietary ones? What happens when there isn't even a company they can sue - and they try to sue the relevant developers instead?
Eleven years ago I interviewed the security guru Bruce Schneier, who had advocated such a liability scheme (at least, he did then; I'm not sure about now). So I asked him about this point:
Q. If those writing software became liable for its faults, as you suggest, what would be the situation for open source software?
A. I don't know. I presume there would be some exemption for open source, just as the United States has a "good Samaritan" law protecting doctors who help strangers in dire need. Companies could also make a business wrapping liability protection around open source software and selling it, much as companies like Red Hat wrap customer support around open source software.
If even Schneier couldn't come up with anything better than an exemption, it seemed clear that liability was not the way to go. At least, that's what I thought until I read this piece by Walter van Holst over on medium.com. He proposes introducing liability for software, but with a key difference from traditional schemes:
liability is not a goal in itself, but a means to create accountability for issues in software. And when accountability becomes the goal, it is much easier to prevent unintended consequences by focusing on transparency. For example through taking into account to what extent security of products can be assessed by downstream recipients, security warnings have been given to downstream recipients and issues can be fixed by downstream recipients. So the extent to which strict liability can be attributed should be a function of a) source code availability and b) vulnerability disclosure and patch availability. And since such a function is by definition qualitative and not quantitative, its curve must be fuzzy.
The appearance of "source code availability", "vulnerability disclosure" and "patch availability" gives a hint of where this is going. Here's how it works for closed-source code:
if the source code is proprietary (no source code available and no right to change it) and a vendor does not disclose vulnerabilities it has been made aware of, a strict liability model would come into play.
A dependency on disclosure of vulnerabilities in order to be able to exonerate oneself from product liability would be more a matter of private law.
However, the capability to exonerate oneself should only come into play for vendors of products that are fully proprietary, so not even auditable, after they have provided fixes for known vulnerabilities. This is because disclosing a vulnerability without allowing or providing the means to fix it should not fully disculpate one from responsibility. Also, one disclosure is not another. Merely informing that there is a problem may not be due diligence; informing users about the exact nature of the problem and providing suggestions for mitigating measures more likely is.
And here is how open source would fare:
This type of software is both auditable and fixable, since not only the source code is available but the license also allows for changing it. Vulnerabilities are commonly disclosed after patches redressing them have been made available. Even without taking into account additional factors, such as that it often is made available for free and that the relation between producer and user is sufficiently nebulous that damage is rarely foreseeable, it is clear that there already is a sufficiently high level of accountability that introduction of strict liabilities may be unnecessary or even counter-productive. The lack of product liability (all open source licenses exonerate product liability to the fullest extent possible) clearly has done no harm in this regard.
It's a clever idea that seems to create exactly the right kind of pressure to improve code for exactly the groups that need it, by focussing on an aspect that has hitherto been rather overlooked - accountability. I hope others with more legal knowledge than I have will take a look at it to see whether it's something that might actually work in practice.