Bruce Schneier, BT Counterpane's CTO, is probably wrong to say that software companies should be made legally liable for the security flaws in their products, as attractive as that idea might sound to the many victims of sloppy code.

But he is dead right in one respect - the industry needs a better mechanism for feeding pain back from its mostly suffering users to its mostly rich creators, and no such mechanism exists today.

Schneier’s fundamental point is hard to dispute: a large chunk of what the security industry has turned into is really an expensive means of protecting ourselves from bad – that is to say insecure – software. In no other area of commerce are producers allowed to be so unaccountable for mistakes that are plainly their own responsibility.

There are a number of pragmatic arguments against Schneier’s position, including the inhibiting effect it might have on software creation: software is so complex that it is almost impossible for any producer to guarantee their products are free of flaws. And this is before the costs of liability protection are assessed. It is not as if liability law has worked perfectly in the US; in some fields (healthcare, for example) it has driven up base costs so far as to simply enrich the legal profession without, arguably, increasing real protection at all.

If the risk of creating a flawed product or service for which the producer might be held responsible simply becomes another risk to be balanced by buying insurance, what has actually changed? Software would get more expensive, the insurance salesmen would make more commission, and while there might be fewer security holes they would never come close to disappearing. Security protection would still be necessary because many security issues stem from the way products are used, or misused, not just the way they are made.

There is an alternative worth trying first: disclosure. Don’t make software companies liable for bad code; make their customer businesses responsible for bad *outcomes*. In other words, don’t punish the people who inadvertently create the holes, punish the people who allow those holes to be exploited to the detriment of the population.

If bad code leads to data insecurity – the best measure of a negative security outcome – then punish the companies that allowed this to happen by insisting they tell the world in great detail. Fine them, humiliate them, make them name and shame the vendors they hold responsible for the failure. If necessary, hold their boards accountable for disclosures that lead to losses by their customers, whether these were caused by sloppy practices or by holes in software. They, in turn, will feed back some of this pain to the companies that sold them the software in the first place, either by not buying their products or by suing them.

The world is moving in this direction: a number of US states have now enacted laws that at least insist on the compulsory disclosure of data breaches. It’s early days, but this is already starting to have some effect. People are losing their jobs because of sloppy security, and pain is being fed back to those whose job it should be to protect the public.

Public exposure

There is today no reason why ordinary members of the public can’t sue these companies if their data has been leaked or abused. Does it matter whether software companies suffer directly? If they haven’t yet, they will eventually start to feel the uncomfortable gravity of these laws.

It is still early days, and this feedback mechanism is far from perfect. Many countries that should know better sit on their hands and do nothing – the UK, for instance – because the companies that would be held accountable have persuaded the political elites that there is nothing to be gained from making security cock-ups publicly notifiable. It doesn’t help that in the UK the political power base of the Blair-Brown Labour government is undergoing a slow death, which only reinforces inaction.

Meanwhile, the authorities have invested heavily in the concept of compliance as a way of forcing companies to do their bit. As many people have pointed out, compliance enforces discipline on industry but it doesn’t necessarily lead to better outcomes in the long run. Compliance is, arguably, too nebulous a mechanism for improving real security, and it is wrapped up in complex incentives that have more to do with avoiding the corporate criminality that festered (and still festers) in the US business world. The sector that has really benefited from compliance is the auditing industry.

You can only applaud a security company executive being honest. Schneier deserves credit for saying what many people believe to be true: that software companies get away with shocking mediocrity when it comes to security. Software companies often suck in direct proportion to their success. They are rewarded for getting code out the door, not for getting _good_ code out the door, with Microsoft’s history serving as a perfect embodiment of this odd distortion of commercial morality. But however compellingly fair it might seem on some levels, hiring a lawyer would not be the best way to make the software barons shape up and grow up.