It's "Computer MOT" time again, everyone

Microsoft's Trusted Computing Veep suggests vetting anything and everything which touches the 'Net. Perhaps this is not such a good idea?

Security wonks can generally be placed on a 3D - or perhaps more-D - spectrum: on one axis there are those who are naturally better suited to defence (your stoic network firewall architects) than attack (your rabid penetration testers); another axis comprises the full-disclosure vs. restricted-information dichotomy - those who argue relentlessly about when and whether the technical details of a bug should be published.

At the risk of sounding like Dungeons & Dragons, the third axis is a question of alignment, of order versus chaos. Followers of order believe that the structures of the physical universe can translate to the digital, so they expound the advantages of Identity, Trusted Computing Bases and Mandatory Access Control (MAC); followers of chaos believe that MAC is cute, but that with a good 'sploit you can DDoS the backbone and bring the Enterprise to its knees even if you can't read the launch codes. Followers of order counter that network access would be forbidden by policy, and the Chaosites respond with "have you seen this bug?"

Another trait of those at the far end of the order spectrum is that, every so often, one of them suggests that:

  • every person on the Internet needs a "driver's license", or...
  • every computer on the Internet needs an "MOT certificate"

This time it's Scott Charney - Microsoft corporate vice president for trustworthy computing - who has opted for the latter, publishing a paper, "Collective Defense: Applying Public Health Models to the Internet", in which he dives quickly into collectivist concepts of shared risk, using them to justify regulation of network usage.

His blog cover-posting explains:

Just as when an individual who is not vaccinated puts others' health at risk, computers that are not protected or have been compromised with a bot put others at risk and pose a greater threat to society. In the physical world, international, national, and local health organisations identify, track and control the spread of disease which can include, where necessary, quarantining people to avoid the infection of others. Simply put, we need to improve and maintain the health of consumer devices connected to the Internet in order to avoid greater societal risk. To realise this vision, there are steps that can be taken by governments, the IT industry, Internet access providers, users and others to evaluate the health of consumer devices before granting them unfettered access to the Internet or other critical resources.

The paper itself includes the bald assertion that "to address cyber threats generally, and botnets in particular, governments, industry and consumers should support cyber security efforts modelled on efforts to address human illnesses" - but why? Beyond the fact that infestations of both computers and humans are called "viruses", there's the disparity that computers are infinitely repairable and cheaply fixable, and that software and system architectures can be made hostile to infection in a way that we cannot reverse-engineer into humans.

The simplest way to make a computer immune to infection is to make it incapable of infection. Human beings cannot replicate that.

However: Charney's paper represents a beautiful strategic gambit, corralling the Linux, Unix and Macintosh "Ha Ha We Don't Need Antivirus" fanbois into the same space as vaccination deniers and other Typhoid Marys; Charney's position only stands by application of the "No True Scotsman" fallacy of Computing, which goes:

  1. Person A: All users of the Web need Anti-Virus and Anti-Malware software installed.
  2. Person B: I don't need AV, there are almost no viruses for my platform and it's natively very robust.
  3. Person A: Ah, well; you're not a real user, are you?

...but once regulation or a compliance requirement has been passed mandating the use of "antivirus", all the "un-real" platforms can be expunged for being non-compliant.

It's actually worse than that: iPhone, Android, legacy, embedded - there are innumerable systems which will not fit this model, and even your dishwasher will soon be on the net. They cannot all be regulated, they cannot all have antivirus and firewalls, and they should not have their adoption hampered by failing to meet some inapplicable and very Windows-centric metric.

A few more quotes from Charney's paper:

To realize this vision, governments, the IT industry and Internet access providers should ensure the health of consumer devices before granting them unfettered access to the Internet.

...

Under this model, a consumer machine seeking to access the Internet could be asked to present a "health certificate" to demonstrate its state. ... If the health certificate indicates a problem, a range of options are available.

...

If the problem is more serious (the machine is spewing out malicious packets), or if the user refuses to produce a health certificate in the first instance, other remedies such as throttling the bandwidth of the potentially infected device, might be appropriate. As devices converge ... denying a user complete access to the Internet, even for a short period, could well have damaging consequences. For instance, an individual might be using his or her Internet device to contact emergency services and, if emergency services were unavailable due to lack of a health inspection or certificate, social acceptance for such a protocol might rightly wane.
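
Charney's paper proposes no concrete mechanism for any of this, so consider the following a purely hypothetical sketch of the gatekeeping logic an access provider would be signing up for - the Access states, the HealthCertificate fields and the admit() policy below are all invented for illustration, not drawn from the paper:

    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Optional

    class Access(Enum):
        FULL = auto()         # "unfettered access to the Internet"
        THROTTLED = auto()    # bandwidth restricted pending remediation
        QUARANTINED = auto()  # walled off: remediation resources only

    @dataclass
    class HealthCertificate:
        fully_patched: bool
        antivirus_running: bool  # note the Windows-centric assumption baked in

        def indicates_problem(self) -> bool:
            return not (self.fully_patched and self.antivirus_running)

    def admit(cert: Optional[HealthCertificate], spewing_malice: bool) -> Access:
        if spewing_malice:            # "the machine is spewing out malicious packets"
            return Access.QUARANTINED
        if cert is None:              # the user "refuses to produce a health certificate"
            return Access.THROTTLED
        if cert.indicates_problem():  # one of the "range of options" available
            return Access.THROTTLED
        return Access.FULL

    # The failure mode the paper itself concedes: a certificate-less device
    # dialling the emergency services gets throttled like any other.
    print(admit(None, spewing_malice=False))  # Access.THROTTLED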

Yes indeed, "social acceptance might wane" with Government-imposed restrictions upon network access - and knowing that legislation and compliance standards cannot keep up with the diversity of the Internet and with "Internet Time" we can be pretty sure of the ossification of these controls and their subsequent drag upon innovation. Neither is addressed the unintended consequence of standardisation and regulation: a lack of web-diversity leading to yet another brand of security problem - just ask the Chinese Government about that.

Regrettably, at any and all extremes of the spectrums of security-thinking you will find reasoned arguments for the incredible, the implausible or the appalling - especially when viewed from the other extremity of the same spectrum. This paper sadly represents a fine example of the genre.