In the days before the SOPA blackout a popular meme infected the interwebs:
Directed at the US Government, this article and its related discussion decried the creation of new legislation founded upon both a lack of understanding and a lack of evidence - challenging behaviour that is also exhibited by UK Parliamentarians, if not by all governments worldwide.
What the geek community at that time failed to consider was that it is itself a source of apparent evidence which might be repeated by journalists and/or relied upon by lawmakers, especially when it lends weight to a higher goal that they wish to achieve.
For a nascent example...
Let's begin with an analogy
Consider a large hospital - there are many patients and they will suffer diverse ailments; perhaps newborn infants with colic or elderly smokers with emphysema, motorcycle crash victims or those with various domestic cuts and bruises. Some can be healed quickly, some slowly, and some not at all.
Next: consider how long each patient has been ill - not necessarily since clinical diagnosis was made but instead since they first suffered symptoms; the first sneeze, cough or chest pain. Identify the person in that hospital who is there having suffered their condition the longest time - perhaps a smoker who started coughing 20 years ago - and record that person's timespan as a metric against the hospital.
Finally: repeat this task for many hospitals. Line up the per-hospital longest-sickness timespans sorted in numerically increasing order. Pick the middle figure from this list - or take a mean average across the whole list if you prefer.
Having done this, have you created a useful and valuable metric?
Not really - or at least not by doing this only once; what this process has created is a list of individual worst-cases per hospital:
If we compare hospital to hospital by these metrics then we are actually comparing them by proxy, one worst-case individual to another.
If we build up a history of this data over time then each hospital will still be measured only by performance of an edge case rather than typical performance.
If we quote the overall average figure then we're (again) using it as a proxy, this time for the health of the entire population of all hospitals.
The only thing to be said for this average is that it would be nice if it got smaller over time - but the means of generating it practically guarantees that the average will be a) quite big and b) atypical across the entire population.
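The pathology is easy to demonstrate with a simulation. Here is a minimal Python sketch using entirely invented data - 50 hypothetical hospitals of 500 patients each, with illness durations drawn from an exponential distribution with a 30-day mean - which computes the metric described above and compares it against the typical patient:

```python
import random
import statistics

random.seed(1)

# Invented data: 50 hospitals, 500 patients each, illness durations (days)
# drawn from an exponential distribution with a 30-day mean.
hospitals = [
    [random.expovariate(1 / 30) for _ in range(500)]
    for _ in range(50)
]

# The metric described above: record only the LONGEST duration per hospital...
worst_cases = [max(patients) for patients in hospitals]

# ...then take the median (or the mean) across hospitals.
metric_median = statistics.median(worst_cases)
metric_mean = statistics.mean(worst_cases)

# For contrast: the typical patient, pooled across every hospital.
typical = statistics.median(d for h in hospitals for d in h)

print(f"median of per-hospital worst cases: {metric_median:.0f} days")
print(f"mean of per-hospital worst cases:   {metric_mean:.0f} days")
print(f"median across all patients:         {typical:.0f} days")
```

With these made-up inputs the worst-case metric lands somewhere near 200 days, while the typical patient is ill for roughly three weeks - quite big, and atypical, exactly as predicted.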
"the average cyberespionage attack goes on for 416 days before a company discovers it's been hacked" bit.ly/JkjJAH via @KimZetter
A statement like this demands context and evidence if it is to have value:
What kind of average are we talking about? Mean, mode or median?
How did you come to these figures? By what process? How do you determine when the cyberespionage began? Are you counting outliers or is this a typical value?
What is the definition of cyberespionage in this context? Are we talking about actual spies or just that somebody's system caught a virus?
Is this better or worse than 1, 5, or 10 years ago?
According to Richard Bejtlich, chief security officer for computer security firm Mandiant, which has helped Google and many other companies conduct forensics and clean up their networks after an attack, the average cyberespionage attack goes on for 458 days, well over a year, before a company discovers it's been hacked. That's actually an improvement over a few years ago, he says, when it was normal to find attackers had been in a network two or three years before being discovered.
Sticking an oar into the waters of Twitter led to feedback from Mandiant Sales Engineer Lucas Zaichkowsky:
Mandiant specializes in targeted intrusions. Most incidents are related to espionage. Some are financial. The stats are real.*
M-Trends 2012: 416 days is median time from earliest evidence of compromise to Mandiant's involvement.*
M-Trends 2012 press release says it's based on hundreds of investigations over the past year.*
Hence the analogy above - so Mandiant arrives at a company to investigate an intrusion, determines the age of the intrusion / of this particular security illness, and uses that figure to label the company (cf: hospital) - so if one victim machine (from a few thousand healthy ones) bears evidence of a two-year-old intrusion then that company will measure 730+ days on the insecurity scale.
Then we line up all the companies in sorted order and pick the middle figure (416 days) or take an arithmetic mean (458 days) - and this we publish in the report...
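As it happens, the gap between the two published figures is itself a clue: whenever the mean (458) exceeds the median (416), the distribution is being dragged rightwards by a long tail of very old intrusions. Here is a deliberately invented list of ten per-company ages - not Mandiant's data, merely one of infinitely many datasets consistent with the headlines - constructed to reproduce both numbers:

```python
import statistics

# Ten entirely invented per-company intrusion ages, in days; NOT Mandiant's
# data, just one of many possible datasets matching the headline figures.
ages = [60, 100, 180, 300, 400, 432, 480, 520, 600, 1508]

print(statistics.median(ages))  # 416.0 - the middle of the sorted list
print(statistics.mean(ages))    # 458   - dragged upward by the 1508-day tail
```

Half of these invented companies sit at 400 days or fewer; the two statistics alone cannot distinguish that world from one where every company really is pwned for 15 months.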
So where's the beef?
It would be unjust to criticise the Mandiant report in general; ignore the headline statistics and if you survive the security buzzword drinking game ("APT? Drink!") there is much comfort to be found:
TECHNOLOGY COMPANY: 63 TOTAL COMPROMISED SYSTEMS, TOTAL SYSTEMS = 30,000
FINANCIAL COMPANY: 453 TOTAL COMPROMISED SYSTEMS, TOTAL SYSTEMS = 50,000+
HIGH TECH DEFENSE: 102 TOTAL COMPROMISED SYSTEMS, TOTAL SYSTEMS = 6,000
...and if their stats are as real as Zaichkowsky says (tested 30,000 machines?) it is heartening to learn that there are large tech companies out there with merely 0.21% rates of pwnage. There are also interesting breakdowns of what kinds of malware Mandiant discovered, what spoor provided evidence of previous infestation, expositions on passive backdoors and that they are nothing new, on why there is so much off-the-shelf malware, and much explanation that the largest threat surfaces are to be found when two or more corporations merge or partner.
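For what it's worth, the compromise rates implied by those per-company figures take one line each to compute (the financial company's fleet is given only as "50,000+", so its rate is an upper bound):

```python
# Compromise rates implied by the per-company figures quoted above.
for name, compromised, total in [
    ("Technology company", 63, 30_000),
    ("Financial company", 453, 50_000),   # "50,000+": rate is an upper bound
    ("High-tech defense", 102, 6_000),
]:
    print(f"{name}: {compromised / total:.2%} of systems compromised")
```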
The big issue is the mediagenic, quotable, headline statistic:
416 [is the] median number of days that the attackers were present on a victim network before detection
...in the colourful sidebar on page 3, expanded upon by Bejtlich for WiReD and spontaneously cited by Hill. Figures like this are attractive nuisances for cyber-inclined journalists in search of a byline - the 416 days quote seemingly implies some 15 months of idle ignorance on the part of an average firm whilst hackers run riot through their systems.
And we are all nearly-average firms, surely? Therefore are we all pwned?
It's an implicit, attractive, scary story - and for all we know it may very well be correct, except that you cannot validly reach the implied inference from the statistics as presented. Aside from the "age" issues outlined in the hospital analogy above, there is also an inherent selection bias: a security consultancy is more likely to be called into a firm which has a problem. Richard Bejtlich flatly denies that Mandiant is making any claim outside the scope of the report, and he is totally right - they are not; however, that does not stop the media spinning a story by means of tweets and selective quoting.
It would be less open to poor inference if Mandiant had said something like: the worst case customer we experienced had been pwned for five years without knowing it - but they did not say that; instead they headlined a different statistic, one of their own making, precise but not terribly meaningful.
A mountain out of a molehill?
The information security industry is at diverse risk of regulation of what we are obliged to do and how we go about doing it, and that risk is only heightened when we provide the Government with sticks with which to beat us. To repeat an old saw for this column: when a Home Secretary attributes a spike in cyberthreats to a misunderstanding of how polymorphic malware works, and when US Cyber-General Keith Alexander says:
... malware is being introduced at a rate of 55,000 pieces per day, or one per second.
...and we find that he is actually echoing a McAfee marketing report (PDF, pg.7)
We have seen the platforms it targets evolve every year with increasingly clever ways of stealing data. In 2010 McAfee Labs identified more than 20 million new pieces of malware.
Stop. We'll repeat that figure.
More than 20 million new pieces of malware appearing last year means that we identify nearly 55,000 malware threats every day. That figure is up from 2009. That figure is up from 2008. That figure is way up from 2007. Of the almost 55 million pieces of malware McAfee Labs has identified and protected against, 36 percent of it was written in 2010!
...then faced with the Strangelovian lunacy of governments recycling marketing figures that they do not understand to justify budgets which they can ill-afford in order to bring about ends which are oppressive, illiberal and which are shooting at the wrong target anyway, one cannot but rage at providing them with even more sexy but meaningless figures to misuse.
"Stop. We'll repeat that figure."
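Repeating the figure takes a second; checking it takes thirty. Twenty million new samples a year does indeed work out to "nearly 55,000 every day" - but note that 55,000 a day is nowhere near "one per second", which would require over 86,000 a day. The back-of-envelope arithmetic:

```python
# Back-of-envelope check on the McAfee figure quoted above.
new_samples_2010 = 20_000_000   # "more than 20 million new pieces of malware"
per_day = new_samples_2010 / 365
per_second = per_day / (24 * 60 * 60)

print(f"{per_day:,.0f} per day")       # ~54,795: "nearly 55,000" checks out
print(f"{per_second:.2f} per second")  # ~0.63: one every ~1.6s, not one per second
```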
So please let this be a call to action for the entire security industry - if you are going to provide glossy marketing brochures:
Do please lay out your figures in plain English. Do please show your working. Do please show your math. Customer confidentiality is still satisfiable.
Just please stop with producing cyberfud headlines.
 Been there, done that.
 Further: if you change the number of hospitals year-on-year then you will distort the median if the overall list length is too small.
 This article cites the WiReD article, as-revised to quote the mean figure following the events documented in this article.
 CORRECTION: previous version of this article suggested that Mandiant determined the age of the oldest exploitation within the company, intrusion, botnet, malware or otherwise - 0039BST 2012/5/9
 Something else which the Mandiant report, to its credit, makes clear. pg.16