Microsoft called its first month of predicting whether hackers will create exploit code for its bugs a success - even though the company got its forecast right less than half the time.
"I think we did really well," said Mike Reavey, group manager for the Microsoft Security Response Centre (MSRC), when asked for a post-mortem evaluation of the first cycle of the team's Exploitability Index.
"Four of the issues where we said consistent exploit code was likely did have exploit code appear over the first two weeks. And another key was that in no case did we rate something too low."
Last month, Microsoft launched the index, which rates each vulnerability on a three-step scale predicting, in descending order of severity, that researchers or hackers would come up with a consistently working exploit, develop an exploit that worked only some of the time, or fail to craft attack code at all.
The predictions were valid for the following 30 days, or until the next cycle of patches was released.
Of the nine October vulnerabilities marked "Consistent exploit code likely," four did, in fact, end up with exploit code available, said Reavey, for an accuracy rate of 44%. None of the seven tagged "Inconsistent exploit code likely" saw actual attack code. But Microsoft correctly called all four bugs last month tagged "Functioning exploit code unlikely": as Reavey said, exploit code did not appear for any of them.
All told, Microsoft correctly predicted the exploitability of eight of October's 20 rated vulnerabilities, an accuracy rate of 40%. (One of the month's 21 bugs did not receive a rating, as Microsoft said public exploit code was already circulating, making a label moot.)
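The accuracy figures above follow directly from the counts the article reports; a minimal sketch of the arithmetic, using only numbers stated in the story:

```python
# Counts from the article: of October's 20 rated bugs, 9 were marked
# "Consistent exploit code likely" (4 correct), 4 were marked
# "Functioning exploit code unlikely" (all 4 correct).
rated = 20
consistent_rated, consistent_correct = 9, 4
unlikely_correct = 4

# Hit rate on the "consistent" tier alone
tier_accuracy = consistent_correct / consistent_rated * 100

# Overall accuracy across all rated vulnerabilities
overall_accuracy = (consistent_correct + unlikely_correct) / rated * 100

print(round(tier_accuracy))    # 44
print(round(overall_accuracy)) # 40
```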
That accuracy rate was down slightly from what Microsoft claimed during a five-month internal run of the index before it announced the program in August at the Black Hat security conference. According to a presentation Reavey gave at the conference, during the five months it assigned ratings, Microsoft correctly predicted the exploit code availability of 17 out of 36 bugs, for an accuracy rate of 47%.
October's showing didn't faze Reavey, who said what is key is that Microsoft nailed the four for which exploit code was unlikely. "It's important that we don't rate something less likely [to have exploit code] than it turns out to be," he said, "because then customers would have inaccurate information for prioritizing patches."
Microsoft has promoted the index as another piece of information that users, particularly enterprises, can use to decide which vulnerabilities should be patched immediately, and which ones can wait.
A lot of criticals
That's how John Pescatore, an analyst and research fellow at Gartner Inc., sees it as well. "Take a month like October, where you have a lot of criticals," Pescatore said, referring to Microsoft's most serious ranking in its four-step threat scoring system.
"If you're trying to decide which to patch first, you can next look at the [Exploitability] Index to rank the criticals and prioritise your patching."
While he agreed with Microsoft on the use of the predictions, he questioned their worth. "That's the only thing it's good for," he said, "to prioritise your patching. I don't think there's any accuracy here. Microsoft doesn't have a real way of forecasting exploits."
When Microsoft first briefed Pescatore about the index last summer, he said his response was: "What does this mean to me? Consistent, inconsistent, it's still not enough data. And they're erring on the fear-factor side," he added, pointing to the large percentage of October vulnerabilities - 16 out of 20, or 80% - that Microsoft said would see some kind of exploit code in the following 30 days.
"I'd rather see them use standard CVSS (Common Vulnerability Scoring System) ratings like everyone else, because they take into consideration exploitability," Pescatore said. The CVSS system is used by, among other organizations, the United States Computer Emergency Readiness Team (US-CERT).
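The CVSS ratings Pescatore refers to do fold exploitability into the score: the CVSS v2 base score (current at the time of the article) includes an explicit exploitability subscore built from access vector, access complexity, and authentication. A minimal sketch of that subscore, using the published v2 metric weights:

```python
# Hedged sketch: the CVSS v2 exploitability subscore, computed as
#   20 * AccessVector * AccessComplexity * Authentication
# using the metric weights from the CVSS v2 specification.
AV = {"network": 1.0, "adjacent": 0.646, "local": 0.395}   # Access Vector
AC = {"low": 0.71, "medium": 0.61, "high": 0.35}           # Access Complexity
AU = {"none": 0.704, "single": 0.56, "multiple": 0.45}     # Authentication

def exploitability(av, ac, au):
    """CVSS v2 exploitability subscore on a 0-10 scale."""
    return 20 * AV[av] * AC[ac] * AU[au]

# A remotely reachable, easy-to-exploit, unauthenticated flaw scores
# at the top of the scale:
print(round(exploitability("network", "low", "none"), 1))  # 10.0
```

Unlike Microsoft's three-step index, this yields a continuous score, which is part of Pescatore's point about wanting "more data" than a coarse consistent/inconsistent/unlikely label.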
'Black swan' effect
Andrew Storms, director of security operations at nCircle Network Security, had a slightly different take on Microsoft's prognostications. "I think it makes complete sense that they would be conservative," he said. "They're trying to avoid the 'black swan' effect. If they ran into a black swan, it would be more detrimental to both Microsoft and its customers."
The black swan theory posits that there will always be major, disruptive events that are impossible to predict.
"So if [the Exploitability Index] was inaccurate on the wrong side, I would be more concerned," Storms said, nodding to the four "Functioning exploit code unlikely" ratings that Microsoft got right.
Like Pescatore, Storms also questioned the usefulness of the predictions. The existence of exploit code doesn't necessarily mean a working exploit is on the loose and attacking PCs, he said.
"There is some value for those who need another data set," he conceded. "But it's still a little early. Most enterprises haven't been able to integrate it into their patch cycle in just 30 days. That's just too much to ask of them. Many are a month or two behind in rolling out patches."