
The botnet business model is alive and well and apparently unstoppable

It sounds counter-intuitive but a new analysis from Imperva’s Incapsula division has confirmed one of the Internet’s most surprising secrets: a large proportion of website traffic isn’t generated by human beings.

The firm’s figures for 2015 (measured between July and October on websites using the firm’s security) found that roughly half of all traffic was generated by automated bots, both good ones such as search engine spiders (19.5 percent) and bad ones such as spam engines and pricing scrapers as well as DDoS traffic (29 percent). Only a fraction over half was initiated by a person clicking a mouse.
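The quoted shares do add up to the "roughly half" claim, as a quick sanity check shows (percentages taken from the report; the variable names are our own shorthand):

```python
# Quick arithmetic on Imperva's 2015 traffic shares (percentages from the report).
good_bots = 19.5   # search engine spiders and other legitimate crawlers
bad_bots = 29.0    # spam engines, pricing scrapers, DDoS traffic
humans = 100.0 - (good_bots + bad_bots)

print(f"bots: {good_bots + bad_bots:.1f}%")   # 48.5%
print(f"humans: {humans:.1f}%")               # 51.5% - "a fraction over half"
```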

Normally this traffic is invisible: nobody other than ISPs and website owners pays any attention to it or is even aware it exists. Bad bot traffic is bad for self-explanatory reasons, and it has been growing steadily as a proportion of all traffic in recent years. However, in 2015 Imperva’s number-crunchers noticed something unexpected: human traffic is rising relative to bot traffic. It sounds innocuous, but it means that automated traffic is still growing, just less rapidly than human traffic, which is being boosted by the Internet’s spread in developing countries. This might or might not be temporary as the Internet adjusts to being global and not just something of importance in the US, Europe and bits of Asia.

Bad bot traffic remains a major issue, with DDoS more than making up for a decline in spambots caused by Google’s Penguin update, which from 2012 onwards penalised sites linking to spam hosts. Meanwhile, the alarming growth of DDoS is contentious, with some arguing (see our slideshow on Level 3) that service providers can only turn back this tide by intervening at network level. Fixing the problem with mitigation alone is at best a way of deflecting it somewhere else.

We decided to ask report co-author and Imperva digital strategist Igal Zeifman for his views on the hidden world of bots: the good, the bad and the frankly hopeless. It matters, but if most of us aren't even aware of it, how?

Computerworld UK: Can you explain the botnet situation as you saw it in 2015 – what are the trends?

Zeifman: Bad bots originate from botnets, so the increase in their activity comes in parallel with the increase in botnet resources. We saw a steep increase in DDoS botnets in Q3, which led to double the amount of attacks on our clients when compared to Q2.

It was surprising to hear you make a distinction between ‘good’ bots and bad ones. Aren’t bots always bad?

Zeifman: Not at all. When a Google crawler goes through your website to index its pages, or when a marketing tool is used to establish how popular a piece of content is on social media, these are good bot visits. Any legitimate interaction between an automated piece of software and a website is considered a ‘good bot’ visit.
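A common first pass at telling the two apart in server logs is to match on the visitor's user-agent string. The sketch below is illustrative only, with small hand-picked marker lists rather than a real signature database; note that Imperva's 'Impostor' category exists precisely because bad bots can fake browser-like strings, so serious detection also needs behavioural and IP-level checks:

```python
# Naive user-agent based bot triage - a simplified sketch, not production logic.
# The marker tuples below are illustrative examples, not a complete list.
GOOD_BOT_MARKERS = ("Googlebot", "bingbot", "DuckDuckBot")
BAD_BOT_MARKERS = ("sqlmap", "masscan", "zgrab")

def classify_visit(user_agent: str) -> str:
    """Label a visit as 'good bot', 'bad bot' or 'probably human'."""
    if any(marker in user_agent for marker in GOOD_BOT_MARKERS):
        return "good bot"
    if any(marker in user_agent for marker in BAD_BOT_MARKERS):
        return "bad bot"
    return "probably human"

print(classify_visit("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # good bot
print(classify_visit("sqlmap/1.0"))                               # bad bot
```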

Imperva also notes that their activity is lagging behind the growth in users. So why does the activity of good bots matter anyway?

Zeifman: Good bots are a big part of the Internet ecosystem. Also, they are behind a good chunk of bandwidth consumption. Specific crawlers, such as Googlebot, provide crucial business functions.

You found that traffic generated by humans crossed the 50 percent mark in 2015 – how did you measure this and what does it signify?

Zeifman: We sampled 19 billion visits to websites on Imperva Incapsula's network. As for the significance of the 50 percent mark, I think it changes the way we think about online traffic. For us, during the past four years, it was clear that most visitors were not human and the number was expected to continue growing. This year’s data points to a different trend, one which we need to investigate further. Simply put, if you had asked me a year ago to predict the state of online traffic in 2026, I would have estimated that 80 percent would be bots and 20 percent would be humans. Having a data set that forces me to consider an opposing scenario shows just how complex the traffic landscape really is, and how much it is affected by trends in cyber space and the real world.

And yet botnets remain a major force so what, if anything, can be done to counter them? They seem almost irresistible.

Zeifman: The lack of security awareness is the major enabler for the creation of botnets, which are large clusters of compromised, under-protected, connected devices. Furthermore, the promise of profit is a major motivation for people who compromise these devices. For example, in a world where paying a ransom to a botnet operator is completely out of the question, no one would use a botnet for an extortion attack. Botnets are a threat, and dealing with them begins with promoting some general ground rules. In the absence of these rules, anarchy will continue to thrive.

What about botnets used in DDoS attacks? These seem to be the dominant force in botnet activity.

Zeifman: Absolutely. In our research, DDoS bots are categorised as Impostors, which are behind nearly a quarter of all visits. Now, other types of attacks may employ Impostor (browser-like) bots as well. However, DDoS bots are the majority of that traffic.

Can anything be done to stop the growth of DDoS-for-hire botnets? Some firms such as Level 3 advocate dealing with them at service provider level and yet others in that industry are reluctant to get involved.

Zeifman: Yes. I think there is a lot that can be done against them on the service provider level, and I think that Internet Service Providers (ISPs) are in a good position to prevent DDoS attacks in general. However, I also think that there are things that can be achieved on a regulatory level. Many of these attacks are motivated by extortion attempts. A rule or law that prohibits ransom payments to DDoS extortionists, or at least severely limits a company’s options and a perpetrator’s likelihood of profit, would be extremely helpful. Also, the prosecution of botnet-for-hire operators, which we have seen in recent years, is an important step towards eliminating the motivation behind DDoS-for-hire activities.

You note the continued decline in the volume of spam bots, in your figures only 0.1 percent of traffic. What caused this and what does it tell us about the relative decline of the spam model?

Zeifman: The creation of backlinks for Black Hat SEO purposes was one of the major motivations behind spam attacks. The decrease in spam bots is tied to Google’s decision to penalise websites that benefit from these links, and also anyone who hosts spam links. I don't think this means the end of spam and, in our case, 19 million spam attack incidents (0.1 percent of the 19 billion visits in our sample) is still a lot. Having said that, I think that spammers have lost one of their main incentives. I also think that the fear of penalty has led more website owners to take steps towards hardening spam protection in their comment sections.
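The spam figure checks out arithmetically: 19 million incidents in a 19-billion-visit sample is one visit in a thousand, i.e. the 0.1 percent quoted above:

```python
# Spam's share of the sampled traffic, from the figures in the interview.
spam_incidents = 19_000_000
total_visits = 19_000_000_000

share = spam_incidents / total_visits
print(f"{share:.1%}")  # 0.1%
```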

The Internet has been a phenomenon of the developed world until now but this is changing quite rapidly. How might this change or perhaps even worsen botnet crime?

Zeifman: Bad bots originate from compromised devices that have been injected with remotely operated malware. These are found more often in developing countries, which have a higher percentage of first-time computer and device owners, looser security standards and vendors who are generally less security-aware. The experience of more frequent Internet usage promotes awareness, which makes a hacker's job more difficult.

You allude to inter-governmental action. Is this realistic? Cyber-extortion seems almost unstoppable in the absence of international police enforcement.

Zeifman: I think it has to be. This is the new reality, and we have to be creative, but I'm not sure why we should accept a situation where an organisation has the option of paying off cyber criminals, or where services that enable cyber extortionists are allowed to operate in broad daylight, as is the case with DDoS-for-hire services. I think that a decisive international policy is the right answer and I think it will happen sooner than we think. This week, the world came together and recognised the necessity of ground rules that help preserve our ecosystem. If we can deal with an issue of that scope, laying down some ground rules to deal with cyber criminals shouldn’t be that unrealistic.