Legislation, financially driven attackers and high-profile breaches have changed the economics of security. We need to rethink the motivations of attackers, and the new attacker economy, given the growing trade in stolen identity information and the rise of organised electronic crime. We need to study hackernomics. This is a new term, so allow me to offer a definition:
Hackernomics (noun, singular or plural): A social science concerned with description and analysis of attacker motivations, economics and business risk. It is characterised by five fundamental laws and eight corollaries.
Law 1: Most attackers are not evil or insane; they just want something.
Some people work on the premise that hackers are evil, but in reality most attackers are looking for weak targets and the path of least resistance. This is actually very good news, and it leads us to Corollary 1.a.
Corollary 1.a: We do not have the budget to protect against evil people, but we can protect against people who will look for weaker targets.
This tells us that security, as well as the appearance of security, is critical in reducing business risk. Many companies struggle with what level of investment to put into security. Entry-level security means barely passing compliance audits, but companies that just squeak by are unlikely to be spared from attacks if they hold valuable information. This means that entry-level security must be at least as high as industry norms, especially since, if a breach does occur, regulatory authorities will compare the victim's security policy with industry "best practices".
Corollary 1.b: Attackers may attack you; auditors will show up.
Many organisations fear a compliance violation more than a breach. This is mainly because audits create an impending event: someone will definitely inspect your security, and that makes a far more compelling business case than the fear of a possible attacker.
Law 2: Security is not about protecting something completely; it is about reducing risk at some cost.
As an industry we do not know how to make non-trivial systems 100% secure, but we can mitigate risks by investing wisely in training, process improvement and tools.
Corollary 2.a: In the absence of metrics, we tend to over-focus on risks that are either familiar or recent.
Two of the most common mistakes in security spending are overcompensating for, and overspending on, risks that have been in the media or are otherwise familiar. In 2006, a huge amount of money was spent on data encryption. While data loss is a major risk for corporations, a great deal of that spending was prompted by highly publicised breaches rather than by a thorough comparative analysis of business risks.
Law 3: Most costly breaches come from simple failures, not from attacker ingenuity.
Lost backup tapes, stolen laptops and unsecured servers have been to blame for most of the high-profile data breaches since the first US disclosure legislation, California Senate Bill 1386, came into force.
Corollary 3.a: Hackers can be very creative if given incentive.
Botnets, organised attacks and distributed denial-of-service attacks have shown that attackers can be very creative when motivated by target value or prestige.
Law 4: In the absence of security education or experience, people, whether developers, users, testers or designers, make poor security decisions with technology.
Technology often masks our natural instincts about safety and security. Because few developers and users are properly educated about risks, especially software security risks, they often make decisions that favour the qualities they are more familiar with, such as usability and performance, which frequently run directly counter to security.
Corollary 4.a: Software needs to be easy to use securely and difficult to use insecurely.
Users, developers and system administrators need software that is unobtrusively secure, with secure defaults. This recognises that systems developers know, or should know, more about the security of their systems than users do, but that security should still be tunable to meet the needs of advanced users.
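As a sketch of what "secure by default, tunable for advanced users" can look like in code (all names here, such as SecureConnection, are hypothetical illustrations, not from any real library): the safe behaviour costs the caller nothing, while the unsafe behaviour requires an explicit, noisy opt-out.

```python
import warnings


class SecureConnection:
    """Hypothetical connection wrapper whose defaults favour the secure path."""

    def __init__(self, host, *, verify_tls=True, min_tls_version="1.2"):
        # Secure defaults: callers get certificate verification and a
        # modern TLS floor without having to ask for them.
        if not verify_tls:
            # Insecure use is still possible for advanced users,
            # but it is deliberate and deliberately noisy.
            warnings.warn(f"TLS verification disabled for {host}")
        self.host = host
        self.verify_tls = verify_tls
        self.min_tls_version = min_tls_version


# The easy call is the secure one; disabling safety takes extra, visible effort.
conn = SecureConnection("example.com")
print(conn.verify_tls)  # True
```

The design choice is that keyword-only arguments make the insecure path impossible to trigger by accident with a stray positional argument.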
Corollary 4.b: Developers are smart people who want to do the right thing.
Incomplete requirements, undocumented assumptions, lack of security knowledge and bad metrics can push them to do the wrong thing.
Law 5: Attackers usually do not get in by breaching a security mechanism; they leverage functionality in some unexpected way.
Attackers take the path of least resistance, which usually does not mean fighting security controls directly. Instead, they are more likely to look for an architectural or coding flaw, such as SQL injection, in the non-security code that powers the business.
Corollary 5.a: Security is as much about making functional code secure as it is about adding security controls.
We often spend a lot of time focusing on security mechanisms or security code when the greatest risks lie in the core way a system works.
In the new security frontier we need to protect network, information and business assets by empowering people through security education.
Herbert Thompson is chief security strategist for People Security, a provider of software security education. He can be reached at [email protected]