At the most basic level, cryptography is the science of using maths to protect information. Paul Kocher, president and chief scientist at Cryptography Research, has made a career out of using these algorithms to protect companies from fraud and piracy. We talked with him about cryptography's past, present, and future, and how it will continue to fit into the changing security landscape.
How has cryptography evolved over the years?
More than a hundred years ago it was almost exclusively the domain of governments. The largest wide-scale user of cryptography was the Catholic Church. In order to manage its empire, the church needed to be able to communicate with remote outposts and ensure those communications were both secret and unmodified, so cryptography was an essential piece of that. In wartime it became critical from a government perspective. The paths by which information was physically transported, whether telegraph or radio, were inherently vulnerable to capture and eavesdropping, so cryptography was very important. In the 1970s, banks became significant users because they realised they had large networks and little ability to physically secure communication channels. Today, the trend is toward much broader use of cryptography. It's showing up in virtually any sort of electronic device that has to process information with security attached to it. You'd be hard-pressed to think of any gadget these days that processes information yet doesn't use cryptography to some degree.
What are some of the potential future applications?
In 10 years, cryptography will be cheap enough to use to protect brand identity. For example, toothpaste coming from China might bear the brand of a company that didn't make it. There's a huge incentive for that brand to attach a chip to its product that proves it's genuine and not an impostor. I also think it's inevitable that we will see chips in every ID card or credit card. They'll all become cryptographic devices.
What kinds of attacks are cryptosystems subject to?
The one thing you don't need to worry about with modern systems is that the algorithms will break. If you're using the Advanced Encryption Standard or the RSA algorithm with 1,500-bit or larger keys, those systems are incredibly unlikely to be broken by someone directing a mathematical attack against the design. Where they fail is in the implementation. If the keys to unlock the data can be accessed without a frontal assault on the algorithm, then the security can break. The number-one issue is implementation bugs: software with buffer overflows that let someone break into a machine. It doesn't matter how strong the cryptography is if someone is running malicious code in the CPU and can access the key. The problem with implementation defects is getting worse as systems become more complicated. The global trend is toward less security and easier access for those interested in tampering with data.
How can we overcome this?
On the one hand, it's just the landscape we have to deal with, but there are technology decisions that can make a dramatic difference. In my company, we deal with highly sensitive data, so we run a network with no connections to the outside world. That immediately solves a lot of problems. If you ask yourself, "Could this system be broken?", the answer is always going to be yes or maybe. No useful system is impenetrable. But if you think of it as a risk equation and ask yourself if the value delivered by this system is appropriate for the risks it introduces, and are there ways you can reduce those risks, very often you find effective techniques that don't cost very much.
Looking ahead, what will be the biggest challenges for security of cryptosystems?
We've already talked about the challenge of increasing complexity, which is making it more difficult to protect information. The second dimension to that is the problem of user education. It's pretty easy to build a security system that a perfect user could operate securely, but end users aren't necessarily consistent in doing things right. The third is an economic challenge. The people who suffer the risk and those who are in a position to pay for and deploy mitigation measures are different entities, which results in economically suboptimal spending on security. The big, nasty problems, like spam, piracy, and operating system security, are ones where the entities that control the mitigation measures don't suffer the brunt of the problem. ISPs have the largest control over spam, but the recipient incurs the cost. Similarly, if an OS security disaster affects your laptop, Microsoft isn't spending thousands of dollars to fix it; you are.
So how do we go about mitigating these risks?
In many cases we don't. When economic interests are misaligned, there are really two approaches that can solve the problem: technological changes that put control back into the hands of the entities that incur the risk, and legislative solutions. Those would include mandates that ISPs filter outgoing messages sent in high volumes, or that particular product-protection technologies be implemented. That may be inefficient, but in many cases it's the only way to handle these problems.