Governments and large science and engineering organisations have used supercomputing (also known as high-performance computing, or HPC) for decades to solve the complex equations that describe the physical world.

Supercomputers can predict how well a new car design will survive an impact before a prototype is even built, or help geologists determine the best way to extract oil from a new well.

But HPC offers real benefits in a variety of enterprises, and even the most old-fashioned companies are starting to adopt supercomputers to run core business operations.

Manufacturing facilities, for example, are using HPC to compute optimum flow conditions for moulding parts, avoiding costly trial and error. And delivery companies are using supercomputers to calculate the most efficient routes, saving time and fuel.
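To give a flavour of the sums involved, the short Python sketch below orders a handful of delivery stops with a simple nearest-neighbour rule: from wherever the van is, drive to the closest stop not yet visited. The depot, stops and coordinates are invented for illustration, and a real routing system uses far more sophisticated optimisation spread across many processors; this is just the idea in miniature.

    from math import hypot

    # Hypothetical depot and delivery stops as (x, y) grid coordinates.
    stops = {
        "depot": (0.0, 0.0),
        "A": (2.0, 3.0),
        "B": (5.0, 1.0),
        "C": (1.0, 6.0),
        "D": (4.0, 4.0),
    }

    def distance(p, q):
        # Straight-line distance between two points.
        return hypot(p[0] - q[0], p[1] - q[1])

    def nearest_neighbour_route(start="depot"):
        # Greedily visit whichever unvisited stop is closest to the current one.
        route = [start]
        remaining = set(stops) - {start}
        while remaining:
            current = stops[route[-1]]
            nxt = min(remaining, key=lambda name: distance(current, stops[name]))
            route.append(nxt)
            remaining.remove(nxt)
        return route

    print(" -> ".join(nearest_neighbour_route()))

Run on these toy coordinates it prints a sensible visiting order (depot -> A -> D -> B -> C); the appeal of HPC is doing the same kind of calculation for thousands of vehicles and millions of addresses at once.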

The shift a decade ago from custom processors to commodity ones has created an extensive body of knowledge about how to build supercomputers from commonly available parts.

For these machines, price scales with size, so smaller configurations come at a fraction of the cost. The same technology that builds supercomputers for governments can now put a 100-processor machine within reach of just about anybody with $10,000 (£50,000). And companies such as HP and Dell sell preconfigured supercomputers on the web.

Your regional delivery company may have only 256 processors now, but as new opportunities emerge to improve processes from payroll to personnel management, it will need ever-larger supercomputers. In the not-too-distant future, the world's largest computers may be as likely to serve your grocery store chain as governments and multinational energy companies.