As Richard Stallman constantly reminds us, there are strong moral grounds for adopting free software. But whether or not you accept that line of argument, there is another extremely good reason for taking this route: open source is better.
Not just better as in runs faster, or uses less memory, or costs less to run, but inherently and inevitably better. That's why IBM adopted GNU/Linux for its entire line of hardware a decade ago, and why more and more savvy companies are taking the open way today.
Here's another fairly dramatic hint that the future is open:
We started a project at Facebook a little over a year ago with a pretty big goal: to build one of the most efficient computing infrastructures at the lowest possible cost.
We decided to honor our hacker roots and challenge convention by custom designing and building our software, servers and data centers from the ground up.
The result is a data center full of vanity-free servers which is 38% more efficient and 24% less expensive to build and run than other state-of-the-art data centers.
But we didn't want to keep it all for ourselves. Instead, we decided to collaborate with the entire industry and create the Open Compute Project, to share these technologies as they evolve.
In a way, Facebook's move is not so surprising. After all, the company is built on open source software from top to bottom, as demonstrated by the page listing the many projects it uses, contributes to and has started.
But Facebook not only has pretty broad experience of using open source, it also has a deep understanding of why it makes sense to go open:
Inspired by the model of open source software, we want to share the innovations in our data center for the entire industry to use and improve upon. Today we're also announcing the formation of the Open Compute Project, an industry-wide initiative to share specifications and best practices for creating the most energy efficient and economical data centers.
As a first step, we are publishing specifications and mechanical designs for the hardware used in our data center, including motherboards, power supply, server chassis, server rack, and battery cabinets. In addition, we're sharing our data center electrical and mechanical construction specifications. This technology enabled the Prineville data center to achieve an initial power usage effectiveness (PUE) ratio of 1.07, compared with an average of 1.5 for our existing facilities.
Everyone has full access to these specifications, which are available at http://opencompute.org/. We want you to tell us where we didn't get it right and suggest how we could improve. And opening the technology means the community will make advances that we wouldn't have discovered if we had kept it secret.
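Those PUE figures are worth unpacking. Power usage effectiveness is simply total facility power divided by the power that actually reaches the IT equipment, so a PUE of 1.0 would mean zero overhead. The sketch below illustrates what the quoted ratios imply for a hypothetical 1,000 kW server load (the load figure is invented for illustration; only the 1.07 and 1.5 ratios come from the announcement):

```python
# PUE (power usage effectiveness) = total facility power / IT equipment power.
# Anything above 1.0 is overhead: cooling, power conversion, lighting, etc.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Standard PUE ratio as defined by The Green Grid."""
    return total_facility_kw / it_equipment_kw

it_load_kw = 1000.0  # hypothetical IT load, for illustration only

# Facility draw implied by the two ratios quoted above:
conventional_kw = it_load_kw * 1.5    # 1500 kW total at PUE 1.5
prineville_kw   = it_load_kw * 1.07   # 1070 kW total at PUE 1.07

print(pue(conventional_kw, it_load_kw))  # 1.5
print(pue(prineville_kw, it_load_kw))    # 1.07

# Overhead (non-IT) power at each ratio:
print(conventional_kw - it_load_kw)  # 500.0 kW wasted on overhead
print(prineville_kw - it_load_kw)    # 70.0 kW wasted on overhead
```

In other words, at the same server load the overhead shrinks from 500 kW to 70 kW, which is why a drop from 1.5 to 1.07 is such a dramatic improvement.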
This is the key point. Opening up a technology allows others to contribute innovations that individual companies might never have devised on their own, or at least not nearly so quickly. Because the benefits are shared, the task of pushing a project forward is divided among the participants: the more people who use and contribute, the faster and deeper the development.
More and more leading companies are coming to realise that their competitive advantage no longer comes from infrastructural elements like custom software or hand-tuned hardware. These are now commodity items, and the most efficient way to get the best price and performance is to collaborate with as many other companies with similar needs as possible, and to use the resulting commodity components to underpin a business strategy that operates at higher levels of the IT stack.
Facebook's latest project is important because it signals that commoditisation is moving into the field of hardware, now that it is so well advanced in the software field. Expect to see other, similar moves for general hardware platforms – routers would be an obvious example. Or data storage:
Facebook isn't stopping its hardware-tinkering efforts with the new data centers and servers it unveiled yesterday as part of its Open Compute Project. Next on its list of problems to solve is improved data storage.
The goals for the data storage project will be the same as those that guided the data center and server innovations, Frankovsky says: to be able to operate reliably, at a large scale, while increasing the efficiency of energy use and decreasing the cost. And the company will take the same open approach as on the previous projects. Its goal in the end isn't to become a hardware company but instead to spur innovation in areas critical to the company's core business.
The move is also further confirmation that openness, sharing and collaboration are the future well beyond the realm of software, where they are fast becoming the norm. Now, if we could only get the content industries to understand this too...