
Ferrari's name is soaked in history, redolent of everything to do with sports cars and motor racing. And with today's motor racing as much about data acquisition and processing as it is about spanners and oil, Ferrari's focus on keeping its data centre cool is a keen one.

A huge part of any modern Formula One team's drive for more speed is the study of the car's aerodynamic behaviour, which in practice means the discipline of computational fluid dynamics (CFD).
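Ferrari keeps its simulation code to itself, but a toy sketch conveys the shape of the workload: CFD codes discretise the airflow over a grid and iterate towards a converged solution. The sketch below is purely illustrative and far simpler than any racing code -- the grid size, boundary conditions and the potential-flow model itself are all assumptions chosen for brevity -- but the iterate-until-converged structure is the same.

```python
# Toy illustration of the kind of calculation CFD involves: Jacobi
# relaxation of the 2D Laplace equation, the simplest model of steady,
# incompressible potential flow through a duct. Real F1 CFD solves the
# full Navier-Stokes equations across millions of cells.
import numpy as np

def solve_potential(nx=80, ny=40, inlet=1.0, tol=1e-5, max_iters=20_000):
    """Relax a velocity potential on a rectangular grid.

    Flow enters at the left boundary (potential = inlet) and exits at
    the right (potential = 0); the top and bottom walls are solid.
    """
    phi = np.zeros((ny, nx))
    phi[:, 0] = inlet                      # inlet boundary condition
    for _ in range(max_iters):
        old = phi.copy()
        # Jacobi update: each interior cell becomes the mean of its
        # four neighbours -- a discrete form of Laplace's equation.
        phi[1:-1, 1:-1] = 0.25 * (old[:-2, 1:-1] + old[2:, 1:-1] +
                                  old[1:-1, :-2] + old[1:-1, 2:])
        phi[0, :] = phi[1, :]              # zero-flux top wall
        phi[-1, :] = phi[-2, :]            # zero-flux bottom wall
        phi[:, 0] = inlet                  # re-impose inlet
        phi[:, -1] = 0.0                   # outlet
        if np.abs(phi - old).max() < tol:
            break
    # The velocity field is the gradient of the potential.
    vy, vx = np.gradient(-phi)
    return phi, vx, vy

phi, vx, vy = solve_potential()
print(f"peak x-velocity: {np.abs(vx).max():.4f}")
```

A production solver replaces this kernel with the full Navier-Stokes equations over meshes of millions of cells, which is why clusters of the size Ferrari runs are needed.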

The types of problems that Ferrari is trying to solve aren't confined to motor racing, of course. Many enterprises have similar issues. Examples range from processor vendors, who need CFD to find better ways to cool their chips, to petrochemical companies wanting to streamline the flow of raw materials, to academic institutions.

But get the aerodynamics right in the ultra-competitive F1 series and it can make a huge difference to the result -- all other things being equal. So Ferrari, whose road-going sports cars are going from strength to strength even if its F1 team has not enjoyed as much success recently as it did in the previous eight years, has dedicated a whole data centre purely to improving its F1 cars' aerodynamics.

Inside the data centre lives a supercomputer consisting of AMD Opteron-based Linux server clusters running CFD simulations, plus related storage systems from EMC and HP. The centre, which sits at one edge of the company's main car production site, houses a variety of systems in its clusters, including HP ProLiant DL360s and DL380s, IBM xSeries servers, and Sun Fire X4100s and X4200s.

We spoke to data centre manager Massimo Martelli with the sound of an F1 car hurtling round Ferrari's private test track as background noise. Martelli said his cool room contained some 1,000 CPUs, although when we visited he was expecting delivery of a further cluster the next day. And the system is used not just for static calculations: it also processes real-time data during the race and feeds the results back to the crew on the pit wall.

Given the intensely secretive nature of F1 teams when it comes to technology, Martelli was understandably reluctant to divulge many -- indeed any -- details of this process. What he was keen to show, however, was his new APC InfraStruXure cooling system.

The philosophy behind APC's rack-mounted cooling system is that bringing cooling as close as possible to the heat source is more efficient. Placing the cooling units within the row of racks is claimed to improve availability, and the system is designed to cool power-dense servers such as blades. The result, says APC, is a lower total cost of ownership and improved adaptability. The company reckons its system provides up to ten times the cooling of traditional room-level approaches, which it says manage only 2-3kW per rack.

Ferrari's system consists of the company's NetworkAir In-Row air conditioner and a Hot Aisle Containment System, a way of containing the heat in the hot aisle and venting it in a controlled manner that evens out hotspots, according to APC. Each installed rack in the Ferrari data centre generates a 10kW heat load, although the InfraStruXure system can handle up to 20kW per rack, according to APC.
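Those figures are worth a quick sanity check. The back-of-envelope sketch below uses only the per-rack numbers quoted above, and shows both the headroom the in-row system gives Ferrari and how far a 10kW rack exceeds what conventional room cooling is said to handle:

```python
# Quick sanity check on the cooling figures quoted in the article.
TRADITIONAL_KW_PER_RACK = 3.0   # top of the 2-3kW range quoted for room cooling
INROW_CAPACITY_KW = 20.0        # per-rack capacity APC quotes for InfraStruXure
FERRARI_LOAD_KW = 10.0          # heat load per installed rack at Ferrari

headroom = INROW_CAPACITY_KW - FERRARI_LOAD_KW
print(f"in-row headroom per rack: {headroom:.0f}kW "
      f"({INROW_CAPACITY_KW / FERRARI_LOAD_KW:.0f}x the current load)")

shortfall = FERRARI_LOAD_KW - TRADITIONAL_KW_PER_RACK
print(f"shortfall under traditional cooling: {shortfall:.0f}kW per rack")
```

On these numbers a single Ferrari rack dissipates more than three times what room-level cooling is said to remove, which is the gap in-row cooling is designed to close.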

The result of the system's installation, according to Martelli, is that he gets more space to add systems and racks as and when he wants without worrying about where the heat will be vented. This is because, according to APC, the system adapts dynamically to the demand for cooling.

APC reckons the system can save up to 30 per cent on capital costs compared to a standard fixed cooling system. It also only provides cooling as required, rather than being full on all the time, thus saving on opex.
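The 30 per cent claim above concerns capital cost; the opex argument is separate, and rests on cooling that tracks demand rather than running flat out. The sketch below illustrates that argument only: the rack count, the utilisation profile and the one-to-one mapping of heat load to cooling energy (efficiency factors cancel in the ratio) are assumptions invented for the arithmetic, not figures from APC or Ferrari.

```python
# Illustrative-only comparison of a cooling plant running flat out
# against one that tracks demand. All load figures are assumptions.
import random

RACKS = 30                  # assumed rack count, not Ferrari's
CAPACITY_KW = 20.0          # per-rack cooling capacity (from the article)
HOURS = 24 * 365

random.seed(42)
# Assumed hourly utilisation: clusters busy roughly half the time.
hourly_load_kw = [RACKS * CAPACITY_KW * random.uniform(0.3, 0.7)
                  for _ in range(HOURS)]

fixed_kwh = RACKS * CAPACITY_KW * HOURS   # always-on cooling at full capacity
variable_kwh = sum(hourly_load_kw)        # cooling that tracks demand

saving = 1 - variable_kwh / fixed_kwh
print(f"fixed cooling:    {fixed_kwh:,.0f} kWh/year")
print(f"demand-following: {variable_kwh:,.0f} kWh/year")
print(f"saving: {saving:.0%}")
```

With a data centre loaded at half capacity on average, demand-following cooling roughly halves the cooling energy in this model; the real saving depends entirely on the actual load profile.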

Ferrari is a fiercely competitive team and has raced in Grands Prix continuously since its first post-war outing in 1947. It has been at or near the top of the sport throughout the last decade and intends to keep spending whatever it takes to stay there in this fast-changing sport -- a policy that extends to its data centre infrastructure.