A few miles from the glitzy casinos of the Las Vegas strip stands a highly secure, 407,000-square-foot building that, according to the man who operates it, is the most energy-efficient, high-density datacentre in the world.

Rob Roy, the CEO, founder and chairman of Switch Communications Group, is walking the halls of his seventh and most impressive datacentre, the SuperNAP, from which he provides co-location services to some of the world's biggest organisations. At 3 p.m., an hour into the tour, it's clear that Roy is not a man who easily runs out of energy.

"Do we have a time limit?" he asks. "This is my last thing of the day, so I'll just talk till midnight."

The tour ends by 4 p.m., eight hours early, but Roy has plenty of time to explain why the SuperNAP is a safe bet for organisations with the strictest uptime and security requirements. What makes the SuperNAP so interesting? Here are some of the highlights:

Guaranteed 100% uptime

Five nines of availability doesn't impress Roy. "We give 100% service-level agreements, guaranteed," he says. "Obviously, that's a big monetary risk if I didn't feel this design was ready for that. Our NAP 4 facility, which is our next biggest site [and also in Las Vegas], for three years has had 100% uptime."

The SuperNAP (network access point) operates its own 250 megavolt-ampere (MVA) substation, with 146 MVA of generator capacity and 84 MVA of UPS (uninterruptible power supply) capacity, topped off with 30,000 tons of redundant cooling.

"Our network for six years has never had an outage," Roy says. "Every single part of it fails. In any given month, something fails. Blades fail, Cisco routers fail, carriers fail, Sprint fails, Verizon fails, AT&T fails. But we build this stuff in such a redundant manner. The chance of something happening with us where you have an outage is really less than anywhere from a design standpoint."

Militant security

Switch began life in 2000 as a government contractor and has built seven datacentres meeting the high levels of security demanded by government and military clients. Military-trained security staff protects the SuperNAP, with at least three guards on site at any given moment.

"By the time you got into this building, you went through our [US]$2 million blast wall, live armed security and six or seven layers," Roy says. "You went through biometrics." Although customer equipment is locked inside cages, Roy says the cages are superfluous. "I really believe at this site we would never have to put a cage on anything because by the time you're here you've been so researched."

Switch has built all seven of its datacentres in Vegas because of the region's relative lack of natural disasters. The SuperNAP is six miles from the Vegas airport, far enough to stay out of the path of incoming and outgoing planes, but near enough to be in a designated no-flight zone, guaranteeing that no planes will fly over the datacentre, according to Roy.

One other nice touch helps keep the SuperNAP equipment free of dirt and particles: Visitors must frequently walk over sticky white mats, which grab dirt from the bottoms of their shoes and are common in the military, aerospace, microelectronics, pharmaceutical and hospital industries.

Focus on power and cooling

How to power and cool a datacentre efficiently is top of mind for most people in the industry today. Roy, as is his wont, claims to do so more efficiently than anyone else.

Datacentre efficiency is evaluated with PUE, or power usage effectiveness, a metric devised by the Green Grid that expresses the ratio of the total power needed to run a facility to the power consumed by the IT equipment alone.

According to a Google research paper on energy efficiency, "a PUE of 2.0 indicates that for every watt of IT power, an additional watt is consumed to cool and distribute power to the IT equipment."

A typical datacentre's rating is above 2.0, says Daniel Tautges, US president of data-centre management vendor GDCM.

Google has claimed an average PUE of 1.21 across six datacentres, and the most efficient of the six had an annual rating of 1.15 and a best quarterly rating of 1.13. These numbers are "fantastic," Tautges says. If the numbers are accurate, only about two-tenths of a watt (for every watt of IT power) is needed to cool Google's servers and distribute power to IT equipment.

Roy claims the SuperNAP's average PUE rating is 1.146, just a hair better than Google's best annual rating. He also says he expects the Green Grid to certify his contention that the SuperNAP is the world's most efficient, highest-density datacentre. (A Network World inquiry to the Green Grid found that no such effort to examine the SuperNAP's density and efficiency has been undertaken, but the industry consortium says it will soon launch a certification program that will validate PUE ratings for specific companies.)
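The arithmetic behind these comparisons is straightforward. A minimal sketch of the PUE ratio and the overhead it implies, using the ratings quoted above (the 20 MW/10 MW facility figures are illustrative, not from the article):

```python
def pue(total_facility_watts: float, it_watts: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_watts / it_watts

def overhead_per_it_watt(pue_rating: float) -> float:
    """Watts of cooling/power-distribution overhead for every watt of IT power."""
    return pue_rating - 1.0

# A hypothetical facility drawing 20 MW in total to power 10 MW of IT gear
# lands exactly at the "typical" rating the article cites:
print(pue(20_000_000, 10_000_000))   # 2.0

# Overhead implied by the ratings quoted above:
print(overhead_per_it_watt(1.21))    # Google average: ~0.21 W per IT watt
print(overhead_per_it_watt(1.146))   # SuperNAP claim: ~0.146 W per IT watt
```

A perfect facility would score 1.0: every watt drawn from the grid reaches the IT equipment, with nothing spent on cooling or distribution.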

Numbers aside, Roy lists many factors that make the SuperNAP unusually efficient and dense. Instead of using indoor computer room air conditioner (CRAC) units, the SuperNAP has 600-ton air handlers plugged into the building from the outside. Specially designed software constantly analyses the environment and automatically switches among four types of chiller system (DX cooling, chilled water, indirect evaporative and direct evaporative cooling) depending on the time of day, temperature and moisture.

"That unit adjusts moisture in the building many times faster than any CRAC unit ever could," Roy says. "We have almost 6 million cubic feet per minute of air we can push [through the datacentre]. We can change the air in this entire datacentre every two minutes."
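Roy's two airflow figures can be checked against each other. A back-of-the-envelope sketch (both numbers are from the quote above; the ceiling-height aside is an illustrative assumption):

```python
# Back-of-the-envelope check on Roy's airflow claims.
airflow_cfm = 6_000_000       # cubic feet per minute the handlers can push
air_change_minutes = 2        # claimed time to replace all air in the building

# Air volume the handlers would be cycling if both claims hold:
implied_volume_cubic_feet = airflow_cfm * air_change_minutes
print(implied_volume_cubic_feet)  # 12,000,000 cubic feet -- plausible for a
                                  # 407,000 sq ft floor plate under roughly
                                  # 30-foot ceilings (an assumed height)
```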

You might be wondering why a building with such massive cooling needs would be located in the Las Vegas desert. But dry air is the most efficient for cooling, and the temperature drops at night and during the winter, Roy says, adding that the temperature is below 68 degrees Fahrenheit in the Vegas Valley for more than half the year (5,000 hours).

Because the cooling is so efficient, Roy can deliver power to servers at densities of up to 1,500 watts per square foot. Typical datacentres built today can handle heat loads of only 350 to 500 watts per square foot, according to AMD.

According to Roy, customers at other co-location centres might be limited to 250 watts per square foot, an amount that can be exceeded with blade servers and other modern IT products.
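To see how easily blade servers blow past a 250-watt cap, consider a single rack. The wattage and footprint below are illustrative assumptions for a loaded blade rack, not figures from the article:

```python
# Hypothetical blade rack -- the wattage and footprint are illustrative
# assumptions, not figures from the article.
rack_power_watts = 12_000    # e.g. two fully loaded blade chassis at ~6 kW each
rack_footprint_sqft = 30     # the rack plus aisle space apportioned to it

density = rack_power_watts / rack_footprint_sqft   # watts per square foot
print(density)           # 400.0 -- exceeds a 250 W/sq ft co-location cap
print(density > 250)     # True: over the typical co-lo limit Roy describes
print(density <= 1500)   # True: comfortably within the SuperNAP's density
```

Under these assumptions a single blade rack is already 60% over the 250-watt limit, which is why density caps matter to co-location customers buying modern gear.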

If the SuperNAP is delivering far more power per square foot than other datacentres, wouldn't that negatively impact the facility's efficiency? Roy says no. "Efficiency has nothing to do with consumption," he says. "You can consume a lot efficiently or inefficiently, or you can consume the most in the world efficiently or inefficiently."


Bandwidth bargains

It's one thing to deliver 1,500 watts of power to each square foot of datacentre equipment, but quite another to have the communication backbone for sending data quickly and inexpensively all over the country. Switch's NAP 2 facility was built by Enron, which had planned a commodity bandwidth exchange.

"Enron was going to take their trading algorithm for energy and they were going to trade peering on the Internet," Roy says.

Out of the ruins of Enron, Switch purchased the building in 2002, turning it into a datacentre and gaining access to a connectivity hub with direct connections to the national backbones of 26 carriers.

"Switch is home to a fiber interconnection nexus like no other," the company boasts on its website. "Twenty-six national carriers are physically on-net within our datacentre campus, providing our customers with multiple connectivity options and commodity exchange pricing. We are carrier neutral and can cross-connect you to any of our on-net carriers without a local loop charge, for a fraction of what you might pay elsewhere."

Negotiating bulk rates from many carriers on behalf of customers has a huge effect on price, Roy says. "If you're only paying $80,000 a month on your co-location and I save you $100,000 on your bandwidth bill, it's free," he says.
