Advanced Micro Devices is cutting costs and reducing the number of its data centres worldwide with the help of the cloud and hardware upgrades, an AMD executive said Thursday.
AMD will reduce its data centres to three by 2014, with two in North America and one in Asia, said Farid Dana, director of IT services at AMD, in an interview. AMD currently has 12 data centres, down from 18 in mid-2009, when the consolidation effort began.
The company's goal is to cut costs by shifting more tasks to the cloud, and by opening data centres in locations that have lower power costs and lower taxes, Dana said. AMD is moving away from high-cost-per-watt places like Boston and California and establishing data centres in places like Suwanee, Georgia.
"We've gained some tax efficiency from the location," Dana said. "One of the factors is also disaster recovery. That's why we have three data centres and not one, and we are geographically dispersed."
Dana has a list of 40 physical factors to take into account when deciding where to locate a data centre, including proximity to transit, weather, water sources and available electricity. Choices have to be made carefully; something as simple as a nearby rail line could cause vibrations that harm server operations, he said.
Reducing network latency
But in downsizing, Dana wants to ensure AMD's engineers have access to the resources needed to design chips. AMD is trying to consolidate servers and reduce expenses such as electric bills through higher utilisation rates. The company is also reducing network latency so engineers get quicker access to servers.
AMD is operating a private cloud that makes key EDA (electronic design automation) applications accessible to engineers worldwide. The company's engineering tasks are executed in real time across a virtual grid of servers with 120,000 CPU cores. AMD tries to maintain close to a 100 percent utilisation rate, and virtualisation tools make all the cores appear as one "giant number-crunching machine," Dana said.
"We want to do compute anywhere -- it doesn't matter where the engineer sits as long as they get the performance they need," Dana said.
Putting applications in the cloud consolidates computing resources and centralises the computing infrastructure, Dana said. Data is more secure because it is stored in fewer, centralised locations.
Many companies, such as Amazon, offer cloud services, but AMD kept its cloud internal because it wanted stronger control over the usage of its EDA tools. The company has deployed tools to track where resources need to be assigned, and cloud usage shifts by region as employees around the world take on different tasks, Dana said.
"It's not cost-effective to do it externally," Dana said.
Socket compatibility is key
The closure of data centres has yielded huge savings over the past few years, Dana said. The company retires old data centres as contracts end and hardware ages out, replacing them with new equipment, which requires roughly the same investment as upgrading existing data centres.
Idle CPU cycles cost the company, and server upgrades have netted AMD millions in savings, Dana said. Socket compatibility provides a cost-effective way to upgrade to faster and more power-efficient chips without buying extra hardware.
"For socket upgrades you have to do your homework," Dana said. Upgrades could be done to cut costs or add performance, or to test out new chips, he said.
But as servers move to the next generation, it's better to change the motherboard, Dana said. Hardware depreciation could range from three to five years.
"It's more cost-effective to change the board than to put the processor on top of it," Dana said.