A newly completed supercomputer facility at the University of Toronto's SciNet Consortium is expected to place in the top 20 of the Top500 List of the world's fastest supercomputers.
With a peak processing power of more than 300 trillion calculations per second, the supercomputer will be used for research in the areas of aerospace, astrophysics, bioinformatics, chemical physics, climate change prediction, medical imaging and the global ATLAS project.
"This shows Canada definitely is on the world stage when it comes to doing high-performance-based research," said Chris Loken, chief technical officer for SciNet. "The 30,000 cores is a huge amount of computer power, and will enable cutting edge research in a whole variety of disciplines."
Loken said that there could very soon be about 50 projects using the facility.
The supercomputer runs on IBM System x iDataPlex servers using 30,240 Intel Xeon 5500 processors. Intel launched its Xeon 5500 server processors last March. According to Chris Pratt, strategic initiatives executive with IBM, the key to iDataPlex's efficiency is that it's designed to be "cognizant of the environment in which it's going to be deployed."
By that, Pratt is referring to the double-density design of racks where instead of the traditional thin and deep format, machines are turned 90 degrees for a wide and shallow setup so less passing air is required for cooling.
"We're able to remove more than 100 per cent of the heat generated by the rack," he said.
Pratt also said "dynamic stateless" provisioning means machines are loaded with just the software required for a specific job before going "back to bare metal." Without it, he said, "it's like a bus has 100 seats in it, and if you only put two people in it, well guess what, you are carrying 98 empty seats."
Any cores not in use for about nine minutes are electrically powered off, though Pratt noted that will rarely happen given the researchers' insatiable appetite for compute power.
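The idle power-off policy described above amounts to a simple timeout check. The sketch below is a hypothetical illustration of the idea, not IBM's actual management code; the function name and nine-minute threshold are taken from the article, everything else is assumed:

```python
import time

# Per the article, cores idle for roughly nine minutes are powered off.
IDLE_TIMEOUT_S = 9 * 60


def cores_to_power_off(last_active: dict[int, float], now: float) -> list[int]:
    """Return the IDs of cores whose last activity is older than the idle timeout."""
    return [core for core, t in last_active.items() if now - t >= IDLE_TIMEOUT_S]


# Example: core 0 was active 30 seconds ago, core 1 has been idle for ten minutes.
now = time.time()
print(cores_to_power_off({0: now - 30, 1: now - 600}, now))  # [1]
```

In practice such a policy would be driven by the cluster's job scheduler rather than wall-clock polling, but the decision rule is the same.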
Besides compute power, energy efficiency was critical to SciNet for two reasons, said Loken. One was the limited operating budget, and the other, the fact that the supercomputer would be running global climate change modelling. "And we don't want to be contributing to that problem," said Loken. "That would be ironic and unfortunate."
The request for proposal dictated two clusters, a lot of storage, a datacentre in which to house the supercomputer, and that the next five years' power and maintenance costs would have to fit in the operating budget. "That put the onus on the vendors to do the optimization exercise," said Loken.
The power usage effectiveness (PUE) of a datacentre, calculated by dividing the total power drawn by the facility by the power actually used to run the computers, is commonly around 1.5, meaning an extra 50 per cent on top of the computing load goes to cooling and other overhead, explained Loken. SciNet's facility runs at 1.16, he said.
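The PUE figures quoted above follow directly from the ratio's definition. A minimal worked example, with the kilowatt values chosen purely for illustration:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_equipment_kw


# A typical datacentre: 1,500 kW drawn to run a 1,000 kW computing load.
print(pue(1500.0, 1000.0))  # 1.5 -> 50% overhead for cooling and power delivery

# SciNet's figure: only 16% overhead beyond the computing load.
print(pue(1160.0, 1000.0))  # 1.16
```

A PUE of exactly 1.0 would mean every watt entering the building reaches the computers, which is the limit these designs are chasing.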
The Top500 list, published twice a year, will be announced on June 23. The list gives a sense of how powerful a machine is, but what matters is not raw speed alone but the capabilities of the system as a whole, said Pratt. Using the example of an automobile, Pratt explained that it's not just how fast the vehicle moves "but how well does it take a corner, how good is the gas mileage, how safe is it?"
"It's easy to get wrapped up in the glamour of the Top500 position," said Pratt.
Given the rapidity at which technology evolves, he said by November when the list is updated again, SciNet's supercomputer "will be way back down the list again ... that's business as usual."
Loken said the equipment and datacentre are designed to be "scalable, flexible, upgradable" and SciNet hopes to upgrade in the future if funding is available.