Nvidia supercharges deep learning at GTC 2018

Nvidia has announced a series of new products to further boost the company's deep learning credentials, including a single-server beast called the DGX-2, which can deliver two petaflops of computational power for AI workloads.

Speaking at Nvidia's 10th annual GPU Technology Conference (GTC) in Silicon Valley, founder and CEO Jensen Huang also announced that the company had doubled the memory of the Tesla V100 GPU to 32GB in order to tackle the most memory-intensive deep learning and high-performance computing workloads.

Huang highlighted the accelerating growth of GPU computing, which he said continues to soar, and argued that keeping pace with it requires ever larger and more powerful computers.

"We're at the tipping point and the number of people that are jumping on top of GPU computing is really growing and growing at the best potential rate," said Huang. "We're almost up to 1 million GPU developers of 10 times in the last five years.

"If you take a look at the last five years, there's no question now that GPU-Accelerated computing is the right answer."

The Tesla V100, which is already widely adopted by researchers around the world, is now available across Nvidia's DGX deep learning system portfolio.

The company also announced NVSwitch, a new GPU interconnect fabric that enables up to 16 Tesla V100 GPUs to communicate simultaneously at 2.4 terabytes per second within a single server node. Nvidia says this addresses the PCIe bandwidth bottleneck in multi-GPU systems and provides a faster, more scalable interconnect.

Huang said this will enable developers to design systems with more GPUs 'hyperconnected' to each other.
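For readers who work with CUDA, the sketch below is not part of Nvidia's NVSwitch announcement; it simply uses the standard CUDA runtime peer-to-peer calls to illustrate the kind of direct GPU-to-GPU communication that an interconnect fabric like NVSwitch is meant to accelerate. The same code runs on a PCIe-only machine, but there the transfers are capped by PCIe bandwidth.

```
// Minimal sketch: query and enable peer-to-peer access between GPU pairs
// using the standard CUDA runtime API (not an NVSwitch-specific interface).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    printf("GPUs visible: %d\n", deviceCount);

    // Check which GPU pairs can talk to each other directly. The API is the
    // same whether the GPUs are linked over PCIe or a faster fabric; the
    // fabric determines how quickly the resulting transfers run.
    for (int src = 0; src < deviceCount; ++src) {
        for (int dst = 0; dst < deviceCount; ++dst) {
            if (src == dst) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, src, dst);
            if (canAccess) {
                cudaSetDevice(src);
                cudaDeviceEnablePeerAccess(dst, 0);  // flags must be 0
                printf("GPU %d -> GPU %d: peer access enabled\n", src, dst);
            }
        }
    }
    return 0;
}
```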

The CEO also unveiled the Nvidia DGX-2, which he described as the first single server able to deliver two petaflops of computational power, designed to tackle the most complex AI challenges. Nvidia says the DGX-2 packs the processing power of 300 servers occupying 15 racks of data centre space, comes with 30 terabytes of storage and is 10 times faster than its predecessor, the DGX-1, launched just six months earlier. It will be generally available in the third quarter of 2018 for $399,000, although some data scientists are already using the product.
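As a rough back-of-the-envelope check on the two-petaflop figure, the snippet below assumes the DGX-2 combines 16 Tesla V100 GPUs rated at roughly 125 teraflops of tensor performance each; those figures come from Nvidia's published V100 specifications rather than from this announcement.

```
// Back-of-the-envelope aggregate: 16 V100s x ~125 TFLOPS of tensor
// performance each comes to roughly 2 petaflops (assumed figures).
#include <cstdio>

int main() {
    const int    gpus            = 16;     // V100s linked via NVSwitch
    const double tflops_per_gpu  = 125.0;  // approximate tensor-core peak
    const double total_petaflops = gpus * tflops_per_gpu / 1000.0;
    printf("Aggregate tensor performance: %.1f petaflops\n", total_petaflops);
    return 0;
}
```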

"Many of these advances stand on Nvidia's deep learning platform, which has quickly become the world's standard," Huang added. "We are dramatically enhancing our platform's performance at a pace far exceeding Moore's law, enabling breakthroughs that will help revolutionise healthcare, transportation, science exploration and countless other areas."

"Recommended For You"

Graphics processing units boost Appro server cluster US funds effort to regain supercomputing crown from China