As was to be expected, Nvidia made a series of product announcements at its annual GPU Technology Conference (GTC) in San Jose this week, with the focus continuing to be on giving customers and developers the hardware and services they need to leverage artificial intelligence.
Speaking on stage at the San Jose State University Centre in Silicon Valley this week, CEO and founder Jensen Huang said: “The accelerated computing approach that we pioneered is really taking off. If you look at what we achieved last year, the momentum is absolutely clear.”
Read next: What we learned at Nvidia GTC 2018
From its focus on deep learning, to the numerous efforts being made to combine AI with accelerated computing, the vendor went on to signal its intent to enhance the data centre space.
As AI continues to be front of mind for enterprises, it is natural to see multiple advanced technologies moving into data centres around the world.
Nvidia has been heavily expanding its data centre focus this year, and the chip maker made this very clear with its $6.9 billion acquisition of Israel-based networking chip supplier Mellanox last week.
The company also unveiled CUDA-X AI this week, a set of what it calls 'end-to-end acceleration libraries' for data science.
CUDA-X AI integrates into popular deep learning frameworks such as TensorFlow, PyTorch and MXNet with automatic optimisation. It also includes cuDNN, cuML and Nvidia’s TensorRT technologies for accelerating machine learning algorithms and for running trained models for inference.
“It turns out that building a great chip is a nice beginning,” Huang said. “But it’s useless until the world’s developers and users can take advantage of it. That’s why it’s so important for us to work with an ecosystem.
“Nvidia is an open platform company. We create all these libraries in a way so that it’s software-defined and integrateable into a modern system.”
Public cloud partnerships
The product is already being deployed by cloud service providers, including AWS, Google Cloud and Microsoft Azure.
In fact, at the show AWS and Nvidia highlighted the growth of their seven-year partnership as the two vendors announced that AWS will now begin to use Nvidia’s T4 data centre chips.
According to Nvidia, AWS will deploy the T4 Tensor Core GPUs through its Elastic Compute Cloud (EC2) G4 instances starting from April.
The new G4 instances are expected to give AWS customers an efficient platform to deploy a wide range of AI services, with the ability to pair all instances with Nvidia’s GPU acceleration software including CUDA-X AI.
“With our new T4-based G4 instances, we’re making it even easier and more cost-effective for customers to accelerate their machine learning inference and graphics-intensive applications,” said Matt Garman, VP of compute services at AWS, in an announcement.
Nvidia’s T4 will also be supported by Amazon Elastic Container Service for Kubernetes, letting customers deploy it for use cases such as AI inference, real-time ray tracing and simulation.
The firm also confirmed that Google Cloud will adopt the T4 chips in its data centres, with hopes that Baidu, Tencent and Alibaba will follow in the future.
“In the past, I think enterprises were really turned off by AI because they felt they had to buy a supercomputer and that’s a challenging endeavour. To them it’s a space shuttle,” Ian Buck, VP and GM of accelerated computing at Nvidia told Computerworld UK.
“With the T4 in a Dell R740 or Cisco UCS - these are the mainstream servers that IT is buying - they have complete access to it and all the machine learning offerings without having to change their server.”
T4-based servers are also available for data scientists to run projects much faster. “They don’t need to build, customise or specialise for different things - certainly some are welcome to - but we wanted to build that universal server on one platform, and that’s what CUDA-X is about, that’s what the T4 universal servers are about,” Buck added.
Huang's goal is to get makers and developers to build on the Nvidia platform, and the release of the new Jetson Nano should prove a compelling part of that strategy. The pocket-sized AI computer delivers up to 472 gigaflops of compute performance and retails at just $99. Nvidia claims it supports all modern AI workloads while consuming only five watts of power, running the same software stack that powers some of the fastest supercomputers in the world.
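Taken at face value, those quoted figures imply a striking efficiency ratio. A quick back-of-envelope check (assuming, as is typical for Nvidia's embedded specs, that the 472-gigaflop figure refers to reduced-precision throughput):

```python
# Back-of-envelope efficiency figure from Nvidia's quoted Jetson Nano specs.
# The 472 GFLOPS figure is assumed here to be reduced-precision peak throughput.
peak_gflops = 472
power_watts = 5
gflops_per_watt = peak_gflops / power_watts
print(f"{gflops_per_watt:.1f} GFLOPS per watt")  # prints "94.4 GFLOPS per watt"
```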
Comparing it to the Raspberry Pi, Huang said: “Here’s the amazing thing about this little thing. If you use Raspberry Pi and you just don’t have enough compute performance, you just get yourself one of these, and it runs the entire CUDA-X AI stack.”
Speaking at a press Q&A, Deepu Talla, VP and GM of autonomous machines at Nvidia, said: “We have seen tremendous adoption in the industry and also in the developer community for our Jetson line of products.
“We have over 200,000 active developers on Jetson and more than a couple of thousand customers that are actually building some sort of product with Jetson, and they have been deployed in lots of different vertical markets, such as industrial manufacturing robots.”
The Jetson family of products has also been deployed by a number of US- and China-based last-mile delivery and logistics firms. Its adoption also extends to Cisco’s WebEx devices for video analytics, Talla explained.
Talking through the Nano's features, Talla pointed first to its sensor handling: support for a range of sensors, from radar to audio and temperature sensors, and the high resolution of those sensors.
“If you are doing anomaly detection in a factory line, whether it’s food inspection or PCB inspection, you want to run your neural network on 1080p resolution data, not on a 224 by 224 or 300 by 300 image, because by the time you bring it down to that resolution, you lose a lot of the accuracy,” he explained.
“So you need to process high resolution, and for each sensor we are finding that you run many neural networks. For example, if you have an image sensor, you do object detection first, and then once you detect objects you classify them using further neural networks. So that’s the first characteristic: multiple sensors, high-resolution sensors, multiple neural networks per sensor,” Talla added.
Nvidia also announced that its Isaac SDK robotics developer toolkit is now freely available, giving developers access to Isaac Gems, the Isaac Robot Engine and Isaac Sim.
The platform is expected to help manufacturers and startups save time in the development process, making the integration of AI and machine learning a lot easier.
As major IoT devices are expected to be connected to the cloud, Nvidia has made sure that users can do this easily, announcing that AWS IoT Greengrass is now available on all Jetson products and that AWS RoboMaker can be deployed on the Jetson Nano.
In addition, Nvidia announced that its DeepStream platform for video analytics is now available on Microsoft Azure.
“Heavy-duty video analytics is done on premise, on either a Jetson-based device or a Tesla-based server, and then metadata goes into the Azure cloud - that whole end-to-end workflow is made possible now with Microsoft support,” Talla said.
Huang closed his three-hour keynote by announcing that Nvidia has extended its collaboration with Toyota Research Institute-Advanced Development in Japan and the Toyota Research Institute in the US.
Toyota will look to leverage Nvidia's simulation and testing technology for autonomous vehicles as it works to bring self-driving cars to the road.
The vendor also announced Drive Constellation, a data centre solution comprising two servers - one running Drive Sim and the other Drive Constellation Vehicle - combined with Nvidia’s GPUs and Drive AGX Pegasus AI technology to enable "bit-accurate, hardware-in-the-loop testing", as Nvidia puts it.
It is designed to provide users with the choice of an augmented or fully-virtual reality view of the surroundings, with the ability to simulate multiple scenarios, such as dangerous situations and routine driving considerations, like speed limits and the identification of warnings.
Read more: In the wake of the Uber accident, Nvidia is turning to simulations to test autonomous cars
Nvidia also expanded on its aim to prevent self-driving car collisions with the launch of its Safety Force Field driving policy, built to keep vehicles out of likely accidents.
“It turns out that autonomous vehicles are one of the greatest computational challenges,” Huang said. “We have a computational method that detects the surrounding cars and predicts their natural path to avoid traffic.”
“Our company is deep in the middle of the autonomous vehicle revolution, but we’re not building a self-driving car; we’re creating a system, infrastructure and the design capability necessary for the whole industry to build self-driving cars,” he concluded.
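Nvidia did not detail how Safety Force Field predicts the paths of surrounding cars, but the basic idea Huang describes can be illustrated with a deliberately simple constant-velocity model. Everything below - the model, the 2-metre safety gap, the numbers - is an assumption for illustration only, not Nvidia's formulation:

```python
# Minimal illustration (not Nvidia's algorithm) of predicting another
# vehicle's path under a constant-velocity assumption and checking whether
# two predicted paths ever come dangerously close at the same time step.

def predict_path(pos, vel, horizon_s, dt=0.1):
    # Extrapolate an (x, y) position forward in time at constant velocity,
    # sampling every dt seconds over the given horizon.
    steps = int(horizon_s / dt)
    return [(pos[0] + vel[0] * dt * i, pos[1] + vel[1] * dt * i)
            for i in range(steps + 1)]

def paths_conflict(path_a, path_b, safe_gap_m=2.0):
    # Conflict if the two vehicles are within safe_gap_m at the same time step.
    return any(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 < safe_gap_m
               for (ax, ay), (bx, by) in zip(path_a, path_b))

ego = predict_path((0.0, 0.0), (10.0, 0.0), horizon_s=3.0)    # our car, 10 m/s
other = predict_path((30.0, 0.0), (0.0, 0.0), horizon_s=3.0)  # stopped car 30 m ahead
print(paths_conflict(ego, other))  # prints "True": we close to within 2 m
```

A real system would use richer motion models, uncertainty estimates and actuation limits; the point here is only the predict-then-check structure.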