NVIDIA has announced a new AI computing platform that the company claims will make the factory production of fully autonomous vehicles possible.
Codenamed Pegasus, the system compresses four AI processors into a device small enough to carry, which the company says meets all the computational requirements of a self-driving car. NVIDIA initially plans to harness this power to create a new class of automated taxis.
"The Pegasus is the world’s first computer designed for production deployment of robotaxis," NVIDIA's founder and CEO Jensen Huang said at GTC Europe 2017 in Munich, the home of BMW.
Driverless cars are normally categorised into six levels of sophistication, from zero to five.
Level-zero vehicles rely on the full human control that’s been used since cars first became available to the public more than a century ago. Level one adds assistance for a single function, such as acceleration or steering, while the person behind the wheel remains in charge. Level two combines several automated functions, such as steering and lane-centering, though the driver must still monitor the road.
Level three means the system can monitor the surrounding environment and make certain decisions entirely independently, for example, when to overtake. Level-four vehicles can complete an entire journey without human intervention, but only within a limited domain.
But level five is the ultimate goal of a fully autonomous vehicle without steering wheels, pedals or mirrors. Nvidia claims Pegasus will make this objective a reality.
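The six levels described above can be summarised as a simple taxonomy. The sketch below is illustrative only; the enum and helper names are the author's own, not part of any standard library:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """The six levels of driving automation, as summarised above."""
    NO_AUTOMATION = 0           # full human control
    DRIVER_ASSISTANCE = 1       # one assisted function, driver in charge
    PARTIAL_AUTOMATION = 2      # combined functions, driver monitors the road
    CONDITIONAL_AUTOMATION = 3  # system monitors environment, driver on standby
    HIGH_AUTOMATION = 4         # full automation within a limited domain
    FULL_AUTOMATION = 5         # no steering wheel, pedals or mirrors needed

def driver_required(level: AutonomyLevel) -> bool:
    # Below level four, a human must be ready to take over.
    return level < AutonomyLevel.HIGH_AUTOMATION
```

Pegasus is aimed squarely at the top of this scale, where `driver_required` is false.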
These vehicles require enormous computational power.
A vast number of radars, lidars, sensors and high-resolution, 360-degree surround cameras are needed to accurately track surroundings, localise the vehicle and plan and control the journey. Together they generate a quantity of information equivalent to what a small data centre handles.
Typically this means an autonomous vehicle has a boot stacked to the brim with racks of computers containing server-class GPUs that run algorithms to guide them on their journeys. Pegasus shrinks all this equipment into a device the size of a licence plate.
Pegasus delivers computing power of 320 trillion operations per second, roughly that of a 100-server data centre and more than 10 times that of its predecessor, the Nvidia Drive PX 2, in a package compact enough to make mass production feasible.
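A quick back-of-the-envelope check shows how these figures hang together. The predecessor's 24-TOPS rating below is an assumption based on the commonly cited figure for the Drive PX 2; the article itself gives no number for it:

```python
# Sanity-checking the quoted performance claims.
PEGASUS_OPS = 320e12       # 320 trillion operations per second
PREDECESSOR_OPS = 24e12    # assumed rating of the earlier Drive PX system
SERVERS_EQUIVALENT = 100   # "a 100-server data centre"

per_server = PEGASUS_OPS / SERVERS_EQUIVALENT  # implied per-server throughput
speedup = PEGASUS_OPS / PREDECESSOR_OPS        # consistent with "more than 10 times"

print(f"{per_server:.2e} ops/s per server, {speedup:.1f}x the predecessor")
```

Under that assumption the generational jump works out to roughly 13x, comfortably clearing the "more than 10 times" claim.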
"Basically we’re packing a supercomputing data centre into your trunk," said Huang, clad in his trademark black leather jacket, which he reportedly replaces each year.
When will Pegasus be on the road?
Nvidia plans to become the leading provider of a complete driving platform for the autonomous vehicle industry. This platform will consist of Pegasus providing the computational performance, Nvidia's DriveOS operating system processing the information and its DriveWorks SDK allowing car companies to integrate their own autonomous vehicle software.
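The three-layer split described here (Pegasus hardware at the bottom, DriveOS in the middle, and each car maker's own software on top) can be sketched as a stack. Every class and method name below is a hypothetical illustration, not Nvidia's actual API:

```python
# Illustrative sketch of the three-layer platform; all names are invented.
class PegasusHardware:
    """Compute layer: ingests raw sensor streams."""
    def capture(self):
        return {"camera": "frame", "lidar": "point_cloud", "radar": "returns"}

class DriveOS:
    """OS layer: fuses raw streams into a world model."""
    def __init__(self, hardware):
        self.hardware = hardware

    def perceive(self):
        raw = self.hardware.capture()
        return {"objects": list(raw), "localised": True}

class PartnerAutonomyApp:
    """Car maker's own driving software, plugged in on top via the SDK."""
    def __init__(self, os):
        self.os = os

    def plan(self):
        world = self.os.perceive()
        return "overtake" if world["localised"] else "hold"

app = PartnerAutonomyApp(DriveOS(PegasusHardware()))
```

The design point is that partners only write the top layer; the two layers beneath it are supplied by Nvidia.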
The company claims that its end-to-end package will power every aspect of the driving process, from automatically adjusting the driver’s seat as they approach a car to dropping them off at home.
This is a long-term plan for a product line that’s still years from mass production. Even when the technology is ready to hit the road, governments and regulatory bodies will have their own requirements that the manufacturers will need to fulfil.
Pegasus will be available to Nvidia's automotive partners including Mercedes manufacturer Daimler AG in the second half of 2018. Danny Shapiro, Nvidia's senior director of automotive, said he expects public trials of Pegasus-powered vehicles to take place in 2019.
Nvidia is currently working with more than 225 car and tech companies using Drive PX architecture, 25 of which are developing driverless 'robotaxis'.
The company announced that Deutsche Post DHL Group, the world’s largest mail and logistics company, will deploy a test fleet of autonomous delivery trucks in 2018 using Drive PX. The service will cover the entire transportation chain, including the "last mile" of deliveries, the most complex and costliest aspect of the process.
Nvidia predicts that real-life uses will initially be in closed environments such as universities and corporate campuses, before the vehicles hit motorways and then city streets around the world.
"Ultimately I think we'll see these vehicles start to become fully autonomous, where you can then have 24/7 package delivery," said Shapiro. "You order something online, and you arrange delivery when and where you want. It will then come to your house at nine o'clock at night after you come home for dinner, then you'll get a text alert saying the package is there, you'll go out, you’ll have a PIN and you’ll open up a compartment and remove your package."
Driverless cars will inevitably lead to massive job losses for delivery couriers, cabbies, and long-haul truckers, but Nvidia predictably prefers to emphasise the positives: free time for drivers on their commutes, lives saved by vehicles that are never tired, impaired or distracted, reduced congestion and extra land that would otherwise be used for car parks.
"Driverless cars will enable new ride- and car-sharing services," predicted Huang. "New types of cars will be invented, resembling offices, living rooms or hotel rooms on wheels. Travelers will simply order up the type of vehicle they want, based on their destination and activities planned along the way. The future of society will be reshaped."
His grand ambitions for Nvidia were bolstered by the performance of its stock in the aftermath of his announcement. Shares in the company rose by two percent in midday trading to reach an all-time high.
Holodeck has arrived
To help developers of autonomous vehicles design and manufacture their products, Nvidia has also opened early applications for Holodeck, its virtual reality design lab. The platform provides an interactive environment in which designers and developers can collaborate in real-time on photorealistic models of life-sized vehicles, with AI-powered simulation tools that allow intelligence to be transferred between the real and virtual worlds.
"We think of this as the design lab of the future," said David Weinstein, NVIDIA’s director of professional virtual reality. "It obeys the laws of physics, such as weight and touch and gravity."
Car companies can import models of their vehicle straight from their computer-aided design (CAD) library. Users can then touch and interact with everything in the environment while observing subtle social cues from their collaborators such as eye contact and gestures. A geometry clipping tool allows them to open up the car’s engineering from the outside, or they can sit in the driver's seat and touch the steering wheel with haptic feedback devices.
The car designers may soon share the concerns over automation held by drivers. Nvidia has developed a virtual robot training centre called Project Isaac that can teach machines to do their jobs both virtually and physically.
"We can train them in virtual reality, we can watch them perform in VR, and once they’re performing great we can take their neural nets - their virtual brains - put them in real robots, and then that robot is ready to perform in the real world," said Weinstein.
CEO Huang envisions Nvidia soon providing a level of automation that could provoke either excitement or terror: "We believe that we need to improve performance by so much more, and the reason for that is simply this: we want to make it possible for the AI to learn how to write software itself, [and] to have AI create AI itself."