Nvidia’s long-awaited Hopper H100 accelerators will begin shipping in OEM-built HGX systems late next month, the silicon giant said at its GPU Technology Conference (GTC) today.
However, those waiting to get their hands on Nvidia’s DGX H100 systems will have to wait until sometime in the first quarter of next year. DGX is Nvidia’s own line of workstations and servers built around its GPUs and interconnects, while HGX systems are servers manufactured by partners using Nvidia’s technology.
And while Nvidia is touting its Hopper architecture in the data center, most of the enterprise kits announced this week won’t be getting the chip giant’s flagship architecture any time soon.
In the meantime, Nvidia seems content to squeeze the full life out of its Ampere architecture.
Today, Nvidia detailed the next-gen edge AI and robotics platform it’s calling IGX.
IGX is an “all-in-one computing platform for accelerating the deployment of smart machines and medical devices in real time,” said Kimberly Powell, Nvidia’s veep of healthcare. At its core, the system is essentially an enhanced version of the Jetson AGX Orin module Nvidia announced this spring.
“IGX is a complete system with the Nvidia Orin robotics processor, Ampere Tensor Core GPU, ConnectX streaming I/O processor, a functional safety island, and a safety microcontroller unit for environments where more and more robots and humans work side by side,” she added.
In terms of performance, there is not much new here. We’re told the platform will be based on an Orin industrial system-on-module with 64GB of memory, which is comparable in performance to the AGX Orin module introduced earlier this year. That module featured 32GB of memory, an octa-core Arm Cortex-A78AE CPU, and an Ampere-based GPU.
The IGX gets an integrated ConnectX-7 NIC for high-speed connections via two 200 Gbps interfaces. The board also appears to have a full complement of M.2 storage, PCIe slots, and at least one legacy PCI slot for expansion.
Nvidia’s IGX platform targets a variety of edge AI and robotics use cases in healthcare, manufacturing, and logistics, where confidentiality or latency concerns make centralized systems impractical.
Like the AGX Orin, the system is complemented by Nvidia’s AI Enterprise software suite and the Fleet Command platform for deployment and management.
One of the first applications of the IGX platform will be Nvidia’s robotic imaging framework, Clara Holoscan.
“Nvidia Clara Holoscan is our application framework that sits on top of IGX for medical devices and imaging robotics pipelines,” said Powell.
Three medical device vendors – Activ Surgical, Moon Surgical and Proximie – plan to use IGX and Clara Holoscan to power their surgical robotics and telepresence platforms. IGX Orin developer kits are scheduled to ship early next year, with production systems to follow from ADLink, Advantech, Dedicated Computing, Kontron, MBX and Onyx, among others.
On the subject of Orin, Nvidia also introduced its Jetson Orin Nano compute modules. The Orin Nano will be available in two configurations at launch: an 8GB version delivering 40 TOPS of AI inference performance, and a stripped-down 4GB version delivering 20 TOPS.
Like previous Jetson modules, the Orin Nano uses a pin-compatible edge connector reminiscent of laptop SODIMM memory, and consumes between 5W and 15W depending on the application and SKU. Nvidia’s Jetson Orin Nano modules are due in January, starting at $199.
An OVX update
Nvidia’s OVX servers, which are designed to run the Omniverse platform, will not run on Hopper either.
The company’s second-generation visualization and digital twinning systems instead feature eight L40 GPUs. The cards are based on the company’s next-gen Ada Lovelace architecture and feature Nvidia’s third-gen ray tracing cores and fourth-gen Tensor cores.
The GPUs are accompanied by a pair of Ice Lake Intel Xeon Platinum 8362 CPUs for a total of 128 processor threads clocked at up to 3.6GHz.
The compute system is rounded out by three ConnectX-7 NICs, each providing 400 Gbps of throughput, and 16TB of NVMe storage. While the system will be available as a single node, Nvidia envisions it being deployed as part of what it calls an OVX SuperPod, which links 32 systems via the company’s 51.2 Tbps Spectrum-3 switches.
The second-generation systems will be available from Lenovo, Supermicro and Inspur starting in 2023. Nvidia plans to extend availability to other partners in the future.
Hopping aboard Drive Thor
The only piece of kit announced at GTC this week that gets Nvidia’s Hopper architecture is the Drive Thor autonomous vehicle computing system.
Drive Thor replaces Nvidia’s Atlan platform on its 2025 roadmap and promises to deliver 2,000 TOPS of inference power at launch.
“Drive Thor is packed with cutting-edge features introduced in our Grace CPU, Hopper GPU and next-gen GPU architecture,” said Danny Shapiro, Nvidia’s VP of Automotive, at a press conference. He said Drive Thor is designed to unify the litany of computer systems that power modern cars into a single centralized platform.
“Look at today’s advanced driver assistance systems — parking, driver monitoring, camera mirrors, digital instrument cluster, infotainment systems — they’re all on different computers spread across the vehicle,” he said. “In 2025, however, these functions will no longer be separate computers. Rather, Drive Thor will allow manufacturers to efficiently consolidate these functions into a single system.”
To deal with all the information streaming from automotive sensors, the chip features multi-compute domain isolation, which Nvidia says allows the chip to run critical processes simultaneously without interruption.
The technology also allows the chip to run multiple operating systems at the same time to cater to different vehicle applications. For example, the car’s core operating system could run on Linux, while the infotainment system could run on QNX or Android.
It’s not clear when we’ll see the technology in action, however: all three of Nvidia’s launch partners – Zeekr, Xpeng and QCraft – are currently based in China. ®
https://www.theregister.com/2022/09/20/nvidia_robots_auto_omniverse/ Nvidia Announces Robotic, Auto and Omniverse Technologies • The Register