- Nvidia announced that H100 graphics processing units will be on the market next month.
- H100 delivers the same AI performance as the previous generation with 3.5x more energy efficiency and 3x lower total cost of ownership.
- Partners building systems with H100 GPUs include Atos, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo, and Supermicro.
Nvidia CEO Jensen Huang announced at the GTC conference that the Hopper graphics processing unit is now in volume production. The Hopper-based H100 GPUs will ship next month in systems from Dell, Hewlett Packard Enterprise, and Cisco Systems. Nvidia's own systems with the Hopper GPU will also be available in the first quarter of 2023.
80 billion transistors
H100, initially announced earlier this year, is designed for data-center workloads and aims to reduce the cost of deploying AI programs. Built on the Hopper architecture with 80 billion transistors, the new GPUs also benefit from a new Transformer Engine and the Nvidia NVLink interconnect to deliver better performance on large AI models. Other key innovations powering the new H100 GPUs include second-generation Multi-Instance GPU, confidential computing, fourth-generation Nvidia NVLink, and DPX instructions.
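To give a sense of how the Transformer Engine is exposed to developers, the following is a minimal sketch using NVIDIA's open-source Transformer Engine Python package with PyTorch to run a linear layer under FP8 autocasting on an H100-class GPU. The module, class, and argument names (transformer_engine.pytorch, te.Linear, te.fp8_autocast, DelayedScaling) follow the library's public documentation but may differ across versions, and the layer sizes and input shapes are arbitrary placeholders rather than values from the announcement.

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Hopper-class GPUs such as H100 report CUDA compute capability 9.0 (sm_90).
major, minor = torch.cuda.get_device_capability()
print(f"Running on compute capability {major}.{minor}")

# FP8 recipe with delayed scaling; the HYBRID format uses E4M3 for
# forward-pass tensors and E5M2 for gradients.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

# te.Linear is documented as a drop-in replacement for torch.nn.Linear.
layer = te.Linear(768, 3072, bias=True).cuda()
inp = torch.randn(32, 768, device="cuda")

# Inside fp8_autocast, supported layers execute their matrix multiplies
# in FP8 on the GPU's Tensor Cores.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = layer(inp)

out.sum().backward()
```

In the library's documented usage, the FP8 layers act as drop-in replacements for their PyTorch counterparts, so existing training code can adopt the Hopper FP8 path incrementally rather than being rewritten around a new API.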
H100 allows organizations to reduce the cost of deploying AI: it delivers the same AI performance as the previous generation with 3.5x more energy efficiency and 3x lower total cost of ownership, while using 5x fewer server nodes. H100-powered systems will start shipping in the coming weeks, with more than 50 server models available by the end of 2022 and more in the first half of 2023. Partners building systems include Atos, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo, and Supermicro.

H100 is also coming to the cloud. Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure will be among the first cloud providers to deploy H100-based instances, starting in 2023. Jensen Huang, founder and CEO of Nvidia, said:
"Hopper is the new engine of AI factories, processing and refining mountains of data to train models with trillions of parameters that are used to drive advances in language-based AI, robotics, healthcare and life sciences. Hopper's Transformer Engine boosts performance up to an order of magnitude, putting large-scale AI and HPC within reach of companies and researchers."