Nvidia has unveiled its latest innovation, the GH200 Grace Hopper Superchip, a two-chip system that pairs a central processor with a graphics processor. The platform is designed specifically for building AI systems and platforms. The GH200 combines an NVIDIA Grace CPU with a Hopper GPU and features high-performance HBM3e memory with data access speeds of up to 5 TB/s.
According to Nvidia, the new platform significantly outperforms its predecessor. A single server based on the GH200 Grace Hopper platform, equipped with two of the new superchips (and thus two CPU+GPU pairs), offers 144 Arm Neoverse cores working alongside 282 GB of high-performance HBM3e memory. Compared with the previous generation, this configuration provides 3.5 times more memory capacity and 3 times more bandwidth. The overall AI performance of a dual-superchip platform reaches an impressive 8 petaflops.
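As a rough sanity check on how those dual-superchip totals decompose, the sketch below assumes per-superchip figures of 72 Arm Neoverse cores, roughly 141 GB of HBM3e, and about 4 petaflops of AI compute; these per-chip numbers are not stated above and are taken as assumptions inferred from Nvidia's published GH200 specifications.

```cpp
#include <cstdio>

// Back-of-the-envelope decomposition of the dual-GH200 server figures.
// The per-superchip values below are assumptions, not numbers from the article.
int main() {
    const int    cpu_cores_per_superchip = 72;     // Arm Neoverse cores in one Grace CPU (assumed)
    const double hbm3e_gb_per_superchip  = 141.0;  // HBM3e capacity per superchip, GB (assumed)
    const double ai_pflops_per_superchip = 4.0;    // approximate AI petaflops per superchip (assumed)
    const int    superchips              = 2;      // dual configuration described above

    printf("CPU cores:       %d\n",        cpu_cores_per_superchip * superchips);  // 144
    printf("HBM3e capacity:  ~%.0f GB\n",  hbm3e_gb_per_superchip * superchips);   // ~282 GB
    printf("AI performance:  ~%.0f PFLOPS\n", ai_pflops_per_superchip * superchips); // ~8 PFLOPS
    return 0;
}
```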
Nvidia’s CEO, Jensen Huang, emphasizes that the GH200 Grace Hopper Superchip platform is designed specifically to meet the increasing demand for generative artificial intelligence. The platform allows multiple graphics processors to be combined, boosting performance and facilitating the creation of scalable server systems for data centers.
The Grace Hopper superchip, which forms the foundation of the GH200 Grace Hopper Superchip platform, can be connected to other superchips using Nvidia’s NVLink interconnect technology. This enables the graphics processor to access the central processor’s memory, providing up to 1.2 TB of fast memory in a configuration with two superchips.
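A minimal CUDA sketch of the programming model this enables is shown below: with a coherent link between CPU and GPU, a kernel can operate on memory that is also visible in the CPU's address space. The example uses standard cudaMallocManaged unified memory, which runs on any recent CUDA GPU; it is a generic illustration of the shared-memory idea that NVLink-C2C accelerates on Grace Hopper, not a GH200-specific API.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel that scales a vector in place. On Grace Hopper the buffer could be
// resident on the CPU side and still be read and written by the GPU over the
// coherent NVLink-C2C link; here ordinary CUDA managed memory stands in for
// that shared address space.
__global__ void scale(float* data, float factor, size_t n) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const size_t n = 1 << 20;
    float* data = nullptr;

    // One allocation visible to both CPU and GPU.
    cudaMallocManaged(&data, n * sizeof(float));

    for (size_t i = 0; i < n; ++i) data[i] = 1.0f;   // CPU writes the data

    scale<<<(n + 255) / 256, 256>>>(data, 2.0f, n);  // GPU updates the same pointer
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);               // CPU reads the GPU's result: 2.0
    cudaFree(data);
    return 0;
}
```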
The HBM3e memory used in the GH200 Grace Hopper Superchip surpasses HBM3 in performance, offering roughly 50% more bandwidth. In systems combining multiple GH200 superchips, the total memory bandwidth reaches 10 TB/s. This allows larger AI models to be processed and significantly enhances overall performance.
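For developers, the practical effect of more HBM capacity and bandwidth is simply what the CUDA runtime reports per device. The hedged sketch below queries those properties on whatever GPUs are present; the bandwidth figure derived from clock and bus width is only a rough estimate, and on a multi-superchip GH200 system each Hopper GPU would appear as its own device.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Report per-GPU memory capacity and a rough peak-bandwidth estimate,
// using only what the CUDA runtime exposes.
int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        // Rough estimate: double data rate (x2) * memory clock (kHz) * bus width (bytes).
        double gb_per_s = 2.0 * prop.memoryClockRate * (prop.memoryBusWidth / 8.0) / 1.0e6;

        printf("GPU %d: %s, %.0f GB memory, ~%.0f GB/s peak bandwidth\n",
               dev, prop.name, prop.totalGlobalMem / 1.0e9, gb_per_s);
    }
    return 0;
}
```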
Nvidia highlights that leading manufacturers have already started offering systems built on the Grace Hopper platform in response to growing demand. The new generation of the Grace Hopper Superchip platform with HBM3e memory is fully compatible with the Nvidia MGX server architecture introduced at Computex 2023, which allows Grace Hopper to be integrated seamlessly into more than 100 server variants on the market.
Systems based on the new Nvidia platform are expected to become available in the second quarter of 2024.