
Next-Generation GH200 Grace Hopper Superchip Platform


Built around the world’s first HBM3e processor, the new platform delivers greater memory capacity and bandwidth, lets multiple GPUs be connected to aggregate performance, and uses a server design that can scale easily across the data centre.


NVIDIA has announced the next-generation NVIDIA GH200 Grace Hopper platform, powered by the new Grace Hopper Superchip with the world’s first HBM3e processor. The company says the platform is built for the emerging demands of accelerated computing and generative Artificial Intelligence (AI), and it is offered in a range of configurations aimed at the most complex generative AI workloads, from large language models to recommender systems and vector databases. The dual configuration delivers up to 3.5x more memory capacity and 3x more bandwidth than the current-generation offering, comprising a single server with 144 Arm Neoverse cores, eight petaflops of AI performance and 282GB of the latest HBM3e memory.
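As a quick consistency check, those dual-configuration totals line up with two superchips working side by side. The short CUDA host-code sketch below works that arithmetic out; the per-superchip figures in it are assumptions used for illustration, since the announcement quotes only the combined numbers.

// back_of_envelope.cu -- hypothetical arithmetic sketch. Only the dual-configuration
// totals (144 cores, 8 petaflops, 282GB HBM3e, 10TB/s) appear in the announcement;
// the per-superchip values below are assumptions.
#include <cstdio>

int main() {
    const int   superchips      = 2;      // dual configuration
    const int   arm_cores_each  = 72;     // assumed Grace core count per superchip
    const float pflops_each     = 4.0f;   // assumed AI petaflops per superchip
    const float hbm3e_gb_each   = 141.0f; // assumed HBM3e capacity per superchip
    const float hbm3e_tbps_each = 5.0f;   // assumed HBM3e bandwidth per superchip

    printf("Arm Neoverse cores : %d\n",     superchips * arm_cores_each);    // 144
    printf("AI petaflops       : %.0f\n",   superchips * pflops_each);       // 8
    printf("HBM3e capacity     : %.0f GB\n", superchips * hbm3e_gb_each);    // 282
    printf("Combined bandwidth : %.0f TB/s\n", superchips * hbm3e_tbps_each);// 10
    return 0;
}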

The latest platform is built around the Grace Hopper Superchip, and multiple Superchips can be connected with NVIDIA NVLink so that they work together to deploy the giant models used for generative AI. This high-speed, coherent link gives the Graphics Processing Unit (GPU) full access to the Central Processing Unit (CPU) memory, providing 1.2TB of fast memory in the dual configuration. The HBM3e memory is 50% faster than current HBM3 and delivers a total of 10TB/sec of combined bandwidth, allowing the platform to run models 3.5 times larger than the previous version while improving performance with memory bandwidth that is three times faster.
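To give a feel for the programming model such CPU-GPU memory coherence implies, here is a minimal, hedged CUDA sketch using standard managed memory (cudaMallocManaged), in which a kernel operates directly on an allocation that the CPU also reads and writes. It is a generic CUDA illustration rather than GH200-specific code; on Grace Hopper, the NVLink chip-to-chip link is what makes such CPU-resident data fast for the GPU to reach.

// unified_memory_sketch.cu -- minimal illustration of a GPU kernel touching
// memory that the CPU also initializes and reads back, with no explicit copies.
// Plain CUDA: runs on any recent NVIDIA GPU, not only GH200.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;                 // GPU writes managed memory directly
}

int main() {
    const int n = 1 << 20;
    float *data = nullptr;

    cudaMallocManaged(&data, n * sizeof(float));  // visible to both CPU and GPU
    for (int i = 0; i < n; ++i) data[i] = 1.0f;   // CPU initializes

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);
    cudaDeviceSynchronize();                      // wait before the CPU reads back

    printf("data[0] = %.1f\n", data[0]);          // expect 2.0
    cudaFree(data);
    return 0;
}

Compiled with nvcc, the sketch needs no cudaMemcpy calls because the runtime migrates managed pages between CPU and GPU on demand.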


“To meet surging demand for generative AI, data centres require accelerated computing platforms with specialised needs,” said Jensen Huang, founder and CEO of NVIDIA. “The new GH200 Grace Hopper Superchip platform delivers this with exceptional memory technology and bandwidth to improve throughput, the ability to connect GPUs to aggregate performance without compromise, and a server design that can be easily deployed across the entire data centre.”


Nidhi Agarwal
Nidhi Agarwal is a journalist at EFY. She is an Electronics and Communication Engineer with over five years of academic experience. Her expertise lies in working with development boards and the IoT cloud. She enjoys writing because it lets her share her knowledge and insights about electronics with like-minded techies.
