Nvidia has made a number of new announcements at Computex 2025, which is taking place in Taipei, Taiwan, from May 20 to May 23, 2025. CEO Jensen Huang took the stage to unveil what the company has been doing behind the scenes in computer hardware and AI. What stood out, apart from the Blackwell Ultra of course, was the new NVLink Fusion, a custom silicon technology built on fifth-generation NVLink that speeds up communication between the different chips that work together for AI processing.

NVLink Fusion silicon
(Credits: Nvidia)
Jensen Huang, founder and CEO of Nvidia, said:
“A tectonic shift is underway: for the first time in decades, data centers must be fundamentally rearchitected — AI is being fused into every computing platform. NVLink Fusion opens NVIDIA’s AI platform and rich ecosystem for partners to build specialized AI infrastructures.”
The NVLink technology isn't entirely new. We've seen it used in Nvidia's GB200 system, which combines two Blackwell GPUs with a Grace CPU. With the new silicon, end-to-end bandwidth between the chips gets faster.
The major bottlenecks for AI model inference are compute, memory, and bandwidth. The new NVLink Fusion tackles the bandwidth bottleneck, enabling the ultra-efficient, highly scaled systems that hyperscalers need for AI processing.
Nvidia says that its new NVLink Fusion silicon is capable of delivering 1.8 TB/s of bidirectional (total upstream and downstream) bandwidth between systems.
NVLink delivers 1.8 TB/s of bidirectional bandwidth per GPU, 14x the bandwidth of PCIe Gen5, for seamless high-speed communication in the most complex large models. It improves throughput and reduces latency by performing in-network compute for collective operations. Every 2x scale-up in NVLink bandwidth can lead to a 1.3-1.4x rack-level AI performance improvement.
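The quoted figures lend themselves to a quick back-of-the-envelope check. The sketch below assumes a PCIe Gen5 x16 baseline of roughly 128 GB/s bidirectional (a commonly cited figure, not stated in the announcement) and treats the 1.3-1.4x-per-doubling claim as a simple geometric scaling model; both are illustrative assumptions, not Nvidia's methodology.

```python
import math

# Assumed baseline: PCIe Gen5 moves ~4 GB/s per lane per direction,
# so an x16 link is ~64 GB/s each way, ~128 GB/s bidirectional.
PCIE_GEN5_X16_BIDIRECTIONAL_GBS = 2 * 16 * 4   # ~128 GB/s (assumption)
NVLINK_BIDIRECTIONAL_GBS = 1800                # 1.8 TB/s per GPU (quoted)

speedup = NVLINK_BIDIRECTIONAL_GBS / PCIE_GEN5_X16_BIDIRECTIONAL_GBS
print(f"NVLink vs PCIe Gen5 x16: ~{speedup:.0f}x")  # ~14x, matching the claim

def projected_perf_gain(bandwidth_multiple, per_doubling=1.35):
    """Rough rack-level speedup if each 2x of NVLink bandwidth
    yields ~1.35x performance (midpoint of the quoted 1.3-1.4x)."""
    doublings = math.log2(bandwidth_multiple)
    return per_doubling ** doublings

print(f"4x bandwidth -> ~{projected_perf_gain(4):.2f}x performance")  # ~1.82x
```

Under these assumptions, the 1.8 TB/s figure works out to about 14x a single PCIe Gen5 x16 link, consistent with Nvidia's comparison.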
NVLink Fusion isn't just silicon; it's part of a scalable architecture. Hyperscale data centers can integrate their semi-custom ASICs with NVLink Fusion and pair them with Nvidia's own CPUs, NVLink Switches, ConnectX Ethernet adapters, BlueField data processing units (DPUs), and Quantum and Spectrum-X switches. The company says that cloud providers can deliver throughput of up to 800 Gb/s per GPU using Nvidia's networking platform.

NVLink Fusion rack scale deployment examples
(Credits: Nvidia)
The NVLink Fusion partner ecosystem includes custom silicon designers and CPU, IP, and OEM/ODM partners, providing a full solution for deploying custom silicon with Nvidia at scale and creating what the company calls "AI factories". Once set up, all of the components of these AI factories can be powered by Mission Control, unified operations and orchestration software that automates much of the management needed to run such complex systems.
The NVLink Fusion silicon and related services are available now from Nvidia and partner companies including MediaTek, Marvell, Alchip, Astera Labs, Synopsys, and Cadence.