Google wants to prove it can build a supercomputer capable of handling the growing number of generative AI applications. Today, Google researchers published a paper (via CNBC) claiming its TPU v4 supercomputer is "1.2x–1.7x faster and uses 1.3x–1.9x less power" than comparable systems built on NVIDIA's A100 chip.
Google is playing catch-up with NVIDIA in this market. NVIDIA currently dominates the sector, with over 90 percent of AI development running on its chips. Google wants companies to use supercomputers built on its Tensor Processing Units (TPUs) instead, and it noted that Midjourney, the popular AI image generator, already uses TPU-based systems.
Notably, the paper compares the TPU v4 only to the A100; Google said it did not benchmark the supercomputer against systems using NVIDIA's newer H100 chips. Those are the same processors Microsoft is using for its own AI applications, including Bing Chat and Microsoft 365 Copilot.
It's clear Google believes AI, and the supercomputers that power it, are the future of computing. Even with the new TPU v4 setup, however, the company will likely struggle to overcome NVIDIA's lead in this market.