  • Nvidia will now make new AI chips every year


    Karlston

    • 490 views
    • 3 minutes

    ‘We’re on a one-year rhythm.’

    Nvidia just made $14 billion in profit in a single quarter thanks to AI chips, and it’s hitting the gas from here on out: the company will now design new chips every year instead of once every two years, according to CEO Jensen Huang.

     

    “I can announce that after Blackwell, there’s another chip. We’re on a one-year rhythm,” Huang just said on the company’s Q1 2025 earnings call.

     

    Until now, Nvidia’s produced a new architecture roughly once every two years — revealing Ampere in 2020, Hopper in 2022, and Blackwell in 2024, for example.

     

    (The industry-darling H100 AI chip was built on Hopper, and the B200 on Blackwell, though those same architectures are used in gaming and creator GPUs as well.)

     

    But analyst Ming-Chi Kuo reported earlier this month that the next architecture “Rubin” is coming in 2025, giving us an R100 AI GPU as soon as next year, and Huang’s comments suggest that report might be on the money.

     

    Huang says Nvidia will accelerate every other kind of chip it makes to match that cadence, too. “We’re going to take them all forward at a very fast clip.”

     

    “New CPUs, new GPUs, new networking NICs, new switches... a mountain of chips are coming,” he says.

     

    Earlier on the call, when an analyst asked him how the new Blackwell GPUs would ramp while Hopper GPUs are still selling well, Huang explained that Nvidia’s new generations of AI GPUs are electrically and mechanically backward-compatible and run the same software. Customers will “easily transition from H100 to H200 to B100” in their existing data centers, he said.

     

    Huang also shared a couple of his sales pitches on the call by way of explaining the incredible demand for Nvidia’s AI GPUs:

     

    We expect demand to outstrip supply for some time, as we transition to H200, as we transition to Blackwell. Everyone’s anxious to get their infrastructure online. And the reason for that is because they’re saving money and making money and they would like to do that as soon as possible.

    He also has a FOMO argument that made me smile:

     

    The next company who reaches the next major plateau gets to announce a groundbreaking AI, and the second one after that gets to announce something that’s 0.3 percent better. Do you want to be the company delivering groundbreaking AI, or the company, you know, delivering 0.3 percent better?

    Notably, Nvidia’s CFO says automotive will be its “largest enterprise vertical within data center this year,” pointing to Tesla’s purchase of 35,000 H100 GPUs to train its “Full Self-Driving” system, while “consumer internet companies” like Meta will continue to be a “strong growth vertical,” too.

     

    Some customers have purchased or plan to purchase over 100,000 of Nvidia’s H100 GPUs — Meta plans to have over 350,000 of them in operation by the end of the year.

     

    Source

