  • Is AI a fad? 76% of researchers say scaling has "plateaued" — but firms like OpenAI continue splurging billions into a dead end


    Karlston

    • 122 views
    • 4 minutes

Most AI researchers think current AI strategies are "very unlikely" to succeed, and that top AI labs are plunging billions into a scaling dead end.

As generative AI becomes more advanced and sophisticated, it's becoming more apparent that the technology requires large investments to progress rapidly. For context, OpenAI CEO Sam Altman indicated that it'd "take $7 trillion and many years to build 36 semiconductor plants and additional data centers" to fulfill his ambitious AI vision.

Interestingly, several reports have emerged countering the narrative of continual breakthroughs, suggesting AI is a fad. One report predicted that 30% of generative AI projects will be abandoned after proof of concept by the end of 2025.

Despite claims that top AI labs like Anthropic, Google, and OpenAI have hit a scaling wall and are struggling to develop more advanced AI models, they continue to invest wads of cash in their AI efforts. For instance, OpenAI's $500 billion Stargate project is designed to facilitate the construction of data centers across the United States to foster its AI advances.

However, it now seems even AI researchers have little faith in the current AI strategies and have expressed doubts about their capability to achieve significant milestones such as artificial general intelligence (AGI) or superintelligence.

According to a study spotted by Futurism, 76% of the AI researchers who participated indicated that current AI strategies are "unlikely" or "very unlikely" to succeed.

For context, the study was conducted by the Association for the Advancement of Artificial Intelligence (AAAI) and featured 475 AI researchers. It sought to gauge how likely current AI approaches are to achieve their stated goals.

As you may know, a lot of effort goes into scaling AI, from data centers to hardware to the resources required to train and run AI models. Top AI labs treat AGI as the ultimate milestone in this regard, signifying an AI system that surpasses human cognitive capabilities.

In that regard, top AI labs dig deep into their pockets to edge closer to this feat. However, the spending leaves the companies between a rock and a hard place, with some teetering on the edge of financial trouble.

Last year, OpenAI was reportedly on the verge of bankruptcy, with projections of a $5 billion loss within a year because of its heavy investment in AI scaling. However, top investors, including Microsoft, NVIDIA, SoftBank, and Thrive Capital, poured $6.6 billion into the company through a funding round to keep the business afloat, pushing its market valuation to $157 billion.

DeepSeek's ultra-cheap model makes it harder to state the case for massive AI spending

DeepSeek ruffled some feathers with its more cost-effective AI model. (Image credit: Getty Images | Bloomberg)

Speaking to New Scientist, Stuart Russell, a computer scientist at UC Berkeley, said:

    "The vast investments in scaling, unaccompanied by any comparable efforts to understand what was going on, always seemed to me to be misplaced. I think that, about a year ago, it started to become obvious to everyone that the benefits of scaling in the conventional sense had plateaued."

     

Money isn't the only issue riddling AI's progression; energy is a major factor too. Did you know that Google and Microsoft each reportedly consume more energy than over 100 individual countries do?

Elsewhere, the emergence of Chinese startup DeepSeek with its cost-friendly AI model has raised concern among investors and stakeholders. Its model reportedly surpasses OpenAI's proprietary model at a fraction of the cost, potentially suggesting that money isn't necessarily the key to AI scaling.

OpenAI is also exploring a similar approach with its latest models via a technique known as test-time compute. The approach allows a model to spend more time "thinking" through a problem instead of blurting out a response almost instantly, as before.
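For illustration only, here is a minimal sketch of one simple flavor of test-time compute, majority voting over repeated samples (often called self-consistency): instead of returning the first answer, the system samples many candidate answers and returns the most common one. Everything here is a hypothetical stand-in (the toy `sample_answer` solver, its 70% accuracy, the answer 42), not OpenAI's actual method.

```python
import random
from collections import Counter

def sample_answer(question: str, rng: random.Random) -> int:
    """Hypothetical stand-in for one model call: a noisy solver
    that returns the right answer about 70% of the time."""
    correct = 42  # toy ground truth for this illustration
    return correct if rng.random() < 0.7 else rng.randint(0, 100)

def answer_fast(question: str, seed: int = 0) -> int:
    """The old behavior: a single pass, responding almost instantly."""
    return sample_answer(question, random.Random(seed))

def answer_with_test_time_compute(question: str, n_samples: int = 25, seed: int = 0) -> int:
    """Spend extra compute at inference time: draw many candidate
    answers and return the majority vote."""
    rng = random.Random(seed)
    votes = Counter(sample_answer(question, rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]
```

In this toy setup a single fast pass is wrong roughly 30% of the time, while the majority vote over 25 samples almost always lands on the correct answer; the price is extra compute and latency per query, which is exactly the trade-off the article describes.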

     

However, Arvind Narayanan, a computer scientist at Princeton University, doesn't think this technique will salvage the situation either. "But this approach is unlikely to be a silver bullet," Narayanan said.

    Source


    Hope you enjoyed this news post.

    Thank you for appreciating my time and effort posting news every day for many years.

    News posts... 2023: 5,800+ | 2024: 5,700+ | 2025 (till end of February): 874

    RIP Matrix | Farewell my friend  :sadbye:

