Most AI researchers think current AI strategies are "very unlikely" to succeed, and that top AI labs are pouring billions of dollars into a scaling dead end.
As generative AI becomes more advanced and sophisticated, it's becoming more apparent that the technology requires large investments to keep progressing rapidly. For context, OpenAI CEO Sam Altman indicated that it'd "take $7 trillion and many years to build 36 semiconductor plants and additional data centers" to fulfill his ambitious AI vision.
However, it now seems that even AI researchers have little faith in current AI strategies and have expressed doubts about their ability to achieve significant milestones such as artificial general intelligence (AGI) or superintelligence.
According to a study spotted by Futurism, 76% of the AI researchers who participated indicated that current AI strategies are "unlikely" or "very unlikely" to succeed.
For context, the study surveyed 475 AI researchers and was conducted by scientists at the Association for the Advancement of Artificial Intelligence (AAAI). It sought to gauge whether current AI approaches can deliver on the field's stated milestones.
As you may know, a lot of effort goes into scaling AI advances, from data centers to hardware to the resources required to train and run AI models. Top AI labs treat AGI, an AI system that surpasses human cognitive capabilities, as the ultimate prize of these efforts.
To that end, top AI labs dig deep into their pockets to edge closer to this feat. However, the spending leaves the companies between a rock and a hard place, with some reportedly on the verge of bankruptcy.
DeepSeek's ultra-cheap model makes it harder to state the case for AI scaling
DeepSeek ruffled some feathers with its more cost-effective AI model.
(Image credit: Getty Images | Bloomberg)
Speaking to New Scientist, Stuart Russell, a computer scientist at UC Berkeley, said:
"The vast investments in scaling, unaccompanied by any comparable efforts to understand what was going on, always seemed to me to be misplaced. I think that, about a year ago, it started to become obvious to everyone that the benefits of scaling in the conventional sense had plateaued."
Elsewhere, the emergence of Chinese startup DeepSeek with its cost-friendly AI model has raised concern among investors and stakeholders. Its model surpasses OpenAI's proprietary model at a fraction of the cost, potentially suggesting that money isn't necessarily the key to scaling AI.
OpenAI is also exploring a similar approach with its latest models using a technique known as test-time compute. The approach allows a model to spend more time reasoning before it responds, instead of blurting out an answer almost instantly like before.
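The core idea behind test-time compute can be illustrated with a minimal sketch: instead of taking a single answer from a model, spend extra inference-time compute by sampling several independent reasoning passes and keeping the majority answer (a self-consistency vote). The `sample_answer` stub below is hypothetical; a real system would run a full language-model reasoning chain per sample.

```python
import random
from collections import Counter

def sample_answer(question, seed):
    """Stand-in for one stochastic reasoning pass by a model.
    Hypothetical stub: simulates a model that gets the answer
    right ~80% of the time on any given pass."""
    rng = random.Random(seed)
    return "4" if rng.random() < 0.8 else "5"

def answer_with_test_time_compute(question, n_samples=15):
    """Spend more compute at inference: run several independent
    passes and return the most common answer (majority vote)."""
    votes = Counter(sample_answer(question, seed) for seed in range(n_samples))
    return votes.most_common(1)[0][0]

answer = answer_with_test_time_compute("What is 2 + 2?")
```

A single pass is wrong about one time in five, but the 15-sample majority vote is far more reliable; the trade-off is 15x the inference cost, which is exactly the "more thinking time" bargain the article describes.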
However, Arvind Narayanan, a computer scientist at Princeton University, doesn't think this approach will salvage the situation. "But this approach is unlikely to be a silver bullet," Narayanan said.