Intel believes that inference, not learning, is the key to winning the AI war.
Intel CEO Pat Gelsinger criticizes NVIDIA's AI strategy, calling its CUDA software moat "shallow and small."
NVIDIA focuses on training AI models from scratch, while Intel focuses on adapting existing models to new situations.
Gelsinger believes that the inference market is where the game will be at, and that big players like Google and OpenAI are moving in that direction.
We're still recovering from all the cool Meteor Lake laptop announcements made at Intel's "AI Everywhere" event, but Intel's CEO is still fired up. In a recent statement, Intel CEO Pat Gelsinger launched an attack on NVIDIA's AI technology, calling its CUDA software moat "shallow and small."
Intel's big swing at NVIDIA's AI technology
As reported by Tom's Hardware, Pat Gelsinger made a statement at NASDAQ criticizing his technological rival's methodology in the AI market. Intel and NVIDIA have been locked in an AI arms war for a while now, as both try to produce the hardware needed to power a new tech scene driven by artificial intelligence.
Right now, both sides are taking different approaches to AI processing. NVIDIA's CUDA platform focuses more on training artificial intelligence from the ground up, feeding models data and letting them learn from what they're given. It's the AI equivalent of someone going through education to get the degrees they need to do a job right.
Intel, however, focuses more on "inference." This is when an existing AI model adapts and learns from a situation it has never seen before. This is the AI equivalent of taking someone who has the degrees needed to do a job, giving them a relevant task where they have zero experience, and monitoring how they apply their knowledge to the situation.
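The distinction between the two workloads can be sketched in a few lines of plain Python. This is a toy linear model, not anything from Intel's or NVIDIA's actual stacks; it only illustrates why training (repeated forward and backward passes) and inference (a single forward pass with frozen weights) are such different jobs:

```python
# Toy illustration of training vs. inference (hypothetical example,
# unrelated to Intel's or NVIDIA's real software).

def train(data, epochs=200, lr=0.1):
    """Training: repeated forward AND backward passes over labeled data."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x + b      # forward pass
            err = pred - y
            w -= lr * err * x     # backward pass: gradient updates
            b -= lr * err
    return w, b

def infer(w, b, x):
    """Inference: one forward pass with frozen weights -- no gradients,
    and (the article's point) no dependency on the training toolchain."""
    return w * x + b

# Learn y = 2x + 1 from three examples, then apply it to unseen input.
w, b = train([(0, 1), (1, 3), (2, 5)])
print(infer(w, b, 10))
```

Training loops over the data many times and updates the weights; inference is a single cheap evaluation, which is why Gelsinger argues the hardware requirements (and vendor lock-in) differ so much between the two phases.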
As both sides race to become the leader in AI processing, Pat Gelsinger believes that NVIDIA's advantage will be short-lived:
"We think of the CUDA moat as shallow and small. Because the industry is motivated to bring a broader set of technologies for broad training, innovation, data science, et cetera. [...] As inferencing occurs, hey, once you've trained the model... there is no CUDA dependency. It's all about, can you run that model well?"
Pat Gelsinger continued, stating that Intel will still compete on the training level, but believes that "fundamentally, the inference market is where the game will be at." He also claims that "the entire industry is motivated to eliminate the CUDA market," stating that big players in the AI market like Google and OpenAI are moving toward inference.
Regardless of which side will win the AI war, it's going to be a wild few months as these two tech giants clash. We'll have to wait and see which strategy will outlive the other.
- Adenman