This year’s CES was the year AI took over. From large language model-powered voice assistants in cars to the Rabbit R1, AI was the technology you heard about everywhere. It was all a little too much.
It may be the year of AI at CES, but many of these “AI” features have been around for a while — it’s just that companies are only now embracing the branding of artificial intelligence. AI has entered the public consciousness: it’s cool and hip to place it front and center in a product, a sign that a company is ambitious and forward-thinking. That’s led the term to be adopted wherever possible, even when it’s not strictly the AI most people know.
But as more companies rebrand anything involving algorithms as AI, how are we meant to separate the wheat from the chaff? And more importantly, doesn’t this risk overpromising what AI can do?
Whether or not new products use generative AI, slapping the AI label onto something gives the impression that a feature is new and exciting. Generative AI is also still in the throw-it-at-everything phase of growth. People want to figure out how far they can take the technology and want to believe it will be the big differentiator. This is why we’re seeing everything from Walmart using AI models to restock your pantry to car companies cramming ChatGPT into their dashboards to give drivers something to talk to.
Arun Chandrasekaran, an analyst at Gartner, said this is normal for many companies, but it runs the risk of overpromising: consumers may be let down when they find out something marketed as AI isn’t actually like ChatGPT.
“There is a conflation now of generative AI and other AI that could muddle the field a little bit,” Chandrasekaran said. “Marketers might be shooting themselves in the foot when they advertise something that ends up not being what people expected.”
For better or worse, most people now treat AI as synonymous with generative AI — more specifically, ChatGPT. That sets up an expectation: if a product is branded as AI, consumers assume it will behave like a chatbot that “thinks” the way a human does.
This is a disservice to products that use other, equally impressive forms of AI. Many of the robots roaming around CES, like Samsung’s Ballie or LG’s AI agent robot thing (it’s not strictly an AI agent; AI agents refer to AI software, usually a chatbot of sorts, that can do tasks such as book a flight or find a table at a restaurant), are cute feats of engineering. But their existence has more to do with advancements in robotics and computer vision than with the rise of LLMs. (At least, we don’t know whether Samsung used LLMs to help train Ballie.)
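To make that term of art concrete, here’s a minimal sketch of what an AI agent looks like in software: something takes a request, decides which tool can handle it, and runs that tool. In a real agent, an LLM makes that decision; the keyword matching and the pretend tool functions below are hypothetical stand-ins, not any vendor’s actual code.

```python
# A toy "AI agent" loop: route a request to a tool and execute it.
# In a real agent, an LLM would handle the routing step; the keyword
# matching and pretend tools here are illustrative stand-ins only.
from typing import Callable, Optional

def book_flight(request: str) -> str:
    return f"(pretend we booked a flight for: {request!r})"

def find_table(request: str) -> str:
    return f"(pretend we reserved a table for: {request!r})"

TOOLS: dict[str, Callable[[str], str]] = {
    "flight": book_flight,
    "table": find_table,
}

def pick_tool(request: str) -> Optional[Callable[[str], str]]:
    # The "intelligence" would live here: deciding which tool fits the goal.
    for keyword, tool in TOOLS.items():
        if keyword in request.lower():
            return tool
    return None

request = "Find me a table for two on Friday"
tool = pick_tool(request)
print(tool(request) if tool else "No tool fits; a plain chatbot would just chat.")
```

The point is the shape of the thing: an agent acts on your behalf, which is a software distinction, not a robotics one.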
And then there’s machine learning. AI experts will argue that generative AI and the foundation models that power many versions of it are merely the next stage of development for machine learning. But no one wants to talk about machine learning anymore. It’s considered old and “traditional,” and yet I’m sure it’s what’s powering so many of the pattern recognition features at CES.
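For contrast, here’s what that unglamorous machine learning looks like in practice, assuming a toy dataset: plain pattern recognition. You fit a model to labeled examples and it classifies new ones; no chatbot, no foundation model anywhere.

```python
# "Traditional" machine learning as pattern recognition: fit a classifier
# to labeled examples, then predict labels for unseen data. The dataset
# and model are illustrative choices, not code from any CES product.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 8x8 grayscale images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000)  # a decidedly pre-ChatGPT model
clf.fit(X_train, y_train)                # learn patterns from the examples
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")  # typically ~0.96
```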
“Technology passes through lifecycles, and yes, we may get to the point in AI that people are disillusioned with its promise after not seeing it solve many of the problems people think it will solve. But that’s when you see many good innovations and better-fitting use cases come out,” said Chandrasekaran.
In the next few years, we’re going to see features and products that don’t need a chatbot or a powerful large language model. It’s just not at this CES. Not yet.