The FTC, fresh off announcing a whole new division taking on “snake oil” in tech, has sent another shot across the bows of the over-eager industry with a sassy warning to “keep your AI claims in check.”
I wrote a little while ago (okay, five years) that “AI Powered” is the meaningless tech equivalent of “all natural,” but it has progressed beyond cheeky. It seems like just about every product out there claims to implement AI in some way or another, yet few go into detail — and fewer still can tell you exactly how it works and why.
The FTC doesn’t like it. Whatever someone means when they say “powered by artificial intelligence” or some version thereof, “One thing is for sure: it’s a marketing term,” the agency writes. “And at the FTC, one thing we know about hot marketing terms is that some advertisers won’t be able to stop themselves from overusing and abusing them.”
Everyone is saying AI is reinventing everything, but it’s one thing to do that at a TED talk; it’s quite another to claim it as an official part of your product. And the FTC wants marketers to know that these claims may count as “false or unsubstantiated,” something the agency is very experienced with regulating.
So if your product uses AI or your marketing team claims it does, the FTC asks you to consider:
- Are you exaggerating what your AI product can do? If you’re making science fiction claims that the product can’t back up — like reading emotions, enhancing productivity or predicting behavior — you may want to tone those down.
- Are you promising that your AI product does something better than a non-AI product? Sure, you can make those weird claims like “4 out of 5 dentists prefer” your AI-powered toothbrush, but you’d better have all 4 of them on the record. Claiming superiority because of your AI needs proof, “and if such proof is impossible to get, then don’t make the claim.”
- Are you aware of the risks? “Reasonably foreseeable risks and impact” sounds a bit hazy, but your lawyers can help you understand why you shouldn’t push the envelope here. If your product doesn’t work for certain groups of people because you didn’t even try to account for them, or its results are biased because your dataset was poorly constructed… you’re gonna have a bad time. “And you can’t say you’re not responsible because that technology is a ‘black box’ you can’t understand or didn’t know how to test,” the FTC adds. If you don’t understand it and can’t test it, why are you offering it, let alone advertising it?
- Does the product actually use AI at all? As I pointed out long ago, claiming that something is “AI-powered” because one engineer used an ML-based tool to optimize a curve or something doesn’t mean your product uses AI, yet plenty seem to think that a drop of AI means the whole bucket is full of it. The FTC thinks otherwise.
“You don’t need a machine to predict what the FTC might do when those claims are unsupported,” it concludes, ominously.
The agency already put out some common-sense guidelines for AI claims back in 2021 (there were a lot of “detect and predict COVID” ones then), and it directs questions to that document, which includes citations and precedents.