OpenAI could be in a position to loss-lead until its competitors wither away.
OpenAI just announced pricing for businesses seeking to integrate its ChatGPT service into their own products, and it looks an awful lot like a 90 percent off sale.
It all starts with OpenAI, a former nonprofit that’s now gunning for riches as lustily as any Silicon Valley unicorn. The company has built a dazzling array of products, including the DALL-E image generator and the renowned ChatGPT service.
ChatGPT is powered by a system known as a large language model (or LLM), and it’s one of several LLM lines that OpenAI sells commercially. Buyers of LLM output are mostly companies that integrate language-related services like chat, composition, summarization, software generation, online search, sentiment analysis, and much more into their websites, services, and products.
For instance, dozens of startups use LLMs to help their own customers churn out corporate blogs, marketing emails, and click-seeking SEO articles with radically less effort than before. This may add little to our cultural heritage but quite a bit to the bottom lines of these companies’ cannier users. As I write this, at least 65 different startups are peddling these sorts of copywriting services, with new ones launching weekly.
Most are basically thin wrappers seeking to arbitrage LLM pricing, with virtually no differentiation or competitive moat. Some of them—most notably Jasper, which recently raised money at a $1.5 billion valuation—are building value-added services around the LLM output they retrieve and reformat, which may let them differentiate and prosper over time.
OpenAI is currently the biggest LLM provider, though there is growing competition from Cohere, AI21, Anthropic, Hugging Face, and others. These companies generally sell their output on a “per-token” basis, with a token representing approximately three-quarters of a word.
To give you an example, the baseline price of a very powerful OpenAI model called Davinci is 2 cents for a thousand tokens—which is to say about 750 words. This means it would cost Jasper about 8 cents to get Davinci to write six 500-word blog posts for you.
Prosperity for Jasper lies in pricing plans and subscription models that let the company charge you more than 8 cents for those posts. Prosperity for you lies in getting more value from those posts in the form of clicks, customer leads, sales, and donations than you paid Jasper to pay Davinci to write them. For good or ill, this dynamic will plainly lead to a tsunami of new “content” flooding the Internet in very short order.
Last week’s pricing surprise was that OpenAI is selling ChatGPT at one-tenth the price of Davinci. In other words, if you’re skilled enough to prompt ChatGPT into creating a 7,500-word essay for you across several rounds of “conversation,” that 30-page treatise will be yours for a mere 2 cents.
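The arithmetic behind those figures can be sketched in a few lines. The rates and the 750-words-per-thousand-tokens rule of thumb come from the discussion above; the helper function itself is just an illustration, not anyone's actual billing code:

```python
# Back-of-the-envelope LLM pricing, using the article's assumed rates:
# Davinci at $0.02 per 1,000 tokens, ChatGPT at $0.002 per 1,000 tokens,
# and roughly 0.75 words per token.

WORDS_PER_TOKEN = 0.75

def cost_for_words(words: int, price_per_1k_tokens: float) -> float:
    """Approximate cost in dollars to generate `words` words of output."""
    tokens = words / WORDS_PER_TOKEN
    return tokens / 1000 * price_per_1k_tokens

# Six 500-word blog posts from Davinci: 3,000 words -> 4,000 tokens.
print(round(cost_for_words(6 * 500, 0.02), 4))   # 0.08

# A 7,500-word essay from ChatGPT: 10,000 tokens.
print(round(cost_for_words(7500, 0.002), 4))     # 0.02
```

The same function makes the tenfold gap obvious: identical word counts, one decimal place of difference in the per-token rate.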
This is astonishing because in many use cases, ChatGPT is likely to generate output equal to, or possibly far better than, Davinci's. Even in tech, it's not every day that a company releases a beefy competitor to its own cash cow at 10 percent of the price, particularly when it hasn't been goaded into dropping prices by competitive pressure.
But the reality is a bit more complex. For one thing, Davinci is one of four LLMs that OpenAI has been selling for a while based on its underlying GPT-3 model. Davinci's much faster and cheaper (but far less powerful) sibling Ada sells for one-fiftieth the price of Davinci—which is to say one-fifth the price of ChatGPT. As I write this, pricing for Davinci, Ada, and the intermediate Babbage and Curie models hasn’t changed since ChatGPT debuted at its bargain-basement rate.
Also, some developers I’ve spoken with think that Davinci will remain a better choice for their products. For instance, its output tends to be more concise and less meandering than ChatGPT’s. Davinci also performs better with certain categories of prompt.
But ChatGPT outperforms Davinci in key areas, including math, sentiment analysis, and a very common category of prompt known as “zero-shot.” Combine this with a radical cost advantage and most Davinci users are likely to stampede to ChatGPT in the coming weeks.
But while the new pricing structure is great news for Davinci customers, it’s potentially devastating for OpenAI’s competitors, who had been pricing their own top-tier models against something that’s now ten times more expensive than the price leader.
An interesting question to ponder in all this is whether OpenAI will make or lose money from ChatGPT—or from any of its models, as their prices inevitably follow ChatGPT off the cliff. These are enormous models, after all, which makes them very expensive to query.
But there are also many ways to optimize these models that the industry as a whole is just starting to crack. For instance, one researcher I spoke to thinks OpenAI may have significantly pruned the model underpinning ChatGPT by carving out parameters that were rarely activated in the countless ChatGPT queries that the world has placed since the service debuted in November. You could then retrain the lighter model to predict how its chunkier predecessor would respond to sample queries, resulting in a much smaller and cheaper model yielding roughly the same output.
So did OpenAI slash prices because it came up with a way to cleverly cut costs? Or is it slashing prices because it can? None of its competitors are starved for cash. For instance, Anthropic just bagged $300 million in funding from Google, and Cohere raised $170 million to fuel its LLM ambitions. But these are pittances compared to the $10 billion that Microsoft has committed to OpenAI. A lot of that value will come in the form of compute power in Microsoft's Azure cloud service. Since an LLM's variable costs lie entirely in compute cycles, OpenAI could be in a position to loss-lead teraword after teraword until all of its less-gilded competitors wither away in the price collapse.
I have no idea if this is OpenAI’s playbook. But this specific form of hardball does have a long history at its partner Microsoft.
Rob Reid is a venture capitalist, New York Times-bestselling science fiction author, deep-science podcaster, and essayist. His areas of focus are pandemic resilience, climate change, energy security, food security, and generative AI. He can be found at his new Substack, robreid.substack.com.