OpenAI is exploring premium pricing that treats its AI as a replacement for human labor: a PhD-level assistant that could take on work typically handled by employees such as paralegals, priced at a level comparable to hiring someone.
AI agents could be priced based on labor replacement, outcome-based models, consumption-based models, or traditional SaaS seat subscriptions. Each model has its own benefits and challenges, such as aligning with labor costs, delivering tangible outcomes, or optimizing for usage.
Outcome-based pricing charges customers only when the AI achieves specific, variable outcomes, such as resolved support conversations or upsells. This model aligns costs with tangible business impacts, reducing the risk of paying for unused services.
Companies worry about unpredictable invoices, complex criteria for confirming outcomes, paying for escalations, or being limited to a single pricing model. These concerns highlight the need for transparency and flexibility in pricing structures.
OpenAI needs to triple its revenue to $11.6 billion by the end of next year and reach $100 billion by 2029 to cover escalating training costs. This necessitates exploring premium pricing models and expanding consumer subscriptions to meet financial targets.
Sierra's model represents a shift from traditional SaaS pricing, where customers pay for outcomes rather than usage or seats. This approach aims to provide better value alignment with customers by tying costs to tangible business results.
AI agents could be priced at a discount to reflect their efficiency and scalability, potentially reducing labor costs by 50% or more. However, the exact discount will depend on market competition and the value AI provides relative to human labor.
AI agents could disrupt traditional SaaS models by introducing outcome-based and consumption-based pricing, which align costs more closely with value delivered. This shift could reduce the prevalence of 'shelfware' and create more flexible pricing structures.
Competition among AI agent providers could drive pricing down, especially if companies race to offer the most cost-effective solutions. However, pricing must still reflect the value of labor replacement, balancing affordability with profitability.
Salesforce's AgentForce starts at $2 per conversation, signaling a move toward outcome-based pricing. This model aligns with the growing trend of charging for tangible results rather than fixed SaaS subscriptions.
Hello, friends. Quick note before we dive into today's episode. I am traveling a bit for work today. So today we are just doing a main episode. We will not be doing the headlines. Tomorrow we should be back to normal with our normal types of episodes. This is a really good topic, so I think you're going to enjoy it. Welcome back to the AI Daily Brief. Today we are talking about something really interesting. It's one of the big themes going into 2025 as we think about the business model for AI and what it'll mean for business model disruption in other areas of software.
And the specific genesis of this conversation is a recent interview with OpenAI CFO Sarah Friar. The topic of conversation was how much companies will pay for AI tools. And this gets at a broader conversation that was summed up by Aaron Levie of Box recently, who said, one of the most fun questions in AI right now will be how AI agents will be priced over time. So let's hear what Sarah Friar had to say and then come back and put it in a larger context.
So in this recent interview, Friar was asked about a recent report that OpenAI had considered pricing premium subscriptions to ChatGPT for as much as $2,000 per month. Presumably, this was for a future iteration of the technology, maybe an agentic version, but it still was a big, flashy price tag. And what it said clearly to people was that OpenAI was thinking about this as a replacement for people, not just as an augmentation.
When asked about those reports, Friar said, I want the door open to everything. If it's helping me move around the world with a literal PhD-level assistant for anything that I'm doing, there are certain cases where that would make all the sense in the world. And indeed, the logic here is that you're charging based on the value companies get from the technology, and that the value is the equivalent of actually hiring someone.
$2,000 a month is a lot if you're comparing it to a ChatGPT subscription currently. It's not a lot if you're comparing it to a paralegal that you don't have to hire now. Friar gets explicit about this: How much would you have had to finance that otherwise? Would you have had to go out and hire more people? How do you think about the replacement cost to some degree, and how do we create fair pricing for that?
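To make that comparison concrete, here's a quick back-of-the-envelope sketch. The paralegal salary and overhead figures are assumptions purely for illustration, not numbers Friar or OpenAI have cited.

```python
# Illustrative back-of-the-envelope comparison only. The salary and overhead
# numbers are assumptions for the sake of the example, not figures cited by
# OpenAI or Sarah Friar.

ai_assistant_monthly = 2_000
ai_assistant_annual = ai_assistant_monthly * 12             # $24,000 per year

assumed_paralegal_salary = 60_000                            # assumed base salary
assumed_overhead_multiplier = 1.3                            # benefits, taxes, etc. (assumption)
paralegal_fully_loaded = assumed_paralegal_salary * assumed_overhead_multiplier

print(f"AI assistant: ${ai_assistant_annual:,.0f}/year")
print(f"Paralegal (fully loaded, assumed): ${paralegal_fully_loaded:,.0f}/year")
print(f"Paralegal costs roughly {paralegal_fully_loaded / ai_assistant_annual:.1f}x more")
```

Under those assumed numbers, the premium tier comes in at a fraction of the fully loaded cost of the hire it would offset, which is the whole logic behind value-based pricing here.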
I recently did an episode about how I think agents and job replacement is all going to play out. The TL;DR is that I think it's going to be a lot about how organizations treat the opportunity. Do they see AI just as a cost-cutting technology where they can have the same outputs for lower inputs? Or are they thinking about how they get a competitive edge and go farther than their competitors by producing way more, adding on way better levels of service, etc.?
I'm not going to get as much into that particular conversation today, although it is notable that yet again, we have another example of how the Overton window is shifting on being okay discussing AI agents as actually job replacing. When it comes to OpenAI itself, the company certainly needs to find a way to boost revenue. During their October fundraising round, they projected a tripling in revenue to $11.6 billion by the end of next year and $100 billion in revenue by 2029. Those figures are what's required to keep up with escalating training costs without needing to upsize their already record fundraising efforts.
Presumably, even price hikes and massive growth in consumer subscriptions won't be enough.
We are also starting to get experiments with premium tiers from OpenAI. Announced last week, their $200-per-month ChatGPT Pro offering seems to have been well-received by hardcore enthusiasts and early adopters, but it's not even intended to see wide-scale adoption. The main drawcard, o1 pro mode, is designed as a research-grade chatbot with never-before-seen performance on questions that require PhD-level reasoning. The reality is that there are few consumers who need a chatbot with that much power, at least in the way that people think about use cases now.
I'm hesitant to say that that will be the case forever, because I think the availability of that level of intelligence will create its own demand, but I think that's going to take a lot of time. And of course, before it's really clear that there's value there, getting people to subscribe at that recurring level is going to be difficult.
The release of Sora certainly brought additional value to the Pro tier, although I wouldn't be surprised if we see Sora also become available on its own. There's also the interesting question of what exactly OpenAI is trying to be when it grows up. Professor Ethan Mollick wrote, OpenAI has a lot of pieces on the board right now. Multimodal vision and voice, small, large, and reasoning models, image and video creation, code execution, mobile and desktop apps, web search, semi-agentic stuff. Very curious when it will be glued together into a single thing.
Now, of course, the presumption here is that this is all adding up to a whole greater than the sum of the parts, and I do think that that's the case.
Chris Pedregal, the CEO of Granola, recently made an interesting suggestion in a post on Every, where he wrote that there's a gap at the top of the market just waiting to be captured. He wrote: "As a startup, you can give each of your users a Ferrari-level product experience. Use the most expensive cutting-edge model. Don't worry about optimizing for cost. If doing five additional API calls makes the product experience better, go for it. It may be expensive on a per-user basis, but you probably won't have many users at first. And remember, at best, companies like Google can provide their users with a Honda-level product experience."
And the tension here, of course, is how much OpenAI is going for Honda versus Ferrari. But holding aside the OpenAI specific example, I want to come back to this question of what the future business model for agents is going to be. You might have heard some version of this thesis that Y Combinator has been sharing recently, for example, on why vertical AI agents could be 10 times bigger than SaaS.
The argument effectively comes down to the idea that instead of paying for software, people are paying for labor replacement. Ben Lang did a summary of a recent conversation from YC, writing, AI replaces both software and labor costs. Companies spend way more on employees than they do on software.
Smaller companies will be way more efficient and need way fewer humans. But of course, what follows here is this interesting, murky space where companies spend 10 times the amount on labor that they do on software, but it seems very unlikely to me that there will be a one-to-one replacement of current labor costs with new software-based labor costs. One of the big questions, I think, is what the appropriate cost reduction is.
Are AI agents that can replace human tasks going to be 50% of the cost of the equivalent labor? Or are they going to be 1% of the cost of the equivalent labor? And which market forces are going to dictate that? Is competition between agent companies ultimately going to be a race to the bottom, where the cost reduction is massive? These are really big questions. And we're just starting to see how these experiments play out.
Going back to that post from Aaron Levie from Box: again, he started, one of the most fun questions in AI right now will be how AI agents will be priced over time. One approach is to leverage the very clear relationship between AI agents and traditional work, which leads to a pricing model for AI that has agents being priced like labor, but at a discount. An AI agent performs a certain amount of work, and you pay for the amount of time or units it took to do that work. Given almost any task has some variance, pricing will also vary over time as well. Generally, it's a fair trade for the customer and provider.
As a second approach, there's a very clear benefit of AI agents being priced on a per outcome basis. This model allows for a simple relationship between what the customer needs and what they're paying to get accomplished. It also has the benefit that as underlying AI costs drop over time, service providers can extract more margin for this work. Equally, though, it will mean some customers have varying degrees of profitability. Further, the moment your service offers N types of value props or outcomes, you need N pricing models to go along with it.
A third approach is to price as close to the underlying AI cost as possible, which has the benefit of likely being the lowest cost for a customer. This can be great for technically savvy customers, but has the risk of not being sufficiently abstracted from AI costs to hold value over time. Potentially good for customers, but maybe not for shareholder returns. And finally, there's an approach of maintaining a pure SaaS seat subscription model and offering agents to users that do unlimited work attached to a seat. Depending on the use case and how many seats the customer would need, this model could be quite disruptive.
In areas where there are a lot of seats used by end users, it's possibly very strategic. In areas where there's only a small number of seats, you're likely giving up too much value. In all, lots of different approaches and probably many more than the above. But fairly exciting times to watch new business models and software emerge after a decade plus of limited change.
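To make the four approaches Levie describes a bit more concrete, here is a minimal sketch of how a monthly invoice might be computed under each one. Every function name, rate, and volume below is a made-up illustration, not any vendor's actual pricing.

```python
# Hypothetical sketch of the four pricing approaches described above.
# All rates and volumes are illustrative assumptions.

def labor_discount_price(hours_of_work: float, human_hourly_rate: float, discount: float) -> float:
    """Agent priced like labor, but at a discount to the human rate."""
    return hours_of_work * human_hourly_rate * (1 - discount)

def per_outcome_price(outcomes_achieved: int, price_per_outcome: float) -> float:
    """Charge only for completed outcomes (e.g., resolved tickets)."""
    return outcomes_achieved * price_per_outcome

def cost_plus_price(model_token_cost: float, margin: float) -> float:
    """Price close to the underlying AI cost, plus a thin margin."""
    return model_token_cost * (1 + margin)

def seat_subscription_price(seats: int, price_per_seat: float) -> float:
    """Traditional SaaS: flat per-seat fee, unlimited agent work per seat."""
    return seats * price_per_seat

# One example month, using entirely illustrative numbers:
print(labor_discount_price(hours_of_work=160, human_hourly_rate=30, discount=0.5))  # 2400.0
print(per_outcome_price(outcomes_achieved=1200, price_per_outcome=2.0))             # 2400.0
print(cost_plus_price(model_token_cost=1800, margin=0.2))                           # 2160.0
print(seat_subscription_price(seats=20, price_per_seat=120))                        # 2400.0
```

The point of the sketch is just that the same dollar figure can be arrived at very differently, and each path exposes the vendor and the customer to different kinds of variance.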
So that provides an interesting overview of a bunch of different options on how this could play out.
Vanta automates compliance for ISO 27001, SOC 2, GDPR, and leading AI frameworks like ISO 42001 and NIST AI risk management framework, saving you time and money while helping you build customer trust. Plus, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center all powered by Vanta AI.
Over 8,000 global companies like LangChain, Leela AI, and Factory AI use Vanta to demonstrate AI trust and prove security in real time. Learn more at vanta.com slash nlw. That's vanta.com slash nlw. Today's episode is brought to you by Superintelligent. Every single business workflow and function is being remade and reimagined with artificial intelligence.
There is a huge challenge, however, of going from the potential of AI to actually capturing that value. And that gap is what Superintelligent is dedicated to filling. Superintelligent accelerates AI adoption and engagement to help teams actually use AI to increase productivity and drive business value. An interactive AI use case registry gives your company full visibility into how people are using artificial intelligence right now.
Pair that with capabilities building content in the form of tutorials, learning paths, and a use case library, and Superintelligent helps people inside your company show how they're getting value out of AI while providing resources for people to put that inspiration into action.
The next three teams that sign up with 100 or more seats are going to get free embedded consulting. That's a process by which our Superintelligent team sits with your organization, figures out the specific use cases that matter most to you, and helps actually ensure support for adoption of those use cases to drive real value. Go to besuper.ai to learn more about this AI enablement network. And now back to the show.
However, interestingly, Sierra, the new AI agent startup from Bret Taylor, who is the board chairman of OpenAI and a former leader at Meta, among other companies, yesterday published a blog post called Outcome-Based Pricing for AI Agents. I'm going to read some excerpts, because this is a ground-level view from a company that's actually trying to figure this out and has raised a boatload of money to do so.
Elliot Greenwald, who leads go-to-market at Sierra, writes: In the 80s and 90s, buying software went something like this. You'd go to a store like Fry's Electronics, pick up a shrink-wrapped box filled with floppy disks, or later a CD-ROM, bring it home, and install it. Whether you actually used it or not, you paid for it, and that was that. If you wanted an upgrade, back to the store you went for another box. The internet changed everything, making it possible to sell software differently, as a service. Salesforce pioneered the software-as-a-service, or SaaS, model, and soon companies like Google, Microsoft, and Adobe adopted it as the new industry standard.
SaaS brought numerous benefits. The software was always up to date and you could add or remove seats as needed. However, one pricing challenge remained. Once you bought a seat, you paid for it annually regardless of usage. Unused seats sit idly on your proverbial store shelf, hence the derisive moniker shelfware. A few years later at the infrastructure layer, companies like Amazon with AWS and Snowflake introduced consumption-based pricing where you were charged only for what you used.
Whether paying upfront or as you went, the contract value ultimately depended on actual usage. More compute or bandwidth meant a bigger bill. Today, AI agents executing processes autonomously enable an entirely new pricing model where you pay only when the software achieves specific variable outcomes. In other words, outcome-based pricing.
Like consumption-based pricing, outcome-based pricing varies with usage. However, unlike consumption-based pricing, outcome-based pricing is tied to tangible business impacts, such as a resolved support conversation, a saved cancellation, an upsell, a cross-sell, or any number of variable outcomes. If the conversation is unresolved, in most cases, there's no charge. As companies increasingly rely on AI agents to represent their brands, establishing this presence requires time and intentional effort. During the initial weeks of deploying a Sierra agent, we iterate to drive continuous improvement.
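To show what that billing logic might actually look like, here's a minimal sketch of outcome-based invoicing for a support agent. The outcome types and per-outcome rates are hypothetical illustrations of the general model, not Sierra's actual pricing or implementation.

```python
# Minimal sketch of outcome-based billing for a support agent.
# Rates and outcome categories are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Conversation:
    resolved: bool   # did the agent fully resolve the issue?
    escalated: bool  # was it handed off to a human?
    upsell: bool     # did the conversation produce an upsell?

# Hypothetical price book: each billable outcome type maps to a rate.
OUTCOME_RATES = {"resolution": 2.00, "upsell": 5.00}

def invoice(conversations: list[Conversation]) -> float:
    total = 0.0
    for c in conversations:
        if c.escalated or not c.resolved:
            continue  # unresolved or escalated conversations are not billed
        total += OUTCOME_RATES["resolution"]
        if c.upsell:
            total += OUTCOME_RATES["upsell"]
    return total

print(invoice([
    Conversation(resolved=True, escalated=False, upsell=False),   # billed: $2.00
    Conversation(resolved=False, escalated=True, upsell=False),   # not billed
    Conversation(resolved=True, escalated=False, upsell=True),    # billed: $7.00
]))  # -> 9.0
```

The contrast with consumption-based pricing is visible right in the loop: the bill is driven by what got accomplished, not by how much compute or how many messages it took to get there.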
Elliot continues, while nearly everyone likes the idea of outcome-based pricing in principle, many have understandable concerns about what it means for their business in practice. No one wants to face a massive invoice, navigate an inscrutable set of criteria to confirm an outcome, pay for escalations, or be limited to a single pricing model. And from here, he basically just walks through Sierra's answer to that, which is sort of a "this is the best we can do" type of answer, where they're trying to minimize those types of surprises.
So basically what you're seeing here is the beginning of an argument for why this sort of outcome-based pricing not only makes sense, but is actually better for the customer. And this is a theme that has been picked up by Salesforce as well. Back in September, the company announced their AgentForce platform, declaring it, quote, what AI was meant to be. And perhaps the most interesting part of the announcement, and the thing that people picked up on most, was AgentForce's pricing, which starts at $2 per conversation.
I think, ultimately, when I review all of this, we are in very early days. It is very clear that the SaaS model is undergoing some tension. Agents are providing competition, and it potentially makes sense for them to be priced in a different way. But the general rise of AI, which increases the capability of enterprises and big customers to roll their own solutions, also creates pressure on software companies to be more accommodating of what the buyer is actually looking for.
This is putting downward pressure on SaaS already. And in addition to these totally novel outcome-based pricing models, you're also just seeing more SaaS companies price in a way that charges only for used seats, for example. I think right now the TL;DR for me is that everything is up for negotiation.
Startups are going to be experimenting mightily and aggressively with all sorts of different models. And until new norms are figured out, enterprises are going to have a ton of power to push and try to find something that works. Ultimately, whatever the pricing model for agents that are a blend of augmenting and replacing human labor, it's going to have to meet a lot of different criteria.
It's going to have to be cheaper than the equivalent human labor, but it's also going to have to be expensive enough, which presumably means more expensive than the way that we price SaaS right now, to reflect the value that it's actually creating. It's going to have to, on the one hand, be dynamic and flexible and able to accommodate real-time changes in business situations, while at the same time being predictable enough for big companies to plan around.
It is going to be no mean feat to hit all these different criteria, which is why it's going to be such a fertile time for experimentation. If you are a startup, I think there has never been a better moment to actually think about pricing dynamics as a core competency and try to do something that makes sense while also pushing the model forward. And I think if you're a big company, this is a great time to try to form a thesis for yourselves around how you think software should be priced.
In our experience at Superintelligent, where startups land is that they want to be paid fairly for the value they're actually providing. And on the big company side, they want to pay for value that's actually being provided. They don't want to be locked into gym memberships, basically.
There is actually a lot of common ground in between those two points of view. It's just a matter of figuring out the details. For now, though, that'll be where we wrap this particular AI Daily Brief. It's a conversation that I'm sure we will come back to over and over again. Appreciate you listening or watching, as always. Until next time, peace.