AI agents offer greater personalization and flexibility, and they can handle complex workflows, which lets them resolve more inquiries and improve customer satisfaction compared with traditional chatbots or decision trees.
Chatbots rely on predefined decision trees and simple NLP, often leading to frustrating experiences. AI agents, on the other hand, use LLMs to handle complex inquiries, adapt to different situations, and provide personalized support by chaining multiple LLM calls and integrating business logic.
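As a rough illustration of what "chaining multiple LLM calls and integrating business logic" can look like in practice, here is a minimal Python sketch; `call_llm` and `lookup_order_status` are hypothetical stand-ins for illustration, not any specific vendor API:

```python
# Minimal sketch of an agent that chains two LLM calls around business logic.
# `call_llm` and `lookup_order_status` are hypothetical stand-ins, not a real API.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned string so the sketch runs."""
    return f"<model output for: {prompt[:40]}...>"

def lookup_order_status(customer_id: str) -> str:
    """Placeholder for a deterministic lookup in an internal business system."""
    return "shipped"

def handle_inquiry(customer_id: str, message: str) -> str:
    # Call 1: classify the inquiry into an intent.
    intent = call_llm(f"Classify this support message into an intent: {message}")

    # Business logic: pull ground-truth data from the customer's own systems.
    order_status = lookup_order_status(customer_id)

    # Call 2: draft a reply grounded in the intent and the retrieved data.
    return call_llm(
        f"Intent: {intent}. Order status: {order_status}. "
        f"Write a helpful, personalized reply to: {message}"
    )

print(handle_inquiry("cust_42", "Where is my order?"))
```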
Per-conversation pricing offers simplicity and predictability. Per-resolution pricing, by contrast, is harder to define, since what counts as a resolution can be ambiguous, and it can create misaligned incentives, such as encouraging the agent to deflect difficult cases, which customers dislike.
Incumbents struggle because AI agents cannibalize their traditional seat-based pricing models. They also have less risk tolerance due to their large customer base, making it harder for them to iterate quickly and improve products compared to startups.
AI supervisors will need skills in observability (understanding how AI makes decisions) and decision-making (providing feedback and building new logic). They will also need to monitor AI performance and ensure it aligns with business goals.
For sensitive tasks, AI agents route actions through deterministic APIs rather than relying on free-form model output, which keeps non-deterministic behavior away from critical operations. Enterprises also often conduct red teaming to stress-test the system and ensure it holds up against potential attacks or misuse.
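One common way to apply that pattern is to have the model only propose a structured action, while the sensitive operation itself runs through deterministic, validated code. The sketch below uses hypothetical names (`issue_refund`, `REFUND_LIMIT`) purely for illustration:

```python
# Sketch: the LLM proposes a structured action, but the sensitive operation
# itself goes through a deterministic, validated code path.
# `issue_refund` and REFUND_LIMIT are hypothetical names for illustration.

REFUND_LIMIT = 100.00  # hard business rule enforced outside the model

def issue_refund(order_id: str, amount: float) -> str:
    """Placeholder for a deterministic backend call (e.g., a payments API)."""
    return f"refund of ${amount:.2f} issued for {order_id}"

def execute_action(action: dict) -> str:
    # Validate the model's proposal against deterministic rules before acting.
    if action.get("type") != "refund":
        return "escalated to a human agent"
    amount = float(action.get("amount", 0))
    if amount <= 0 or amount > REFUND_LIMIT:
        return "escalated to a human agent"  # out-of-policy amounts never auto-execute
    return issue_refund(action["order_id"], amount)

# The agent's LLM output is parsed into a structured action like this:
proposed = {"type": "refund", "order_id": "ord_123", "amount": 49.99}
print(execute_action(proposed))
```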
Personalization involves tailoring responses to both the user and the specific business logic of the customer. This requires context about the user and access to business systems, enabling the agent to provide a more accurate and relevant experience.
Customer support has quantifiable ROI (e.g., percentage of inquiries resolved) and allows for incremental adoption, meaning agents don’t need to be perfect from the start. This makes it easier for businesses to adopt and scale AI solutions.
Voice agents require lower latency and more natural interaction, which makes them technically more challenging to implement than text-based agents. They also need to handle interruptions and respond in real-time, which adds complexity.
Decagon evaluates new models whenever they are released, using internal eval infrastructure to ensure they don't break existing workflows. They focus on gains in instruction-following, which benefit their use case most, even as models also improve in other areas such as reasoning.
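A regression-style eval of the general kind described above might look like the following sketch, where `call_model` and the test cases are hypothetical placeholders rather than Decagon's actual infrastructure:

```python
# Sketch of a regression-style eval: run existing workflow test cases against a
# candidate model and flag any regressions before switching over.
# `call_model` and TEST_CASES are hypothetical stand-ins.

def call_model(model_name: str, prompt: str) -> str:
    """Placeholder for calling a specific model version."""
    return "refund_request" if "refund" in prompt else "other"

TEST_CASES = [  # (prompt, expected intent) pairs drawn from real workflows
    ("Where is my refund?", "refund_request"),
    ("How do I reset my password?", "other"),
]

def evaluate(model_name: str) -> float:
    passed = sum(
        1 for prompt, expected in TEST_CASES
        if call_model(model_name, prompt) == expected
    )
    return passed / len(TEST_CASES)

baseline = evaluate("current-model")
candidate = evaluate("new-model")
print(f"baseline={baseline:.0%} candidate={candidate:.0%}")
if candidate < baseline:
    print("candidate model regresses existing workflows; do not switch")
```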
In this episode of the AI + a16z podcast, Decagon cofounder/CEO Jesse Zhang and a16z partner Kimberly Tan discuss how LLMs are reshaping customer support, the strong market demand for AI agents, and how AI agents give startups a new pricing model to help disrupt incumbents.
Here's an excerpt of Jesse explaining how conversation-based pricing can win over customers who are used to traditional seat-based pricing:
"Our view on this is that, in the past, software is based per seat because it's roughly scaled based on the number of people that can take advantage of the software.
"With most AI agents, the value . . . doesn't really scale in terms of the number of people that are maintaining it; it's just the amount of work output. . . . The pricing that you want to provide has to be a model where the more work you do, the more that gets paid.
"So for us, there's two obvious ways to do that: you can pay per conversation, or you can pay per resolution. One fun learning for us has been that most people have opted into the per-conversation model . . . It just creates a lot more simplicity and predictability.
. . .
"It's a little bit tricky for incumbents if they're trying to launch agents because it just cannibalizes their seat-based model. . . . Incumbents have less risk tolerance, naturally, because they have a ton of customers. And if they're iterating quickly and something doesn't go well, that's a big loss for them. Whereas, younger companies can always iterate a lot faster, and the iteration process just inherently leads to better product. . .
"We always want to pride ourselves on shipping speed, quality of the product, and just how hardcore our team is in terms of delivering things."
Learn more:
RIP to RPA: The Rise of Intelligent Automation
Follow everyone on X:
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.