
Can AI Agents Finally Fix Customer Support?

2024/12/18

AI + a16z

People
Jesse Zhang
Kimberly Tan
Topics
Jesse Zhang: I believe that over time, AI agents will rely more and more on natural language, because that is how large language models (LLMs) are trained. Ideally, a highly capable AI agent works like a skilled employee: it can learn, understand feedback, and update itself, handling all kinds of information and continuously improving based on user feedback. Our goal in building AI agents is for them to work like that skilled employee rather than depend on complex decision trees.

Decagon focuses on building AI agents for customer support, inspired by our own bad experiences on customer-service phone calls. We concentrate on the tooling around AI agents so people can build, configure, and manage them well; our brand is built on thoughtful tooling and support that keeps the agent from being a black box. We moved from consumer products to enterprise software because enterprise problems are more concrete, with real customers, requirements, and budgets, which makes them easier to optimize and solve.

Compared with traditional decision trees, LLMs offer far greater flexibility and personalization in customer support and can handle more complex issues, improving resolution rates and customer satisfaction. LLM-powered agents can pull data in real time, execute actions, and perform multi-step reasoning, which lets them take on more complex user problems; the arrival of LLMs has been a major step forward for customer support.

Decagon defines an AI agent as a system of LLMs working together, chaining multiple LLM calls (even recursively) to deliver a better user experience. Whether an agent can move from demo to production depends on the characteristics of the use case, not the tech stack: its ROI must be quantifiable, for example as the percentage of inquiries resolved, to convince customers to pay, and the use case must be incremental, delivering clear value even when it can't solve every problem perfectly.

How an agent comes across in human interaction is the customer's choice: some want it anthropomorphized, others want its AI identity made explicit. Decagon personalizes agents by integrating user context with business-logic context to improve the user experience. What enterprise customers care about most when deploying agents is guardrails: rule-setting, supervisory models, and detection of malicious behavior. Decagon's core philosophy is empowering users to build and manage agents themselves, including defining their own guardrails, with tools simple enough for non-technical users.

Enterprises can better support AI agents by improving their knowledge-base structure and API design, which raises agents' efficiency and accuracy. In the future, interacting with agents will feel increasingly natural, more like a conversation with a person than a walk through a decision tree. Building a truly production-ready agent is far more complex than a simple GPT wrapper: Decagon sells software, of which the LLM is only one component, and customers buy the whole product, including monitoring, reporting, and feedback. Going to production means solving problems such as hallucinations, malicious attacks, latency, and tone, and many enterprises choose Decagon precisely because they don't want to handle those themselves. Keeping sensitive operations inside deterministic systems effectively reduces an agent's security risk. Enterprises typically run security testing such as red teaming to evaluate agent safety, and Decagon encourages it as a way to surface and fix vulnerabilities; new security certification standards for AI agents may emerge in the future.

As an applied AI company, Decagon has to keep its product roadmap predictable while tracking the latest technical developments. Its software development resembles traditional software development; the main challenge is evaluating and adopting the right LLMs in time. Decagon regularly evaluates new models and switches based on the results, caring more about a model's instruction-following ability than its reasoning ability. Internal eval infrastructure is critical to fast iteration because it lets the team quickly assess the impact of model changes.

Multimodality matters for AI agents, but adoption depends on the technology maturing and on customer demand. With solid tooling and logic already in place, adding a new modality such as voice is not hard for Decagon. Starting with text was sensible because text is easier for customers to accept and monitor; voice agents face higher technical hurdles than text agents, such as latency and naturalness.
Decagon drew strongly positive responses from customers very early on, which was unexpected; that interest is closely tied to the solution's timing and use case. When adopting AI agents, enterprises care more about value and customer satisfaction than about hallucinations. Agents should not be priced on the traditional per-seat license model, because an agent's value depends not on the number of users but on its work output; pricing should therefore be based on output, such as per conversation or per resolution. Decagon charges per conversation because it is simpler and more predictable than charging per resolution, and it avoids some potential incentive problems.

In the future, AI agents in the workplace will significantly increase demand for AI supervisors, who will need the ability to observe, interpret, and build AI logic. In industries with very low tolerance for error, agents may play more of an assistive role than a fully autonomous one.

Kimberly Tan: If an idea seems obvious but has no clear solution, the problem hasn't actually been solved. The amount of customer attention Decagon attracted at an early stage shows enormous market demand for AI-native customer-support solutions. Adoption of AI agents depends on whether their ROI is clearly measurable, and enterprises care more about value and customer satisfaction than about hallucinations.

Derrick Harris: (show host; did not express core viewpoints)

Deep Dive

Key Insights

Why are AI agents gaining popularity in customer support?

AI agents offer higher personalization, flexibility, and the ability to handle complex workflows, which improves customer satisfaction and resolves more inquiries compared to traditional chatbots or decision trees.

What is the difference between a chatbot and an AI agent?

Chatbots rely on predefined decision trees and simple NLP, often leading to frustrating experiences. AI agents, on the other hand, use LLMs to handle complex inquiries, adapt to different situations, and provide personalized support by chaining multiple LLM calls and integrating business logic.
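The contrast above can be sketched in a few lines. This is a toy illustration, not Decagon's implementation; `call_llm` is a hypothetical stand-in for a real LLM API, stubbed here so the flow is visible.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; a real system would hit a model API here."""
    if "classify" in prompt:
        return "refund_request"
    return "Your refund for order 123 has been initiated."

def decision_tree_chatbot(message: str) -> str:
    # Chatbot: a fixed decision tree keyed on keywords, no adaptation.
    if "refund" in message.lower():
        return "Please visit our refund page."
    return "Sorry, I didn't understand that."

def llm_agent(message: str, business_context: dict) -> str:
    # Agent: chains multiple LLM calls, pulling in business data between steps.
    intent = call_llm(f"classify: {message}")
    if intent == "refund_request":
        order = business_context["last_order"]  # business-system lookup step
        return call_llm(f"draft a reply confirming refund for order {order}")
    return call_llm(f"answer: {message}")

print(decision_tree_chatbot("I want a refund"))
print(llm_agent("I want a refund", {"last_order": 123}))
```

The chatbot can only route to a canned page; the agent classifies intent, fetches order data, then drafts a tailored reply — each arrow in that chain is a separate LLM call or system lookup.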

Why do most customers prefer a per-conversation pricing model over per-resolution?

Per-conversation pricing offers simplicity and predictability, as defining what constitutes a resolution can be ambiguous and lead to misaligned incentives. Per-resolution pricing could encourage deflecting difficult cases, which customers dislike.
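The arithmetic behind the three models is simple; a toy comparison with made-up prices (nothing here reflects Decagon's actual rates) shows why value tracks work output rather than headcount:

```python
def per_seat(seats: int, price_per_seat: float) -> float:
    # Traditional licensing: flat cost regardless of work volume.
    return seats * price_per_seat

def per_conversation(conversations: int, price_per_conv: float) -> float:
    # Bills on work volume: simple and predictable for the customer.
    return conversations * price_per_conv

def per_resolution(resolutions: int, price_per_res: float) -> float:
    # Bills on outcomes: requires agreeing on what counts as "resolved".
    return resolutions * price_per_res

print(per_seat(3, 500.0))              # three maintainers, any volume
print(per_conversation(10_000, 0.20))  # scales with conversations handled
print(per_resolution(7_000, 0.30))     # e.g. a 70% resolution rate
```

Per-resolution looks attractive but hinges on the contested definition of a resolution; per-conversation keeps the unit of billing objective, which is the simplicity and predictability the answer above describes.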

What challenges do incumbents face when adopting AI agents?

Incumbents struggle because AI agents cannibalize their traditional seat-based pricing models. They also have less risk tolerance due to their large customer base, making it harder for them to iterate quickly and improve products compared to startups.

What are the key skills needed for an AI supervisor in the future workplace?

AI supervisors will need skills in observability (understanding how AI makes decisions) and decision-making (providing feedback and building new logic). They will also need to monitor AI performance and ensure it aligns with business goals.

How do AI agents handle security concerns in enterprise settings?

AI agents use deterministic APIs for sensitive tasks, reducing the risk of non-deterministic outputs. Enterprises often conduct red teaming to stress-test the system, ensuring it can handle potential attacks or misuse.
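A minimal sketch of that pattern: the (non-deterministic) agent may only *propose* a sensitive action, while a deterministic layer validates and executes it. Function names and the refund cap are illustrative assumptions, not Decagon's actual API.

```python
MAX_REFUND = 50.00  # illustrative hard limit enforced outside the model

def execute_refund(order_id: str, amount: float) -> str:
    # Deterministic system: hard-coded rules, no model output in the loop.
    if amount <= 0 or amount > MAX_REFUND:
        raise ValueError(f"refund {amount} outside allowed range")
    return f"refunded {amount:.2f} on {order_id}"

def handle_agent_action(proposal: dict) -> str:
    # The agent's proposal is only a request; the deterministic API
    # enforces policy no matter what the model emitted.
    if proposal["action"] == "refund":
        return execute_refund(proposal["order_id"], proposal["amount"])
    raise ValueError("unknown action")

print(handle_agent_action({"action": "refund", "order_id": "A-17", "amount": 25.0}))
```

Even a prompt-injected agent that proposes a $500 refund is stopped at the deterministic boundary, which is exactly what red-teaming exercises probe for.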

What is the role of personalization in AI agents for customer support?

Personalization involves tailoring responses to both the user and the specific business logic of the customer. This requires context about the user and access to business systems, enabling the agent to provide a more accurate and relevant experience.

Why is the customer support use case well-suited for AI agents?

Customer support has quantifiable ROI (e.g., percentage of inquiries resolved) and allows for incremental adoption, meaning agents don’t need to be perfect from the start. This makes it easier for businesses to adopt and scale AI solutions.

What are the technical challenges of implementing voice-based AI agents?

Voice agents require lower latency and more natural interaction, which makes them technically more challenging to implement than text-based agents. They also need to handle interruptions and respond in real-time, which adds complexity.

How does Decagon manage the rapid evolution of LLMs?

Decagon evaluates new models whenever they are released, using internal eval infrastructure to ensure they don’t break existing workflows. They focus on instruction-following intelligence, which benefits their use case, even as models improve in other areas like reasoning.
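A bare-bones illustration of that eval loop: run each candidate model over fixed instruction-following cases and gate any swap on the score. `run_model` is a hypothetical stand-in for calling a specific LLM, stubbed with canned responses so the harness itself runs.

```python
def run_model(model: str, prompt: str) -> str:
    """Hypothetical model call; stubbed so the harness is self-contained."""
    canned = {
        "Reply with exactly: OK": "OK",
        "Reply with exactly: DONE": "DONE",
    }
    return canned.get(prompt, "?") if model == "candidate-v2" else "?"

EVAL_CASES = [
    ("Reply with exactly: OK", "OK"),
    ("Reply with exactly: DONE", "DONE"),
]

def evaluate(model: str) -> float:
    # Fraction of cases where the model follows the instruction exactly.
    passed = sum(run_model(model, p) == want for p, want in EVAL_CASES)
    return passed / len(EVAL_CASES)

print(evaluate("candidate-v2"))  # 1.0 — passes every case
print(evaluate("candidate-v1"))  # 0.0 — would break existing workflows
```

A real harness would cover production workflows rather than toy prompts, but the shape is the same: a fixed case set makes the impact of a model change measurable within hours of a release.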

Shownotes

In this episode of the AI + a16z podcast, Decagon cofounder/CEO Jesse Zhang and a16z partner Kimberly Tan discuss how LLMs are reshaping customer support, the strong market demand for AI agents, and how AI agents give startups a new pricing model to help disrupt incumbents.

Here's an excerpt of Jesse explaining how conversation-based pricing can win over customers who are used to traditional seat-based pricing:

"Our view on this is that, in the past, software is based per seat because it's roughly scaled based on the number of people that can take advantage of the software.

"With most AI agents, the value . . . doesn't really scale in terms of the number of people that are maintaining it; it's just the amount of work output. . . . The pricing that you want to provide has to be a model where the more work you do, the more that gets paid.  

"So for us, there's two obvious ways to do that: you can pay per conversation, or you can pay per resolution. One fun learning for us has been that most people have opted into the per-conversation model . . .  It just creates a lot more simplicity and predictability.

. . .

"It's a little bit tricky for incumbents if they're trying to launch agents because it just cannibalizes their seat-based model. . . . Incumbents have less risk tolerance, naturally, because they have a ton of customers. And if they're iterating quickly and something doesn't go well, that's a big loss for them. Whereas, younger companies can always iterate a lot faster, and the iteration process just inherently leads to better product. . .  

"We always want to pride ourselves on shipping speed, quality of the product, and just how hardcore our team is in terms of delivering things."

Learn more:

RIP to RPA: The Rise of Intelligent Automation

Big Ideas in Tech for 2025

Follow everyone on X:

Jesse Zhang

Kimberly Tan

Derrick Harris

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.