
How AI Agents Are Transforming Customer Support, with Decagon’s Jesse Zhang

2025/1/16

No Priors: Artificial Intelligence | Technology | Startups

People
Jesse Zhang
Topics
Jesse Zhang: I believe AI agents have the strongest application prospects in customer interactions and customer service, because large language models are very well suited to this kind of task. Decagon's AI agents place particular emphasis on transparency, letting enterprise customers understand the agent's decision-making process, the data it uses, and where its answers come from. The benefits of AI agents are saving labor costs and time while improving customer satisfaction, which in turn lifts retention and conversion. Our case study with Bilt Rewards shows that AI agents can let a company stop scaling its support team and ultimately reduce support headcount while improving the customer experience. Decagon's core competitive advantage is the software layer built on top of AI models, including model orchestration and various software tools that ensure transparency and controllability. For Decagon's AI agents, instruction-following ability matters more than quantitative reasoning ability. Decagon's AI agents support multiple communication channels, including voice and text, to meet different customers' needs. Latency in voice models is still a problem, but it can be mitigated with various techniques, such as streaming and voice-to-voice models. Many people from math-competition backgrounds are flooding into AI startups, which has to do with startups becoming mainstream and with the community ties among these people. Decagon and other AI startups informally help and support one another. A math-competition background plays some role as a signal in Decagon's hiring, but it is not a decisive factor. In the coming years, I look forward to progress in multimodal AI models and the spread of AI agents across use cases, and I am watching how human work shifts toward supervising and editing AI agents. An AI agent's success depends on being easy to roll out incrementally and on having a quantifiable return on investment.


Chapters
Jesse Zhang, co-founder of Decagon, discusses his company's AI-powered customer support platform. Decagon's platform is already used by major companies like Rippling and Notion, significantly impacting their customer service operations and improving customer satisfaction. The platform boasts tangible benefits such as cost savings through reduced headcount and increased efficiency.
  • Decagon provides AI-powered customer interactions for large enterprises and startups.
  • The platform has resulted in significant cost savings and improved customer satisfaction for clients.
  • A case study with Bilt Rewards showed a reduction of around 65 support agents.

Transcript


Hello and welcome to No Priors. Today I'm talking to Jesse Zhang, co-founder of Decagon. Decagon is an early stage company building enterprise-grade generative AI for customer support.

Founded in August of 2023, their platform is already being used by large enterprises and fast-growing startups like Rippling, Notion, Duolingo, ClassPass, Eventbrite, Vanta, and more. Jesse, welcome to No Priors. Of course. Thanks for having me, Elad. Absolutely. Maybe we can start a little bit with sort of your background and what Decagon does. You're a serial founder. You started another company before this, one that Niantic bought.

And, you know, now you and Ashwin have started Decagon and you've been working on it for a while and have seen some really interesting adoption from companies like Rippling, Notion, Eventbrite, Vanta, Substack and many others. Right. So you've really started to carve out a real space for the company. Can you tell us a little bit more about what Decagon does, how it works, what the focus is of the company?

Of course, yeah. So quick background on me. Grew up in Boulder, did a lot of math contests, stuff like that growing up. Studied CS at Harvard. As you mentioned, I started a company right out of school. That company was eventually bought by Niantic, and then I left to start this company. Ashwin and I met through mutual friends; we officially met at this VC offsite. And when we got together, we were like, okay,

Biggest learning from the first company is that you can't really overthink things too much. We started by just kind of obviously being interested in AI agents. It's very exciting technology, arguably like the coolest thing from this generation. And we just talked to a bunch of customers like the ones you listed. We, I think over the years have gotten a lot better at figuring out how to talk to folks and what questions to ask.

And through that process, we kind of arrived at our current use case as maybe what we think is the golden use case for these agents, which is customer interactions, customer service. The use case is very tailor-made for what LLMs are good at. And so we started building from there. Right. And we still weren't thinking too much about, you know, the vision or anything yet. It's just like, all right, we had a lot of customers in front of us. How can we make it so that they're happy and they really like what we're building?

And then that led to kind of where we're at now. I would say right now as a company, Decagon, we ship these AI agents for folks to use on the customer service, customer experience side. The thing that's made us special so far is we have a huge focus on transparency. So when people use us, especially these larger companies, it's very important for them that the agent is not a black box,

that they feel like, okay, even though LLMs are cool and there's a lot of things you can do with them, that they can see how decisions are being made, what data is being used, how do you come up with answers, and if I want to get feedback, I can, that sort of thing. So currently we're in production with a bunch of these large folks that have large support teams.

So pretty much any company that has a large sizeable support operation is a good fit for us. That makes a lot of sense. It's interesting because I feel like one of the things that's been really striking over, say, the last year in the AI world is the CEO of Klarna posted on X or tweeted about the impact that AI has had on their customer support or service team. And Klarna is sort of like a buy now, pay later service out of Europe. And

You know, his tweet basically said in the first four weeks, they handled 2.3 million customer service chats. The customer satisfaction was on par with humans. There was a 25% reduction in repeat inquiries relative to people. It resolved customer errands or issues in two minutes versus 11 minutes for a human agent. And instantly they were live 24-7 in 23 markets and 35 languages because AI supports so many things. And so, yeah,

you know, had a huge impact on that company. And I think they sort of shifted 700 full-time agents to do other work, right? In terms of the impact of Klarna itself as an organization. What sort of impact have you been seeing with your customers as they adopt this sort of technology? And how do you think through the lens of, you know, what you're really bringing to these customers and the sort of satisfaction that their own

end users have. It's an interesting way to think about it, which is, you know, all these people are shipping this use case, right? There's a lot of evangelists out there, which is nice. The Klarna article is awesome. There's a lot of tailwinds for the industry. And I think one interesting thing we've seen is that the benefits that people get are all roughly in the same vein, but different people prioritize different themes.

And so at this point, it's not really even that much of a hot take to say that in a couple of years, these agents are going to be super pervasive. People can use them for all these customer interactions. They're going to be everywhere. And so to your point, what is the benefit? So for our customers, it's always the same. One, what fraction of the total work, in this case conversations, can the agent do? So how much work is this saving us?

And then two, how much happier are our customers? What's the customer satisfaction score, the NPS score? Those two are often just the leaders by far. As I said before, different people maybe value each one slightly differently.

And then there's kind of other things like, OK, well, we want to make sure that there's accuracy, right? Like if we're in a regulated industry, this has to be very accurate for us. So those are kind of where the benefits lie. It's like we're saving a bunch of money. We're saving kind of time and resources. But also on the other side, we're kind of making the customers happier. And so that can lead to higher retention, more conversions. And it's kind of a lot more upside there. It's like you're giving every customer a personal concierge, basically, in their pocket.

that they can chat with any time, in any language, 24/7. And that can be pretty transformational for a lot of businesses. Is there any example customer that you could talk about as a case study in terms of the impact this has had, how it's lifted their metrics, the success they've seen using Decagon? Of course, yeah. So we just did a big case study with a company called Bilt Rewards. Great use case for us. They have a very large user base, growing very quickly. You're actually using it to either earn points or make payments.

A lot of my friends use the product. And then as a result, because you have a large customer base, people have questions, people will have things that they need help on. And so the number of support inquiries basically grows linearly with the number of users. And because they're growing so fast, basically exponentially, that means the number of support queries is also growing exponentially. So when they first started using us, that was the main goal. It's like, holy crap, we're getting overwhelmed by all this volume. Can you help us?

And so the thing that ended up happening there was, yeah, within basically a month of starting to use us, they were able to stop scaling their team. The AI took over a lot of the volume, and the automation just makes everything very smooth. And now, basically, we're almost a year in at this point. They've been able to really restructure their customer support team.

And again, we published a case study on this where they were able to quantify, okay, what are the savings, right? And so far it's around 65 agents' worth of headcount.

So very tangible difference. And then for us, it's also great because we're able to provide them the value there. It's like a very easy ROI. But the customer experience is also a lot snappier. And they get a lot of social media posts about like, holy crap, I just tried the reward support thing. And it doesn't feel like any AI or chatbot system we've ever used before. So that makes us happy.

Could you tell me a little bit more about what you've built from a technology and infrastructure perspective? So I guess there's the core models that anybody can access, right? The GPT-4s of the world, the Claude Sonnets, et cetera. And then there's all the stuff you've built on top of it to actually make this work well for your specific use case and for customer support agents. Could you tell us a bit more about what you all have had to build over time? Of course. Like you said, everyone has the same access to the same models.

We see ourselves very much as a software company. And we're obviously doing a lot of work around AI and using the AI models a lot. But I would argue that most applications nowadays are real software companies, and AI models are kind of tools that everyone can use. And so most of the sort of alpha, most of the special stuff that you build, is on top of the models. It's the orchestration layer or the software around it.

For us, there's been a big focus on both. The orchestration layer is kind of how you can use all these different models together. You probably have evals set up that measure how good each model is at certain things. You put them together and the whole goal of putting them together is to mold it around the business logic of the customer. That's part one.
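As a concrete illustration of that orchestration idea, here is a minimal sketch: route each step to whichever model scored best on your evals for that task, then wrap the calls in customer-specific business logic. The model names, eval scores, and helper functions are hypothetical illustrations, not Decagon's actual stack.

```python
# Hypothetical orchestration layer: per-task model routing driven by eval
# scores, wrapped in per-customer business rules. Illustrative only.

# Offline eval scores per (model, task) pair, e.g. from a labeled test set.
EVAL_SCORES = {
    ("model-a", "intent_classification"): 0.92,
    ("model-b", "intent_classification"): 0.89,
    ("model-a", "answer_drafting"): 0.88,
    ("model-b", "answer_drafting"): 0.91,
}

def pick_model(task: str) -> str:
    """Choose the model with the best eval score for this task."""
    candidates = {m: s for (m, t), s in EVAL_SCORES.items() if t == task}
    return max(candidates, key=candidates.get)

def call_model(model: str, prompt: str) -> str:
    raise NotImplementedError  # stub: a real LLM API call goes here

def handle_turn(message: str, business_rules: dict) -> str:
    intent = call_model(pick_model("intent_classification"),
                        f"Classify the intent of: {message}")
    # Mold the flow around the customer's business logic, e.g. certain
    # intents must always be escalated to a human.
    if intent in business_rules.get("always_escalate", set()):
        return "ESCALATE_TO_HUMAN"
    return call_model(pick_model("answer_drafting"),
                      f"Draft a support reply for: {message}")
```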

The other thing you build is just very classic software, right? You have this AI agent there, and it's all the things I was saying before: transparency is a big piece. To me, you really don't want this to feel like a black box that's just there answering questions.

So how can you build out the tooling to see, okay, what's the data that the agent's using? What steps is it taking? Can I analyze all these conversations that are coming in? If you have a million conversations, no one's reading all of those. So how can you make it so that the AI, the LLM, can read every single conversation, tell you how things are going, find gaps in its knowledge, and give you a breakdown of, okay, here are the big categories you should care about?
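For a sense of what that analysis tooling could look like, here is a small sketch that has an LLM read each conversation, tag a category, and flag knowledge gaps, then aggregates a breakdown. The prompt, category list, and llm() helper are assumptions for illustration, not a real product API.

```python
# Sketch of LLM-driven conversation analysis: categorize every conversation
# and surface knowledge gaps. The llm() helper and categories are made up.
import json
from collections import Counter

def llm(prompt: str) -> str:
    raise NotImplementedError  # stub: real LLM call returning JSON text

TRIAGE_PROMPT = """Read this support conversation and return JSON with:
- "category": one of ["billing", "shipping", "account", "other"]
- "knowledge_gap": true if the agent lacked the info needed to answer

Conversation:
{conversation}"""

def analyze(conversations: list[str]) -> dict:
    categories: Counter = Counter()
    gaps: list[str] = []
    for convo in conversations:
        result = json.loads(llm(TRIAGE_PROMPT.format(conversation=convo)))
        categories[result["category"]] += 1
        if result["knowledge_gap"]:
            gaps.append(convo)
    # A breakdown a support lead can act on, without reading every ticket.
    return {"breakdown": categories.most_common(), "knowledge_gaps": gaps}
```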

So that's all the software around it that we're building. And that's typically how it's structured. And the orchestration layer, I think it's going to be different for every agent, right? Like our agent versus like a coding agent, that orchestration is going to look pretty different. But at the end of the day, you're kind of just building a sort of a structure on top of the LLMs. Yeah. It seems like we're very early in the days of true agentic stuff. And that includes the ability to sequence things

chains of events that include certain forms of reasoning. Obviously, there's things like o1 and other models that have been coming out to start to address this, but we seem quite early in the scaling curves. What do you think are the main pieces of technology that are missing to really take your sort of vision to the next level in terms of how these agentic systems should work? Yeah, so one thing we were talking about the other day is there's actually different types of intelligence with the AI models, and a lot of the recent developments with o1 or

Sonnet and stuff like that has been around, I guess, quantitative reasoning intelligence. So they've gotten better at coding, they've gotten better at math. And for us, those things help, but they're actually not the biggest difference maker. In our use case, the type of intelligence that matters the most is what we would probably describe as instruction following. You just have a bunch of instructions: can you follow them to a T? And I'm sure there are other types as well, but...

For us, we're excited to see developments in the other areas too. And everyone's saying, oh, there's a plateau happening with the core models and the intelligence. I think when most people say intelligence like that, they're probably talking about the reasoning capabilities. For us, and the agentic flows that we use, instruction following is a huge piece, because you have to,

you know, just think about a customer service SOP or a playbook, a workflow, something like that. You just have to be very accurate about it. And I know there's research going on about this in the major labs, and I think that's one thing we're looking forward to next year. One other area that really touches on customer success and customer support and sort of user experience is voice-based support.

And I think one of the things that's a little bit under-discussed in the AI world, because we keep talking about large language models and understanding of text and all, and obviously that stuff is crucial to everything else, but I feel like we almost under-discuss text-to-speech engines and the ability to understand spoken word and then respond with

audio, right? And so there's companies like Cartesia, ElevenLabs, OpenAI, Google, et cetera, who are starting to provide some of these services and APIs. How much of an impact does that have on what you're doing? Or is that a separate type of product? How do you think about the voice component of these things? Great question. A huge impact. So we have customers now trying our voice agents. And if you just think about our space, right, you have

The overall problem is the same, which is you have a bunch of customers. They have questions or issues or things they need to talk about. And the channel really doesn't matter to them. Some people prefer voice. Some people prefer chat. Some people prefer email. Some people prefer SMS or something like that. And so our job is to handle all of those. And obviously you start with text because that's the easier one, and it's easy for the customer to evaluate as well.
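A minimal sketch of that channel-agnostic idea: one agent core with a thin adapter per channel (chat, email, SMS, voice), so the same logic serves every channel. The class and method names here are hypothetical, not a real Decagon API.

```python
# One agent core, many channel adapters. Adapter details are stubs.
from abc import ABC, abstractmethod

class ChannelAdapter(ABC):
    """Normalizes one channel's transport to a common interface."""

    @abstractmethod
    def receive(self) -> str:
        """Return the next inbound customer message as plain text."""

    @abstractmethod
    def send(self, reply: str) -> None:
        """Deliver the agent's reply over this channel."""

class ChatAdapter(ChannelAdapter):
    def receive(self) -> str:
        return "Where is my order?"  # stub; a real adapter reads the socket

    def send(self, reply: str) -> None:
        print(f"[chat] {reply}")     # stub; a real adapter posts to the widget

def handle_turn(adapter: ChannelAdapter, agent_reply) -> None:
    # Same agent logic regardless of channel; only the transport differs.
    message = adapter.receive()
    adapter.send(agent_reply(message))

# Usage: handle_turn(ChatAdapter(), lambda m: f"Happy to help with: {m}")
```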

I think just now you're getting to the point where you have big companies that are very interested in voice. They've actually seen the results of a text-based agent and they're like, OK, we should be able to generate voices and do the same thing for phone calls. None of this would be possible without the models that you just listed, right? And those companies: ElevenLabs, OpenAI is doing some cool stuff, Cartesia.

And I think there have also been huge strides this year with those models around how realistic the voices sound. Also, latency matters a lot in our use case, because if you're making a phone call, you expect things to feel very snappy. So, yeah, big, big topic for us. And as these companies get better, I mean, we're working with them pretty closely right now on

how you can actually build these things well at scale. But as they get better, that's also going to be huge for us to keep delivering these voice agents. Makes sense. Yeah, my sense is one of the issues is latency in terms of it takes enough time to take an audio stream or somebody's talking, translate that into text, feed that into a language model, and then output it as voice again.

that it feels there's a lot of pauses or people have to kind of wait. And there's different things that people have been trying to do in the background, like streaming the potential solutions back out and then being able to try and shorten that latency timeline. Do you feel latency is still an issue or is it just solved by integrating voice directly into the models in a deeper way for some of these services? Or when do you think latency becomes a solved problem for these sorts of application areas?

I mean, latency is a big deal here, of course, with voice models. So nowadays, you have the voice-to-voice models that we're playing around with. OpenAI is doing a lot of work here. I think there's obviously a lot of trade-offs there. Voice-to-voice latency is great. Sometimes, though, with these production use cases, you do need the extra computation cycles to fetch data, do multiple model calls,

or there might be other reasons that you can't do voice-to-voice. So, okay, that's one option you would consider. The other one is the one you described, where you're transcribing, doing speech-to-text, then doing all the computation in text, and then generating the voice at the end.

That always causes a little bit of extra latency, of course. And so, as you mentioned, a lot of folks have figured out fairly clever ways to get around that. You can start generating stuff first. In our use case, you can always do something like, hey, give me a sec. I'm looking up your data. So these are all things we're playing around with.
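As a rough illustration of that cascaded pipeline and the "give me a sec" trick, here is a hedged sketch: if the slow step (data fetches plus model calls) exceeds a small latency budget, play a short filler utterance so the caller never hears dead air. Every function here is a placeholder, not any specific vendor's API.

```python
# Sketch of a speech-to-text -> LLM -> text-to-speech turn with a filler
# utterance to mask latency. All provider calls are placeholder stubs.
import asyncio

async def transcribe(audio: bytes) -> str:
    raise NotImplementedError  # stub: call your STT provider here

async def think(text: str) -> str:
    raise NotImplementedError  # stub: LLM call(s) plus any data fetches

async def speak(text: str) -> bytes:
    raise NotImplementedError  # stub: call your TTS provider here

FILLER_AFTER_SECONDS = 1.0  # latency budget before we buy time

async def handle_call_turn(audio: bytes, play) -> None:
    text = await transcribe(audio)
    reply_task = asyncio.create_task(think(text))
    # Wait briefly; if the reply isn't ready, play a filler phrase while
    # the model keeps working in the background.
    done, _ = await asyncio.wait({reply_task}, timeout=FILLER_AFTER_SECONDS)
    if not done:
        await play(await speak("Give me a sec, I'm looking that up."))
    await play(await speak(await reply_task))
```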

I think for each customer that we work with, there's different trade-offs. And so we're really trying to base what we build on the things that we're hearing from them and the sort of priorities that they have. That's cool. One thing that I think is kind of interesting is the number of companies in the AI world today that have been founded by people with Math Olympiad or IOI or other sort of backgrounds, right? And I think you were...

sort of involved with math Olympiad stuff in high school. I think Decagon has actually hosted some math Olympiad events for the team, which isn't like your typical happy hour. But there are other teams and companies. I mean, before that there was Ramp and things like that, but I think the Braintrust team and the Pika team and

Cognition, which launched Devin, and then you all kind of have that common thread. Where do you think that comes from? Why do you think this community is now so active in AI? That's a good question. I mean, we're actually all around the same age as well. So we've known each other since middle school, high school. One, it's a great community. For us, we have a lot of people on the team with math contest, coding contest backgrounds.

I think it's more so that this community was always there. Math contests have been around for a while, and a lot of super smart kids go through them. It's also a great way for folks to get to know each other, get connected, and build friendships. And I think the main thing is that now in the last few years, maybe the last five, six years, there's just...

Because startups have become a lot more mainstream, a lot of folks in this demographic have gravitated towards startups, as opposed to traditionally it would be either academia or quant trading and things like that. So there's just this big influx of these super smart, super talented people coming into the startup world. And because there's this community aspect, folks can see what other people are doing, what sort of works, and the types of companies that people are building.

I wouldn't say they're all the same, but I think a lot of folks with these backgrounds are now working on startups. And that's why there's a lot of, I guess, progress in the companies that folks have been building. And are there ways that you all have been supporting each other through the startup journey? Because I feel like every generation, there's sort of a clique of people who built some of the more interesting companies who all kind of interact together.

They provide advice, maybe they angel invest in each other. Like there's kind of a thriving community and every five to seven years it kind of shifts who it is. And I feel like, you know, the IOI sort of math Olympiad community or coding competition communities are kind of very engaged right now. Is there any formal version of that or are you all just kind of informally helping each other?

Yeah, I angel invest in a lot of the companies you just listed. A lot of their founders are angel investors in our company. It's very informal, obviously. It's just casual friends helping each other. I think the main thing is that with company building, there's just a lot of surface area, right? As you know, it's just like, how do you hire people? How do you do sales? How do you build this thing? How do you structure comp? I don't know. There's infinite things.

So, yeah, having the other data points is obviously super helpful. So I hang out with them quite often, play games, play card games. There's a Chinese version of bridge that I play with a lot of these folks quite often. And it's just fun; you just kind of hang out. Everyone's at relatively the same stage of life. And so, yeah, like you said, there is definitely a lot of camaraderie and help that goes around.

Has coming from this background, from the sort of Math Olympiad community, impacted at all how you think about hiring or your hiring practices at Decagon? A little. I mean, if someone else has the same background and has gone through the same contests or programs, obviously that is a pretty good signal, since I have a good idea of what those people have done. My co-founder, Ashwin, has a similar background. He didn't grow up in the US, but in India he did a lot of these contests as well. And so

yeah, I think there's some correlation with people who, as kids, just did a lot of this stuff. And now we're all adults, and there's some sort of signal there when you're talking about hiring. But for the most part, there are so many talented people out there, whether you did math contests or not,

that at Decagon, as at other companies, I think our hiring process has been more or less the same. It is a nice sort of trigger for events, I guess. So when you host these events, people come out, and you can get a nice community of folks that are interested in the same things. And we're probably going to be hosting more. Not all of them are going to be contest-based, obviously; some will be things like puzzles, where you just get a lot of fun engineers and people bringing their friends. And that's pretty important to us.

And then I guess for AI writ large, what are you most excited about in the coming years? Or if you were to extrapolate out 12 to 24 months, what are you anticipating most keenly or what are you waiting for?

So obviously, the models getting better is awesome. The models getting better across different modalities, also awesome. We talked about voice. There are other modalities that are also tangentially interesting to us. A lot of our customers have software products. And so it'd be awesome if, when you're asking questions to an AI agent, it has the context of your entire screen and all the interactions you've done, stuff like that. That would be great. And you can even go a step further and have it

actually help you navigate stuff. So there's just so much you can do there with the other modalities, or even just more advanced model capabilities. We've seen the computer use demo from Anthropic; probably, in my opinion, not production-ready yet, but as that gets better, there are a lot of cool things you can do there. So on the model side, that's one thing we're excited about. On the non-core-model side, I think one thesis we have is,

as the years go by, again, with AI agents, I think at this point it's undeniable that there's going to be a real explosion of them across a bunch of different use cases. I think some use cases will take longer than others, but the value they're providing is pretty undeniable. So there are definitely going to be a lot of AI agents out in the world, in our use case, customer service, and in other use cases.

But one thesis we have is that the nature of the work of human agents and people like us is also going to change pretty drastically. One of the things that's going to change is that there are going to be a lot more people supervising and editing agents. And so that's something we think about. We're excited for a lot of the innovations there, because right now, like I said before, a big part is we care about letting

the human agents at our customers, and their leadership teams, go in and make changes, monitor the agents, and just have a lot of visibility and control. And what does that look like, right? If you compare it to a human: if you're monitoring a human, you can give them feedback in real time. You can be like, oh, no, no, don't do this. You did this thing wrong. Please do this next time.

When you're doing that with the AI, there are a lot of different possibilities, because they have some things that are different from humans, right? They're infinitely scalable, and you can really hard-code things sometimes. So that's the other area, probably going into next year, that we're looking forward to. That's really cool. And do you view that as a main area of differentiation for you relative to some of the other folks in the market providing customer success and support?

Yeah, right now that's probably the biggest thing. The interesting thing about our space, and I think this will probably be true for a lot of AI agent spaces, is that the results are very quantifiable. You're basically taking the agent and benchmarking it against: okay, how good would a human be? How much money is this saving me? How much better is the quality of the customer experience?

And so because of that, when people evaluate us in our space, it's a pretty quantitative evaluation. You're like, OK, cool, this kind of works. Let me just put you into production for 1% of the volume and build up from there, and maybe do that with another option. Or, you know, a lot of the old-school companies like Salesforce see this as a very exciting space for them too, so they're going to have alternatives. And then you just benchmark everyone, right? How good are the stats? How good are the metrics? How good of a job is everyone doing?
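That "1% of the volume" pattern is easy to picture in code. Below is a minimal, hypothetical sketch: deterministically bucket each conversation so a fixed fraction goes to the AI agent and the rest to humans, which keeps the two cohorts stable and easy to benchmark on the same metrics. Parameter names are illustrative.

```python
# Stable, hash-based rollout gate: the same conversation always lands in
# the same cohort, so AI-vs-human comparisons stay clean as you ramp up.
import hashlib

def route_to_ai(conversation_id: str, ai_fraction: float = 0.01) -> bool:
    """Return True if this conversation should go to the AI agent."""
    digest = hashlib.sha256(conversation_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return bucket < ai_fraction

# Usage: ramp ai_fraction from 0.01 toward 1.0 as the metrics hold up.
```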

And I think so far we've been performing very well. And the main reason for that is this transparency piece: giving people observability, explainability, and control over the AI. And there's still a long way to go in that field, right? There's still so much more you could do. And that's been our specialty so far. That's great. And I've had some conversations with your customers over time; people who have been trying some of these agents have called me to ask questions about different companies in the space and everything else. And the three things they tend to point out are: you all ship really fast.

You're very responsive as a team and company. But third and most importantly, the product tends to outperform. And so I think that's really been great to watch over time. How do you think about the areas where AI agents are going to be successful versus not successful in the short run? So basically, one of the things that we have been thinking through, and this is something that was pretty big for us when we were first starting out,

is that there's going to be a huge variance between the different types of agents, how successful they'll be, and how quickly they'll be rolled out. Because when we were first starting the company, we were pretty open about what to build, and we knew that agents were exciting. At that point, we didn't even know if there would be any real use cases that would emerge, even in the next 12 or 24 months. But we were kind of exploring.

I think our view is that for the vast majority of use cases right now, there's not going to be real commercial adoption with the state of the current models, because of a bunch of things. One big thing is that in a lot of spaces, there's really no structure there to incrementally build up. It has to be good, almost perfect, off the bat.

So if you think about a space like security or something like that: OK, you have all these SIEMs out there, and it makes sense. There are tons of logs; that's perfect for AI models. But the goal of that job is that you need to catch any small thing that happens. And because the models are inherently non-deterministic, it's very hard for buyers to really trust a gen AI solution there,

and especially an agentic solution. So I think adoption there is going to be really, really slow, a lot slower than people think. Even though people have cool demos and things seem to work, just getting to real enterprise adoption can be very slow. So that's one interesting thing we've been thinking about. And the other side of that is that there are also a lot of spaces where, on the surface, it seems like, oh, yeah, AI would be perfect here.

But then the follow-up is that it's actually not that easy to quantify the ROI that's happening. One example I would give is text-to-SQL companies, stuff like that, where you can kind of see it working. But basically, immediately everyone's reaction is: oh, this is cool, but we're still going to have to have someone monitoring it and editing it. And so it becomes kind of a copilot.

Okay, cool. So then how do we measure how much we should pay for one of these agents? It's very difficult, because most teams don't have that many data scientists anyway. And so if you're claiming that you have an AI agent data scientist, it's like, okay, let's benchmark you against a real one. You're probably not going to be able to replace a real one. So I think that's the sort of thing where it's very hard to quantify the ROI. You're saving some people time, but

because of that, if I'm a large company, it's hard for me to justify, okay, I'm going to give you a large contract for this AI agent data scientist. So those are the things we were thinking through. Well, we weren't really thinking it through; in the moment, we were obviously just asking customers what their willingness to invest in certain things was. But in hindsight, looking back on the last year, that's been a big

thing that's been true, which is that the use cases that emerge have to have those two qualities. It has to be something that can be rolled out slowly, doesn't have to be perfect off the bat, but is already providing value. I think coding agents are a good example of this, where you can just section off some tasks for them and they'll do it.

And the other piece is the ROI: you have to be able to easily quantify the ROI. In our case, luckily, you have the support agent teams, and people track those metrics very closely. So that's something we've been thinking about. I think the takeaway is that we're probably more bearish on a lot of these AI agent use cases in the near term. But I think as these models get better, they'll unlock a lot of new use cases.
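That ROI point is concrete in support because teams already track ticket volume and cost per ticket, so a back-of-the-envelope calculation falls out directly. The numbers below are invented purely for illustration.

```python
# Toy ROI calculation for AI-handled support. All figures are made up.
monthly_tickets    = 100_000
cost_per_ticket    = 5.00   # fully loaded human-handled cost, USD
deflection_rate    = 0.60   # fraction fully resolved by the AI agent
ai_cost_per_ticket = 0.50   # model + platform cost per AI-handled ticket

human_cost_before = monthly_tickets * cost_per_ticket
ai_handled        = monthly_tickets * deflection_rate
cost_after = (ai_handled * ai_cost_per_ticket
              + (monthly_tickets - ai_handled) * cost_per_ticket)

print(f"Monthly savings: ${human_cost_before - cost_after:,.0f}")
# -> Monthly savings: $270,000
```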

Super interesting. Jesse, thank you so much for joining us today. Thanks a lot. Thanks for hosting. It's great seeing you. Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.