
Product-Led AI: Adept CEO David Luan on Upleveling Human Work

2024/5/15

Greymatter

Topics
David Luan: I was initially interested in robotics, but early AI technology simply didn't work yet, so I decided to go into research. At OpenAI, I saw AI shift from independent research to large-scale projects. At Google, I saw the first attempts at productizing AI models. Achieving general intelligence requires an organizational structure that integrates research, engineering, and product, so I decided to found Adept.

David Luan: Many people see AI as a race to replace humans and capture economic value, but I take a different view. AI should be seen not as a race but as a tool that can uplevel human capabilities. By building AI systems that work for people, we can create new cognitive technologies that enhance what humans can do. Humans excel at deciding what to do, why to do it, and whom to coordinate with, while AI can free us from tedious execution work.

David Luan: Adept is an AI agent that helps humans do anything they need to do on a computer. A true AI agent should take actions toward a goal, not just be a chatbot. Adept aims to give knowledge workers an AI teammate that helps them complete tasks and uplevels their work. The goal is for AI to become a teammate that can collaborate, solve problems, and execute tasks.

David Luan: Adept vertically integrates its foundation model and agent product, which sets it apart from other companies. Its base multimodal model is designed specifically for agentic use cases, with much stronger understanding of user interfaces. The model specializes in knowledge work, making it more valuable to enterprise users. For enterprise agents, what matters is following standard operating procedures and earning trust. Adept's stack includes an instruction-following layer and a user interface so humans can easily oversee the agent's behavior; the UX for AI agents should not be a chatbot, but something that lets humans easily supervise what the agent does.

David Luan: Model training is rapidly commoditizing; in the long run, companies that can differentiate how they generate data will have an enormous advantage. By end-to-end optimizing the foundation model with the agent use case, Adept is collecting agent data that makes the whole system smarter. Learning from the smartest knowledge workers yields more valuable data. For agent tasks, reliability is paramount.

David Luan: The concept of AI will gradually fade into the background; all companies will become AI companies, competing on customers and go-to-market. Most agent tasks are highly customized, and Adept wants to handle that customized work. The steady state of agents will reshape how computers are used, so design is critical. The future of agents is humans coordinating and interacting with one agent or a set of agents on their machine. If agents use computers the way humans do, we can reuse existing infrastructure. The compute needed to train and run agents will be colossal. People are willing to pay high inference costs for agents that complete knowledge work, so we will see investment in powerful data centers.

Chapters
David Luan's journey from robotics and research to co-founding Adept is discussed. He explains the shift in AI from independent research to large-scale projects and the realization that a product-centric approach is crucial for achieving AGI. The lack of existing organizational structures that combined research and product development led to the creation of Adept as a startup.
  • David Luan's early interest in robotics and his shift to AI research.
  • His roles at OpenAI and Google Brain, focusing on LLMs.
  • The realization that a product-centric approach is essential for AGI and the subsequent founding of Adept.

Transcript


Hi, welcome to Greymatter, the podcast from Greylock. Today, we're featuring another guest episode of Product-Led AI, the new podcast series hosted by Greylock partner Seth Rosenberg, where he talks with leading AI builders who are exploring opportunities in the application layer of AI. This week, Seth talks with David Luan, the CEO and co-founder of Adept. The company is developing AI agents for the enterprise.

You can subscribe to Product-Led AI wherever you get your podcasts, and you can sign up for Seth's weekly LinkedIn newsletter to make sure you never miss an episode. You'll find links to all of this and more on the series website, productledaipod.com. It's also linked in the show notes. Now, here's Seth with Product-Led AI.

Hi, I'm Seth Rosenberg. I'm a partner at Greylock and the host of Product-Led AI, a series exploring the opportunities at the application layer of AI. My guest today is David Luan, who is the CEO and co-founder of Adept. The company is developing multimodal agents designed to work alongside humans in any profession. David has been an early builder, researcher, and pioneer in AI.

He was among the first few dozen employees at OpenAI, where he led all of OpenAI's engineering. He then served as co-lead of Google Brain, working on frontier large language models. He co-founded Adept in 2022, and the company has stood out for its human-centric approach to AI, and eventually AGI.

David, thanks for joining today. Very excited to dive in to the nuances of building agents with Adept. Thanks. So maybe to kick it off, obviously, you've been a pioneering builder in AI since the early 2010s. So when was the moment you knew that you wanted to actually break out and start your own company?

Yeah, I mean, I think the easiest way to answer this is to tell a very, very short history of the last, what, 20 years or so of AI. When I was getting started, I was initially drawn to robotics. I thought the idea that you could write programs that made these physical devices do smart things in the world was one of the coolest things possible.

But way back when, during that period, the mid-2000s, nothing really worked, right? We were misidentifying horses as dogs, and the Tay chatbot came out and started insulting everybody on Twitter after 24 hours. It was all so, so early. And so I decided the right thing to do back then was to get into research. And so, after multiple twists and turns, because of previous work I had done leading research-oriented teams,

the founding team at OpenAI brought me in to lead research and engineering there as the VP of Engineering. So I did that for three years, and I think I saw this transition from the previous era of AI, of just you and a couple of pals trying research ideas independently and writing a paper, to this next world of giant-scale projects, which we did successfully with the GPTs, and back then the robot-hand project and Dota and all this other stuff.

And after that, you know, I ended up going to Google to lead Google's giant LLM training effort. And I think what was really interesting about that was sort of seeing the first innings of like true productization on some of these models. And it just became really clear to me that like,

first off, the recipe for building general intelligence is increasingly clear. And actually, a critical part of that is having a product. You need a product because you need something that users interact with to teach the models to be smarter.

And as a result, the dominant shape of an organization that can win at that doesn't look like what anybody had built before. It doesn't look like the standard thing of researchers sitting in a corner of a building and pitching their ideas to engineers and product teams trying to get those things landed. It also doesn't look like

two separate orgs with two separate roadmaps. What you really need to do is you need to go start with something. You need to start with a product shape that gets you all the way to general intelligence. And you need to distill that down into what research problems, what engineering problems, and what product problems do you need to solve to build that product shape? Like a full end-to-end combination of research and product. And that sort of org structure didn't exist. And so on top of the actual technical bet we wanted to make, organizationally it was obvious we had to go do it as a startup.

Yeah, I totally agree. The last 10 years were about the foundational research and infrastructure, and the next 10 years are about putting this into products that actually work. 100%. So...

Obviously, one of the maybe naive narratives around AI, especially the type of AI that you're building, an agent that can act like a human in front of a computer, is this narrative around AI replacing jobs. I think you have a different take, both on your mission for Adept as well as on AI's impact on the world. So maybe spend a moment on that. Yeah. So I think what's really interesting to me about this question is that

I think it's fundamentally a mission-and-values question, and then there's also a little bit of pragmatism in there. But if you go look at how everyone has framed AI progress so far, right? Like, go read the mission statements of the classic labs. It's always about replacement, about doing things that

beat humans at X, doing a wide range of things better than humans, and then creating lots of economic value and the need to figure out how that's going to be redistributed. And there's also framings of AI as being a race, right? A race of how you can get to general intelligence as fast as possible, be better than humans, and then how many companies will potentially have a seat like that and be able to out-compete each other and prevent others from... all this crazy stuff. It's just one formulation.

And I think that you can just reject this framing entirely. And you can reject it not only out of a sense of mission, like, hey, what if we don't want to be building systems that replace people, but also out of

what I think is potentially a misunderstanding of how technology even diffuses within society. On the latter: when we go look at things like having a calculator, or writing, or using a computer, those are technologies we humans have built over the last couple of millennia, and what it turns out has really happened is that they've actually improved the cognitive skills of people.

They've become cognitive technologies for people. There's a really interesting study (which I should probably read again before I quote it as definitive) that I think showed that for cultures that didn't have reading, humans' ability

to extrapolate to new events and imagine things, as measured by tests, was a lot lower than that of cultures that had reading and writing. And so I think similarly, by having these AI systems that get smarter and smarter and that work for people instead of replacing them, we are basically building a new set of cognitive technologies for people that actually end up upleveling humans. And that's a world I'm much more excited about living in.

Yeah, I totally agree. I think people also underestimate the demand side of the equation and focus on the supply side, right? You can automate certain tasks, but demand doesn't stay constant, right? Mm-hmm.

It's just changing the profile of work, right? Like, I feel like the real human things that at least I'm excited to do, and that most people I know are really excited to do: they're excited to figure out, what should we be doing? Why should we be doing it? Who do I coordinate with? And how do I deeply understand the people I'm working with, the people I'm selling to?

Those are really human. And I think the dream is: how do you get people to focus on that, and not on the tedium of, I have to go spend eight hours shuffling things around in my database, or I have to go physically, manually create a part? Those things, the execution, are what we really want to be able to delegate. So tell us, what is Adept?

Adept is an AI agent that helps humans do anything they need to do on a computer. And so like, let's just break down each part of that. Like what is an AI agent? The agent term has kind of become really diluted these days because it's become the new hot thing to slap on a company.

But the true definition of an agent actually comes from, for folks who have been following, the reinforcement-learning side of things. An agent is an intelligent system that figures out the correct

N actions to take to help you achieve a goal. So if my goal is, you know, I want to move these N leads from one stage to another in Salesforce, the agent figures out: okay, here are the N actions I need to take to go do that in the most efficient way possible. What an agent is not is a chatbot you just talk to that just talks back at you, right? It doesn't do things; it's not taking actions. And similarly, an agent is not a single-step

API call, because that's kind of the degenerate case: it doesn't run a workflow. Agents run workflows for people. So what Adept is doing is making it possible for

every knowledge worker to have this AI teammate that they can quickly show how to do tasks on their computer and then ask the agent to go do them from there on out. It's part of that upleveling-of-human-work mission that we've always had. And today, our models do a lot of things that involve,

for example, shuttling data from system A to system B or helping people fill out forms or do employee onboarding or do logistics and supply chain and all these very operational tasks. But every...

incremental amount of work we put into model intelligence gets our agent to help with higher- and higher-level things, ultimately to the point where this truly becomes a teammate, a collaborator that you can talk to and interact with and solve problems with and brainstorm with, and also have it help you with the execution. 2024, the year of agents. I think that's a good definition of what an agent is and what it is not.
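David's RL-flavored definition (an agent maps a goal to the N actions that achieve it, rather than just replying with text) can be caricatured in a few lines of code, using his Salesforce-leads example. This is a toy sketch with invented names, not anything resembling Adept's actual system:

```python
# Toy sketch: an "agent" plans the N actions that achieve a goal, then
# executes them, instead of merely describing them like a chatbot would.
from dataclasses import dataclass, field

@dataclass
class CRMState:
    # Stand-in for N leads sitting in Salesforce stages.
    leads: dict = field(default_factory=lambda: {
        "lead_1": "prospect", "lead_2": "prospect"})

def plan_actions(goal, state):
    """Figure out the N actions needed to reach the goal."""
    target = goal["target_stage"]
    return [("move_lead", lead, target)
            for lead, stage in state.leads.items() if stage != target]

def execute(state, actions):
    # The part a chatbot never does: actually performing the actions.
    for op, lead, stage in actions:
        if op == "move_lead":
            state.leads[lead] = stage
    return state

state = CRMState()
actions = plan_actions({"target_stage": "qualified"}, state)
execute(state, actions)  # both leads end up in the "qualified" stage
```

The single-step API call David contrasts this with would be one `move_lead` in isolation; the agent is the planning-plus-execution loop around it.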

So maybe break down for us the architecture of an agent in your mind, from the foundational model to the orchestration layer, to the integration into enterprise data, to the UI and workflow. First of all, what's your perspective on the correct architecture of an agent? And where do you think IP can be built?

Yeah, so I want to answer that for Adept, and then we can answer it for the general case. For Adept, we look pretty different from a lot of companies in the space because we both control our foundation model for agents and we control the agent product that enterprises use. Our company is a bet on the vertical integration of those two things. So what our agent stack looks like is: we have a

base multimodal model that is specifically oriented around being really good for agentic use cases. So it has capabilities that others who are just using GPT or Claude or anything like that can't get. An example of that is that we're extremely good at fine-grained understanding of user interfaces.

Our ability, for example, to figure out what you interact with to get a task done is in the 90s, percentage-wise, whereas when we benchmarked GPT and Gemini, they're somewhere between 2% and 15% accurate. So it's a giant gap. At the same time, on basic understanding of knowledge-work data, we specialize our models toward knowledge work, because that's what people use in the enterprise. We care a lot less about cat and dog photos and all the other stuff people put in these models these days. And on knowledge-work tasks, even though our models are pretty small and therefore very fast, they actually have higher accuracy than GPT-4V and Claude 3 Opus and Gemini 1.5 Pro. So that's what we start with: this base model that's really smart at being an agent and also very fast.

I think there's also another thing that people in the space may not have fully realized. The dream for agents isn't a giant text box in the sky that you're like, hey, I want you to go do this business task for me, figure it out. What we learned is that actually the most important thing for building useful agents in the enterprise is the ability to be handed kind of a standard set of operating procedures and be trusted to go do that.

So we care a lot about the agent's ability to follow any constraints in its instructions going forward. And so what our stack looks like is that next instruction-following layer,

followed by a user interface that makes it possible for humans to easily have oversight of the agent's behavior. And the UX for these things, as we were chatting about beforehand, is not going to be that of a chatbot. This idea that humans want to specify everything down to a T in just words, we've actually found to be quite a big limiter on productivity. Yeah, that makes sense.
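The stack described here (a base model proposing actions, an instruction-following layer that enforces a customer's standard operating procedures, and a human-oversight surface) can be sketched as a filter over proposed actions. Every name and the SOP format below are hypothetical illustrations, not Adept's API:

```python
# Hypothetical sketch: SOP constraints gate the agent's proposed actions,
# and anything outside them is escalated to a human instead of executed.

def propose_actions(task):
    # Stand-in for the base agent model's plan for the task.
    return [("update_record", "lead_1"), ("delete_record", "lead_2")]

def within_sop(action, sop):
    # Instruction-following layer: only SOP-allowed operations pass.
    return action[0] in sop["allowed_ops"]

def run_with_oversight(task, sop, approve):
    executed, escalated = [], []
    for action in propose_actions(task):
        if within_sop(action, sop) or approve(action):  # human-in-the-loop
            executed.append(action)
        else:
            escalated.append(action)
    return executed, escalated

sop = {"allowed_ops": {"update_record"}}
done, held = run_with_oversight("clean up leads", sop, approve=lambda a: False)
# the delete is held for human review rather than silently executed
```

The design choice mirrored here is that oversight is structural (a queue of escalated actions a person can review) rather than conversational.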

So this is everyone's favorite debate in AI: who's going to win? Is it going to be one large model to rule them all? Is it going to be open-source fine-tuned models? Is it going to be large models trained on specific use cases, like the agentic use case? Obviously, you have a very opinionated point of view for Adept, but what's your take on how the space evolves in terms of

you know, the number of large models available, what type of use cases each one is focused on, and in what cases it makes more sense for product builders to just use GPT-4 out of the box and focus on other areas versus fine-tuning. So I think there's a tremendous amount of fog of war. I have a fairly strong opinion, but I think new information could change a lot of this.

My view right now is that there's a crop of companies that are just in the business of training models. So, like, OpenAI (well, OpenAI is not quite right because they also have their own products, but just the GPT-training part of it), Anthropic is entirely about training models, Cohere, Mistral, et cetera. I think that

that space is really quickly commoditizing, because it's the same corpuses of training data. The architectures are mostly similar. There's no real long-term defensibility in ideas in that space because they diffuse within an order of months. And as a result, it really just becomes a cost-of-capital game. And I think that there will be N organizations that will be able to afford that.

Open source is getting really good really fast. Meta is obviously investing a lot in the space, really smartly for them. But that also just sets the floor, right? Like, if you've got a model that isn't as good as the next Llama, you don't really exist as a business; you don't get to charge more than the cost of compute. And so I think companies in that space will have to look for alternate ways of making money besides just having better models.

But I think that long-term, what's going to happen is that people who figure out how to hook up

very differentiated approaches of generating data for the most valuable tasks to the base models that they also control will have tremendous leverage. And that's why I remain bullish on OpenAI. The fact that they have ChatGPT, and that they have a lot of developers actually using ChatGPT for code use cases, all of these things gives them a flywheel where they could overtake someone who's literally just downloading more stuff off the internet.

Right. Whereas an API business doesn't give you an opportunity to have a data flywheel. Yeah, I think that's really smart. So tell us a little bit about how you're designing the front end of Adept, the agent, in order to maximize valuable data collection. Yeah, I think it's a really good question, because that's exactly it:

our bet is that by end-to-end optimizing our foundation model with the agent use case, we're collecting agent data that makes the whole thing smarter. The way I think about it, there's two parts to it. One of them is: why does this stuff make the model smarter? When we talk to ChatGPT or whatever, there's been so much RLHF data behind it.

RLHF is when you take human feedback on how well a model did something and use it to improve the model's behavior through a reinforcement-learning loop, or something that looks like one. These models out of the box are trained to just clone human behavior, so they don't understand reward. You give them a reward signal by collecting lots of data about what good and bad looks like, and you teach them to follow the good.

So that's RLHF. With things like ChatGPT, a lot of it has been done in the summarization, chat, and all those other spaces.

Our goal with Adept is to do that same loop, but with agents. And that's so important because, out of the box, these foundation models are really unreliable for agent tasks. And reliability is almost the only thing that people care about, right? You go ask somebody to book a flight for you in the background (even for a consumer use case, which we don't cover), you expect it to do a good job and not to have already swiped your credit card for something entirely different on the wrong date, right?
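The loop David describes (collect human judgments of agent behavior, turn them into a reward signal, and steer the model toward the good) can be caricatured with a tiny stand-in. Real RLHF fits a learned reward model and optimizes the policy with something like PPO; here, feature counting plays the role of the reward model purely for illustration:

```python
# Toy RLHF-for-agents loop: human labels -> reward scores -> pick the
# trajectory the "reward model" prefers. Illustrative only.

def fit_reward_model(feedback):
    """feedback: list of (trajectory_features, label), label in {+1, -1}."""
    scores = {}
    for features, label in feedback:
        for f in features:
            scores[f] = scores.get(f, 0) + label
    return scores

def score(reward_model, features):
    return sum(reward_model.get(f, 0) for f in features)

def pick_best(reward_model, candidates):
    # Stand-in for steering the policy toward high-reward behavior.
    return max(candidates, key=lambda t: score(reward_model, t))

feedback = [
    (["followed_sop", "task_done"], +1),   # human marked this run good
    (["deleted_records"], -1),             # human marked this run bad
]
rm = fit_reward_model(feedback)
best = pick_best(rm, [["deleted_records", "task_done"],
                      ["followed_sop", "task_done"]])
```

The point of the sketch is the data dependency: the quality of `feedback` (here, labels from knowledge workers doing real tasks) bounds how well the reward signal separates good agent behavior from bad.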

And what are some interesting techniques of how to make RLHF actually productive? Because I've also heard stories of, you know, after a model's out in the wild, you know, and they get human feedback,

parts of it can actually get worse, because, for example, humans are bad at probability or bad at math. Yeah. So basically I think the answer (and this is actually a nice thing about our enterprise strategy) is that you want to be learning from the smartest knowledge workers in the world.

Collecting more RLHF data about how to chit-chat about your day doesn't make your model smarter. But in a work setting, we're all paid every day to try to use intelligence as a way to get an advantage in business, right? And so the data that comes from people doing that is just inherently more valuable, and less prone to some of the same problems you were talking about, than arbitrary chit-chat data. That makes sense. And yeah,

What's your take on how vertical-specific agents will become, right? Is Adept going to be my accountant, my lawyer, my analyst, my researcher, et cetera? Or where do you think lines will be drawn? I think my boring take on this is

that within a couple of years, the whole AI concept is going to fade into the background, and all companies will be AI companies, so it just won't even matter. It's just going to be a lot of amazing enterprise SaaS businesses built on AI-agent technology, each of which has found a niche and is

beating each other on things like go-to-market and understanding the customer, and less on whether my agent is 5% smarter. I think there'll be a lot of companies that look like that, covering head use cases, like invoice processing, right? Or even customer support. And so there will be many great companies built that way. But I think the reason why we're really bullish on the path we're taking toward generality with Adept is

that if you just look under the hood a little bit, even some of the most obvious enterprise use cases, the workflows people want done, sound like they should be really common, but are extremely customized to that business and their customers. And so if we were to apportion the overall pie of agent tasks to be done, maybe 10% of it

or less, right, is cookie-cutter, and 90% is: I need to teach my specific workflow to the agent. And we want to eat that 90%. Yeah, I feel like you were several years ahead of where the market is, right? Everyone's been enamored, to your point, with chatbots and image generation, video generation. But it feels like the real value is going to come from actually executing work. And that's actually a different problem. Yeah. And our job is to execute custom work, basically.

And so on that point, again, you know, being an independent thinker here: I would say a lot of people, including OpenAI, have had success by just launching products into the wild. There's this magic effect if you're, you know, first to create an agent that really works, right? And Adept is obviously one of the very few companies in that category. Just releasing it into the wild can create a lot of Twitter buzz. You've taken a different approach on go-to-market. You've decided to go enterprise-first. Talk us through your thought process.

Yeah, so we're extremely quiet as a company. I think we've been really focused on building something that works well, that is reliable enough to be deployed, and then getting that in the hands of customers, rather than sort of a bottom-up strategy combined with a lot of marketing. And some days I actually wonder whether we did the right thing. But what took us down the path of focusing on enterprise actually comes from a lesson that we learned last year, which is that for agents, the only thing people care about is reliability.

When you talk to a chatbot and the chatbot says something dumb one out of three times, you don't care, right? Because you either enjoy the interaction or it helps stoke some thinking or something like that. But if you're trusting this thing to go handle shuffling data around in Salesforce, and it deletes a third of your records, you're never going to use this thing again. Or at minimum, it's not useful to you, right? You could have just done the work yourself. And so we realized that

the only thing customers cared about was reliability. And because of that, we decided to focus on enterprise, because there's a lot of value there, and in those settings we can control for very high reliability and get stuff out in the market now. We could put out more toys; we put out a toy called Experiments just for kicks in the last year. But I feel like that's not the path to glory in this space.

And in this stack of building Adept, you're solving a lot of hard problems, right? The foundational model that you're building yourself, the orchestration, agentic use cases, the UI and workflow. How big of a component is the actual enterprise integrations as part of that stack in terms of how your engineering team is spending your time and how difficult that problem is?

It's super difficult. I think the other thing that most people won't say directly about AI is that very little of it is glamorous. A lot of it, even on the model side, is data, low-level systems issues, all that stuff. But on the customer side, too, you've just got to do whatever it takes to get a deployment

that is reliable enough for those folks to depend on at work. One of our use cases literally involves a physical truck being sent to a shipping-container port. And if we screw up along the way, there's a truck being sent with no container on the other end. So that's really bad. And so what we end up doing right now is we work with the customer to get the reliability, but a large part of our research roadmap is: how do we make it more and more reliable out of the box, 95% from the get-go?

As you're building this business, how do you approach the potential need, or pull, to do custom integrations or service-oriented work to make it work for a certain company, versus the long-term vision of a generalizable agent?

I think this starts from a belief that we have that actually is informed by the machine learning side, which is that the best thing to do is to figure out how you can delegate generalization to the neural network.

So, how can we try to make as many customer use cases as possible into ones that inform the ability of the base model, so that serving customer N+1 looks like an interpolation of two existing customers you already have? And so, because of that,

our philosophy is that everything should be general, unless you have to ship this thing tomorrow, and then we'll go build a specific thing there. But we'll add that customer's work, in a dummy-data fashion, to our evaluation set, and then be like: all right, research team, how do we make sure that out of the box the model can do well for that customer? And then, over time, rip out the custom bits that we built. Yeah, super interesting. So maybe let's talk about the future for a second.
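The ship-custom-then-generalize workflow described here might look roughly like this in miniature: the customer's task (in dummy-data form) joins an eval suite, and the custom handler is retired once the general model passes. All names and tasks below are invented:

```python
# Toy eval-driven de-customization loop; names are hypothetical.
eval_suite = []

def add_customer_eval(name, task, expected):
    # The customer's workflow, in dummy-data form, joins the eval set.
    eval_suite.append({"name": name, "task": task, "expected": expected})

def general_model(task):
    # Stand-in for the base model; it only handles tasks it has
    # generalized to (simulated here by a lookup).
    known = {"reconcile invoices": "reconciled"}
    return known.get(task)

def keeps_custom_handler(entry):
    # Rip out the custom bits once the general model passes the eval.
    return general_model(entry["task"]) != entry["expected"]

add_customer_eval("acme", "reconcile invoices", "reconciled")
add_customer_eval("globex", "route shipping containers", "routed")
still_custom = [e["name"] for e in eval_suite if keeps_custom_handler(e)]
# only the not-yet-generalized workflow keeps its custom code
```

The eval suite is what makes "rip out the custom bits" safe: a customer's handler is only removed once the general model demonstrably covers that customer's task.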

Humans are going to focus on their customers, focus on deciding what needs to be built, not actually building the thing. So in this world, what does software look like? There's systems of records, there's agents, there's legacy apps, maybe there's new types of apps. Walk us through what this like future agent world looks like.

Yeah, so that's a really astute question. And it's part of why for us at Adept, like we have really emphasized the importance of design from day zero. We've hired a lot of creative technologists, shape people, in large part because we knew that the steady state of agents is going to look like a reinvention of how you use your computer and how you figure out what that looks like has a tremendous influence on what your product shape is and also your actual modeling problems. So it's all one giant

game of co-design. My view, basically, is that computers have always been about giving leverage to people. And it started out with very little leverage: you had a punch card, you were literally punching in programs, and those programs didn't do very much. And then you had the command line, which was an interface abstraction that gave you way more leverage on your time and ability to do things. And then we realized that we could

give people even more affordances through graphical user interfaces, right? We transitioned over to mostly doing that, with the exception of certain specialized tasks where we drop back into the command line. What that's really done is that for every unit of energy you spend as a human, you can do way more with your computer than you ever could before.

I think what's really powerful about agents is that they're the obvious next step beyond that, right? Once agents exist that can control your machine, you only need to drop into the GUI to go do things that for some reason the agent can't do for you, or that are something you want to supervise. And so I think, to some extent,

my analogy is, you know, in that Windows 3.1 era, your computer boots up into DOS, and you type "win", you hit enter, and then you get into the GUI. I think we're in the very early innings of that transition for agents right now. And I think that ultimately what it will likely look like is that you become a coordinator and interfacer with an agent, or a set of agents, on your machine

that you can basically work with. And it's almost a generative-UI thing: the agent should be able to generate the right affordances for you to best collaborate with it on any particular task, and everything else will be abstracted away. Yeah, I'm very excited for that future, and thank you for building it. One final question on that. So in this world where we have, you know, several billion agents, maybe several agents per person,

what are the missing infrastructure pieces for agents to operate in that world? I'm thinking: how do agents pay each other? How do they pass a CAPTCHA test? How do they handle data privacy, what to share with others? What needs to be built to allow agents to operate in this world? That's a really good question. I think the most interesting thing about agents is that,

especially if you take the Adept formulation, which is that it uses your computer like a human (or can do that in addition to APIs), you kind of get access to the same rails underneath that we already have as people. So payments can be handled via regular old payment channels, right? We do have to solve credentialing: how do you safely share creds with your agent and have it be able to run either locally or in a VM or something like that? That's going to be tough. But you get to use everything else just like how things are today.

I think what's going to be really challenging is that the sheer amount of compute that is going to go into training and serving these agents is going to be colossal. And I think a big shift is going to happen when you are literally getting hours of knowledge work

that you would have had to do yourself done by your agent: your willingness to pay for ridiculous inference is enormous. And so I think it all again boils down to hardware. We're going to see people invest in ridiculous data centers, not just for running small edge models, but for actually running the smartest models possible on the planet. And I don't think we're anywhere close to saturating that. So I'm going to buy some more NVIDIA stock.

Yeah, I would probably do that. Okay. David, thank you so much for taking the time. Obviously, we're lucky and privileged to be investors in Adept. And thank you for continuing to push this industry forward. Thanks, Seth. This was awesome. Thanks for listening to Product-Led AI. You can find more information about today's interview and the entire series on the website productledaipod.com.

You can subscribe to the show on all major podcast platforms and watch the video version of this interview on YouTube. And if you want to get all the links and details delivered right to you, sign up for my LinkedIn newsletter. I'm Seth Rosenberg, and this is Product-Led AI.