It's not that I'm not using AI products. I'm just not using a lot of AI features in the products that I've adapted to or adopted over the past decade. This whole fixed ontology in a native age of unstructured AI, it just seems a bit silly. I don't know how to describe the tone that Claude had when it told me not to use chicken broth. It did it with a little personality and it built rapport.
But a word that I've been using a lot over the last year or two, which is just: what is the altitude of your editing? What is the altitude of your creation?
I think we are still sorting through the core experience, the delightful experience, that is the first step for this technology. So now they're asking the question, what happens when you come back the 50th time? Yep. And how do I expand your capabilities and help you become an expert in that thing?
Hey, everybody. Welcome to Hallway Chat. I'm Nabeel. I'm Fraser. Welcome back. Good to see you. Welcome. Yeah, nice to see you. We're on video, but it was nice seeing you in person at that offsite this week. That was good stuff. It was a lot of fun. So, Robin, who's the CEO and co-founder of a Spark investment, Credo, pointed out something to me that I've been chewing on throughout this week, and I want to share it with you and see what you come up with.
Sure, man. He is doing sales into both scaling startups as well as enterprise companies, like the Fortune 500, and has a whole bunch of qualification calls that he's doing founder-led sales for. And on all of these calls, he asks: what product that you purchased before 2023 do you use AI features in? Mm-hmm.
And he told me that the only answer he has ever received to that is GitHub Copilot. There's no other. And presumably he's hundreds of calls in. Yeah. Yeah, yeah. And he made it clear that it's not that there's a sporadic, low volume of other answers. It's nothing. It's nothing. Yeah. Well, what is it for you, Fraser? Of all the incumbent companies you could use, and we're using lots of these, you know, Slack, email, Google Docs, Notion, Canva, Zoom, and I'm just naming them off the top of my head, Jira, Adobe Creative Cloud, of all of those things, are you using any of the AI features in those incumbents regularly? None, right? Like that's the surprising thing, is that at first I pushed back on him, and then I thought about it, and he said, okay, which one for you?
And all of the obvious ones that if you had asked me a year and a half ago, what are you going to be using AI in? I would have told you, oh, one Slack has the ability to do channel summaries or synthesize all the content that you missed. I would have said that I don't look at those features at all.
The Gmail autocomplete stuff, or write-your-emails. I tried it once and it's not something that I've ever returned to. I don't use it in Word. I certainly don't use it in Zoom or Google Meet. Or Google Sheets. Google Sheets. No, none of that. How about you? Yeah. I mean, the ones that feel the most mature, or at least natural in implementation, are Google Meet and Zoom meeting transcription, and Adobe Creative Cloud, you know, generative fill, remove, get this thing out of the background, using a diffusion model to help. Yeah. Yeah. I think Adobe definitely has a surface area which makes sense to integrate diffusion models into. Zoom makes sense to integrate transcription into. At the same time, like,
we made a counter-bet against Zoom with Granola. And I feel very happy and confident about that. Like, I don't think raw notes just transcribed, or even some kind of meeting-bot summary of the Zoom meeting, is the right product surface area that people want. And similarly with Adobe Creative Cloud, like I do want a little box when I'm in Photoshop to maybe add a cloud or whatever, or do something simple.
But that's entirely different from the net new interfaces that companies like Midjourney and others will do. And I think both of those can coexist. And if you ask me which one people will be using more in 10 years, I wouldn't pick Photoshop, right? No, no. What about on the consumer side, Fraser? That's the beauty of Robin's question, right? Yeah, like Spotify, Apple.
There's a cool little AI DJ thing with a voice that speaks to you. So they're using kind of deep-fake generative voices for a DJ, on top of old ML algorithms to figure out what song to play next, which have been around forever. So deep-fake generative voices, that works. That's something. Then there's YouTube transcription again, Snapchat, a bunch of these social things where they just shoved ChatGPT into whatever social platform they're in. That's not interesting. No, not at all. Like the Spotify thing is like, I just couldn't help but laugh. I've never even heard that they've done that. It sounds so ridiculous. Oh, it's quite good. The voice quality of the DJ that speaks to you every day is impeccable. And they actually modeled it after...
a person at Spotify and his voice. Sure, sure. It's fine, but is it, you know... this goes back to that framework we had a while ago, which is just, maybe we need to re-look at our priors. We're a couple of years in. And I remember everybody saying, you know, post-gen-AI, give everybody a year. They have to ship. It takes some time. But if we have these adaptation, evolution, revolution buckets, which we talked about at length in a previous podcast episode, yeah, I have been readdressing my priors. I figured the world would drop into 50/50, you know, or maybe a third, a third, a third, or something like that, but that incumbents would have lots of advantages that would mean that small adaptations would have them win. And I think that's exactly right. Yeah. It's more in the latter half here. Like I think we're going to get a lot more evolution and revolution. Maybe that's just us drinking our own
VC whiskey, wanting there to be startup advantages. But like you said, like Robin said, it bears out in the usage so far, which is: I'm not using AI features in many of them. The Slack integration is not great. That's a very natural situation where you would think AI summarization, catch me up on everything I missed from yesterday. Right.
Would be good. Of course. I did turn it on for our Slack. It was not good enough. It was not helpful. No, it feels like an afterthought tacked on to the product. And I think that that's the really fascinating thing about the structure of Robin's question. It's not like which AI features are you using? It's which products that you bought prior to 2023 have AI features that you're using. And the interesting thing to me is you look at the
inference workloads that all the cloud providers are seeing, and it says that there's a huge amount of activity happening here, not training, but inference. And if I think about my own use, it's not that I'm not using AI products. I'm just not using a lot of AI features in the products that I've adapted to or adopted over the past decade. Transcription is an obvious one.
But I'm not using transcription in Zoom or Google Meet, right? I'm using a lot of new products that have rethought the experience for this new capability. I have a counterexample. Yeah, okay. It's the one we're going to use in a couple of hours to edit this thing, which is Descript. Ah. But that's a little bit of a cheat answer, since Descript was built with AI in mind already. Right. And so it was already post-revolution. It just happened to be earlier in this cycle, pre-2023. Yeah.
You know, that's a surface area where they surface new AI innovations on a regular basis. And I find myself attracted and trying most of the new AI features that they ship because they're integrated naturally into a new workflow and they just work. I was playing around with some of their new generative avatar capabilities. And I think I just use that product in a way that I don't personally have...
10,000 training videos that I need to translate to Swahili. So for the HeyGens and Synthesias, I'm not the target market. But in terms of our workflow, there's one. Yeah, there you go. That is AI workflows. But in a way it kind of proves the point rather than not, because it was a new workflow and surface area that was invented with AI in mind. Right. It was created with the idea that this is all going to happen.
I find that to be fascinating because I think, as you said, a year and a half, two years ago, we were sitting here thinking, you know, maybe it's 50-50, or certainly the first winners are going to be things that can just extend naturally. And I probably would have listed Slack as the thing where we're like, hey, this is great at synthesis and summarization and nobody likes the Slack overload. And that just hasn't borne out. Like Robin's experience talking to all these people, my own personal experience,
So then the question becomes to me, like, is this a new capability that's just not good enough, and/or is not going to get to the point where we think it could be profound? Like, I still think it's going to be profound. Right.
Or is it that it's a capability that is so new and interesting that you can't just tack it on to an existing product surface and deliver value that way? I think Slack's a great example because I had this experience this week. I'm working on a side project
with a couple of friends. Which one? We can leave aside what the side project is. It's a weekend thing. But it caused us to start a new Slack. We actually started a new Discord instead of a Slack, but for these purposes it's the same thing. And I found myself deeply frustrated
with the experience. Because, you know, have you ever started a new Slack or a new Discord? You're like, okay, here are the five channels or the six things we think we're going to talk about. And then I find myself coming up with a new idea, and I'm like, okay, which of the channels does it go in? And then it turns out a week later that we're talking 90% more in one channel versus another. And so I'm like, should I shut down that channel? Should I take half these topics
and move them over? And it just occurred to me, this whole flow is completely broken. I'm sitting inside of Slack and I'm thinking, why is there a fixed structure? Why do these channels exist? Everything that used to be structured just might not need to be. Every time you look at a piece of software and you see some piece of structure to it, the question is, shouldn't that be malleable now? Like why does Slack have channels? It shouldn't.
You know, it was there to structure all the information flow and to keep it categorized. But is that really relevant in a world of AI? Shouldn't I just open up Slack, type in a message, click maybe one button, public or private, and then just hit send? And then shouldn't AI figure out where it goes automatically? Who sees it? Yeah. Yeah. Who sees it? Blah, blah. I think this needs to go to these people based on your past context and experience. And then, when it comes to channels, shouldn't it just be dynamically adjustable anyway? In other words, if we realize we need to add a frequently-asked-questions channel, I should just add that channel, and then it auto-populates, from all the other channels, all the questions that were frequently asked. It knows the context, right? Another example is Notion. Every time I open Notion, I feel anxiety
about page management. Where should I add something? Should that be a new page for this concept, or should I just add it to the end of an existing page? What if it's split between two pages, and the concept is a little bit of this and a little bit of that? This whole fixed ontology in a native age of unstructured AI, it just seems a bit silly. And I don't know if that's the true answer to the question of why AI features are not being used more in the products that you're already using. But I think it teases at something, which is just, these are core capabilities, and maybe there are just way more categories than we think where the answer is to start with a new workflow from scratch, with AI in mind. Mm-hmm.
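To make that unstructured-Slack riff a bit more concrete, here is a minimal sketch of what letting a model route a free-form message could look like. Everything in it, the channel list, the prompt wording, and the route_message helper, is a hypothetical illustration for the sake of the idea, not anything Slack or Discord actually ships.

```python
# Hypothetical sketch: let a model route a free-form message to a channel
# instead of making the sender pick one. Channel names, prompt, and the
# route_message helper are illustrative assumptions, not a real Slack API.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CHANNELS = ["general", "product-ideas", "bugs", "launch-planning"]

def route_message(text: str) -> dict:
    """Ask the model which existing channel fits, or whether to propose a new one."""
    prompt = (
        "You organize messages for a small team workspace.\n"
        f"Existing channels: {', '.join(CHANNELS)}\n"
        "Given the message below, reply with JSON: "
        '{"channel": "<existing or proposed name>", "new_channel": true or false}\n\n'
        f"Message: {text}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

# Example: the sender just types and hits send; the model decides where it lands.
print(route_message("The onboarding flow crashes when you paste an emoji into the name field."))
# -> likely {"channel": "bugs", "new_channel": false}
```

The point of the sketch is only that the "which channel?" decision, the fixed ontology being complained about above, is exactly the kind of judgment a model can now make on every send.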
I can get my head around that. It's so new, and the history of our products has been built up to actually solve for the fact that this doesn't exist. And so we have to put in scaffolding and structure from elsewhere, and then this new capability is somewhat...
adjacent to or orthogonal to these product surfaces. Sure. Yeah, I can get that. So let's try testing that hypothesis. You said that the one product that has integrated AI in a reasonable way today is Photoshop. Yeah. And...
Listen, I believe that Midjourney and others, like there will be entirely new editing experiences that will come up. But why do you think they've had success in the short term, whereas Slack hasn't? Like certainly I can't imagine anybody's using the Slack AI features. But reasonable people are using the diffusion models within Photoshop. There's clearly use of...
of the product in that workflow. And I guess you're saying, are there lessons there that might indicate categories where incumbent advantage will be okay? But here's my initial thought:
It's because that UI is already set up for discrete tools, where you're like, I want to now do this, and you click into it. Whereas Slack isn't. So you're just adding a tool. A new tool. And it's not a brush, not a draw-rectangle. It's a, you know, fill-in-the-background-with-clouds. So, but if the toolbar method works, then why haven't we seen better integrations in
Google Sheets or any of the Office stuff? Right. Those are all, you know, tool-ribbon-oriented UIs. Great question. I don't know.
Yeah. Good pushback. There's a world where some of these things do get integrated, obviously, like spelling and grammar check and deeper versions of that. Like write-my-paragraph-for-me, and shorten or lengthen, those kinds of simple affordances that are about the doc itself. I love the Photoshop example, but the thing that it makes me think of is not tools,
but a word that I've been using a lot over the last year or two, which is just what is the altitude of your editing? What is the altitude of your creation? And I think the thing about most tools, if you just think about what Slack is, there's very little channel management.
There's very little that operates outside of: I'm trying to interact at the absolute ant level of interaction in this product. I'm on a Notion page, I'm on a Slack page, and I'm just there to type. Right. And what do I type and how do I type is a lot of the work that was done
15 years ago to build up these products. And Photoshop, by its nature, like you do some work at the pixel level, but by its nature it's loose and at a higher altitude, right? I take a brush that says lighten and I go across the entire sky, and suddenly it makes some judgment decisions about a huge portion of the work.
There was a phrase a couple of years ago that someone said, which was just, where's my Photoshop for text? And I've always thought about that as a way of attacking the problem at a different level, right? I don't want something that maybe takes my paragraph and takes away four words from it. Although I use that, and that's fine. It's good. I still use Grammarly for that, actually, just because it's portable and it can be everywhere I am, versus using it in Notion versus somewhere else. It's fine. But what I really want is...
analyze my whole text and make it all a little bit more clever, or, what do I do? I do a lot of investigative work now, right? It's, what are the gaps in my thinking in this piece? Where did I not come to good conclusions, and test my conclusions? A lot of that kind of stuff, which is at a much higher altitude.
And maybe the similar answer to my missive about Slack, which may or may not be right, was again about operating at the channel level and the sending level. It was, again, at the infrastructure level, how could this be different? The conversation with Meter was similar. He's trying to think at a much more infrastructural level about how you would do this work, versus at the micro level, how do you fix one router? Let me toss another one at you then.
I know that when they launched, Granola launched around the idea that writing is thinking. And so they wanted to have an AI note-taker that you could also write in. Interestingly, I did that a lot at the start, and now I don't. I think the transcriptions have become a lot better, and it just runs in the background. And then it's there when I want to access it. I'm not purposely building notes and adding structure to them. But then they just launched this new feature where, when you're in a note...
for the current meeting, at the bottom it pulls up past conversations that it thinks are relevant, right? And I think it's just matching on meeting guests or something like that. But it's the scaffold of, you can imagine that it's just going to pull in increasingly relevant content for the conversation that you're having. Yes. And it does that in an automatic way. It does, but the control mechanism is with you. In other words, if you trust it and you leave it,
then it will make decent decisions, and you can direct it if you need to. And I really think that what AI is going to teach a whole generation is basically how to be a good manager. You know, the allocation economy is what Dan Shipper calls it. I don't know if I love that phrase, but that is what working with AI is, I think.
Look, less than, I don't know, 3% or 5% of the population has any experience being a manager whatsoever. And I look at the way my kids interact with ChatGPT, and they are learning how to instruct other things to do good work at a much, much younger age, and they are doing it every single day. In fact, here's what I wish AI would be better at.
My ask for 2025 of AI is: if we're supposed to be better managers in order to manage AI, then AI just needs to be better at managing up. Like, what are the basic principles of managing up, and then how do you integrate them into your AI product? It should be an offsite topic for every AI company. And when I think about the conversations I have with people who are learning how to manage up, I talk about things like,
You have to understand the context of what your manager is actually trying to achieve. You need to adapt to the manager's communication style, right? This is why we get these documents about how to work with me and stuff like that. And then you have to anticipate your manager's needs and know how to ask questions they didn't ask, right? Like you have to know the gaps, the areas where they're making assumptions, and then know to ask for clarification or depth.
And then you have to know the periodicity of check-ins, like how anxiety-ridden or not anxiety-ridden, how much safety is needed in this relationship at this point in time. Those are all things that nobody has even bothered to build into an understanding of how AI is supposed to interact with you. And some future version of Granola, I hope...
understands the types of meetings that I'm in. It understands when it needs to take really deep notes and when it doesn't. And when it's not sure, it asks for clarification. Right. It knows how to pull the best out of me, because I might not know how to instruct it properly. Yep. Yep. It's not assuming I'm a great manager. It is trying to pull the right things out of me. And I think I see almost none of that in the market right now.
I love it. I completely get what you're saying. I've never heard this before. I will tell you, though, I don't think anybody should be caring about that today. What do you mean? I had this lovely Sunday last weekend where my only takeaway was...
I have thought that this thing called AI was going to be pretty big, and I thought it was going to be pretty big for a long time, and I still think we're- You made a pretty good life bet on that, yeah. Repeatedly. You and I have to believe that it's going to be big, and I came away from that Sunday thinking that I'm still underestimating how big it's going to be. Was there something that happened? Yeah, yeah, yeah, yeah. I switched between Claude and ChatGPT because it's just a confusing situation for me in my life right now. And so I'll say Claude, but it could just as easily have been ChatGPT.
I'm making a roast. It helps me make a roast. I've never made a roast before. I'm intimidated. And I will tell you, I felt like a level of rapport and camaraderie with this thing as I was using it to make a roast. It sounds so strange for me to even say that. And then I'm tending to an olive tree that we have indoors that's causing some issue or has some issues. And I just switched with it over to that task.
And then that evening I sat down and I was writing a memo and it was, it was again, like a companion right there helping me write the memo. And all three of those experiences couldn't have existed two and a half years ago, two years ago. And it was amazing.
And I still don't think we're anticipating how much this is going to change our day-to-day life, as we figure out how to deliver these products into it, and as we, as humans and users, start to embrace these products and understand when and how to use them as well. It has to go both ways, right? And so what I meant by "way too early" in response to your comment is, I think we are still...
figuring out the first step of how to deliver the product experience that's magical with these models. And I think that what you said is going to be amazing, but it feels like that's two or three steps from now. Yeah. Right. And like, let's not push things to the end user today, because we still haven't yet figured out the very first step of delivering the product experience that's most delightful. Yeah.
One thing that you've spoken to me about a little bit over the past couple of weeks is how to think about when to prioritize what type of work. Are you prioritizing growth? Are you prioritizing retention? Are you prioritizing the core experience? And I think we should get to that, because it's super interesting. But what I mean by this is I think we are still sorting through...
the core experience, the delightful experience, that is the first step for this technology. We are talking primitives. I don't agree with you. That's why we do this. Yeah. I think these things are interrelated in a way that you can't set aside.
So, my rant earlier about why don't we have an unstructured Slack-product kind of thing as an experiment in social right now. One could argue that we just have more basic things to get done, which is a little bit of your argument, like, we're just super early. But part of the reason I came to the idea live, when we were riffing just now about managing up, is that...
Let me take a different angle at this. There's this old framework about what it takes to become an expert in something. And the framework is that you have to have a valid environment for operating. It has to be somewhat ordered, so you have to have a set of rules that are fixed so that I can test something. You have to have timely feedback, so there has to be some feedback loop against that set of rules so I can figure out whether what I'm doing is good or bad. And then you have to have deliberate practice against that. Valid environment,
ordered environment, timely feedback, and deliberate practice, that uncomfortable bit where you can feel the edge of something. And if you just think about how you learn a language, or how you learn tennis, or how you learn coding, they all fit into this framework. I think if we think about operating inside of ChatGPT or Claude, and you think about your roast example, the problem with the horizontal nature of products like these is that many of the things I just talked about don't exist in them. Like, it is a valid environment. And the feedback cycles are the biggest, most positive thing about ChatGPT versus, remember, GPT before ChatGPT: the timely feedback. I get very quick feedback, and I have it in a chat format, thanks to you and that team and so on and so forth.
But when do I do deliberate practice with this tool? Like I never do deliberate practice with this tool. And the ordered environment doesn't really exist for it. And so that's, I think, where my brain was kind of loosely going when I was saying in a world where we do not have a way to get better at this stuff, the way that we get, say, better at programming, then what you really need is...
a coach, you need coaching up. Like I was trying to think about other unordered environments where timely feedback is iffy. And I was just thinking about the organization. Like I need one-on-ones. I need a manager to come to me and tell me about how I'm doing and how I might change my behaviors and so on and so forth. And without that, I don't think we'll become truly expert in this field. And that happens at the level of Claude.
But maybe just as importantly, it also happens at the level of an individual product.
It happens at the level of becoming really good at using Descript or really good at using Granola over time. Ideally, the right version of a Granola or a Descript or any of my native AI tool frameworks is that it's got that easy to learn, hard to master, open to expression kind of feeling where it's really quick to joy and magic, but
two years later, I'm still finding and navigating through the possible ways that I can use this product. What would be an example? Like a suggested follow-up? So Perplexity has some of those, here are some follow-up questions you can ask. Like if I'm cooking a roast and kicking it with Claude, it would come back and be like, would you like to figure out, you know, the sequencing to make sure that this is set on the table by X? Like, is that a simple step in that direction?
I'm thinking one level above that. I kind of imagine, why aren't we doing postmortems? You have a month-long project in Claude where you're working on something forever, or you chat with it 25 times to get to the right answer. And then when you're all done, you have a postmortem session, where Claude is saying, by the way, you know, if you had phrased this a little differently at the beginning, I would have gotten there like 10 times faster. Yeah.
I'm trying to make you better at using Claude. I'm trying to make your efficiency, and your understanding of how to tell the model what to do, better. And I don't think that happens without some measure of reflection after a session.
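As a thought experiment, a post-session "postmortem" like the one described here could be as simple as handing the finished transcript back to the model and asking it to coach you on your prompting. The sketch below assumes the Anthropic Python SDK; the coaching prompt, the transcript variable, and the session_postmortem helper are all hypothetical, and nothing like this is a built-in Claude feature.

```python
# Hypothetical sketch of a post-session postmortem: hand the model the
# finished conversation and ask it to coach you on how you could have
# prompted better. The prompt and helper are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

def session_postmortem(transcript: str) -> str:
    """Ask the model to review a finished chat and suggest better prompting."""
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=800,
        messages=[{
            "role": "user",
            "content": (
                "Here is a full conversation I just finished with an AI assistant:\n\n"
                f"{transcript}\n\n"
                "Act as a coach. Where did my phrasing slow us down? "
                "What should I have asked earlier? Give three concrete suggestions "
                "so I get to a good answer faster next time."
            ),
        }],
    )
    return message.content[0].text

# Example: run it on any long back-and-forth you saved, e.g. a 25-turn thread.
# print(session_postmortem(open("roast_session.txt").read()))
```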
Here's a good example that's a little different than this, but a thing that I've loved doing, which is now that we have a context window that's quite large and we have some memory, once you've gone through a really long session with ChatGPT, if you just go find a session you've had going for a really long time, you should go in and ask questions like, what questions have I not asked that I should have? And it's so good at that. It's insanely good at that.
in a way that is closer to the feeling of clairvoyance and intelligence than most of the other things we talk about. And by doing that, if we get back to this kind of like, how do I become an expert in something, that's timely feedback. And then ideally, I go back the next time I'm doing it and I internalize those notes or those thought processes. I hear you. I agree with you. And I still am struggling with it because one of my world beliefs is like, in this industry, we all forget
how inconsequential it is to people. All the things we do are really not that important. Well, like, they don't want to do a postmortem. Their life is busy. They don't want to go back to Claude and give it feedback. Okay, Fraser, you're right. You're right. And let me try and contextualize this a little bit, because I don't think we disagree here, actually. Okay. I think what you're talking about is, where are you in the product life cycle? Yeah. I had a board meeting this last week
where we were talking about a product and it's doing well. They had a list of all the features and AI features they wanted to do for the future. And we're kind of having that kind of very base conversation about strategy and prioritization and so on and so forth. And, yeah,
I ended up just trying to contextualize it for them at a very high level. Just like, okay, what are the problems we're trying to solve, guys? And I'm reminded of a framework that I kind of disagree with, but that was useful in the moment, which was Dave McClure's old AARRR framework. We've now been doing this long enough that nobody in the room had ever heard of this framework, so I seemed like a smart person for a moment. You get a look of recognition there. Do you want to explain what AARRR is for a second? So Dave McClure, way back when...
there was optimism and positivity online, shared a framework around how to prioritize, or how to think about product. I think we were both founders at the time, right? Yeah, yeah. I think the phrasing of the post was something like how to grow products like a pirate. Of course, yep. Acquisition, activation, retention, referral, and revenue. Arrgh.
And I think his point was basically, it's a funnel view of the world. So you acquire a customer, they arrive on the website, they then get activated, they sign up or whatever, they get retained, they then refer other people, and they have revenue. Like any cute framework, I have issues with it: referral should be virality, because that's what you really mean. But whatever, he needed to make it spell AARRR.
And it was a way of just organizing the world of features and ideas. Like, hey, do we have an activation problem in this product? Do we have a revenue problem in this product? Why are we talking about revenue features if we really have an activation problem? That kind of way of just organizing the world. I actually think linear funnels are not the right way to think about these things at all, especially in the world of AI. You're going to need all of these things. But I think the core I would use is something like you need...
that magic moment in the product. You need the product to be great.
You need retention, which is related but sometimes a very different thing. You need expansion, growth, and you need monetization. So I'm not trying to invent the PREM framework or anything like that, but that's what that is: product, retention, expansion, monetization. PREM. Yeah, I've got my self-help book on PREMing coming out very soon. But I was surprised how elucidating it was for the conversation. And the point you were making just a second ago, I think, is that
We can't be having conversations about retention and expansion
you know, me putting in the work to try and get better at this product when the product itself is still not magical enough. Like people are impatient. They want the job to be done, to be done quickly. You're still trying to solve a product problem. You are not trying to solve a retention problem yet or a becoming an expert at the tool problem yet. And so like, you're just solving the wrong problem. And that's, I take that point. You know, I do think the market generally is still in the,
"there aren't enough magical products" phase. Look, we started this podcast with the idea that every week or two, we'd sit down and talk through a new magical AI product experience. Right, it's hard. And whether we invested or not, just, let's talk the craft.
And like, we don't have something to talk about every week. And we talk about them when we come across things and we talk about that structure. But I think you're right. I think more people need to be focused on product and those magical experiences and what I was thinking about
Probably because I was having a conversation about Granola, and I was having conversations about Descript and some other products where I actually really think they do have that magical moment. They have that thing. And so now they're asking the question, what happens when you come back the 50th time? And how do I expand your capabilities and help you become an expert in that thing? And that's maybe where the conversation came from, which is a different prioritization conversation than you see when you're looking at the broader market. Yeah, yeah, yeah, I get it. Both of those things are true.
All of that makes sense. Are there any new products you've used lately in AI? I mean, there's a lot of stuff we're trying every week, but I mean something that's turned into a habit that you've picked up. No, I mean, like, yes, depending on how we want to frame it. Granola I use multiple times a day, every single day. And it is delightful. I mean, it's not new, but I continue to use ChatGPT and Claude to a degree that is continuing to surprise me. How about you?
I find myself using both Wordware and Replit. I know the agent framework that they launched isn't as deep as some of our other friends doing AI coding. And I know that I can AI-code even inside of Claude or whatever. But the simplicity of the already-live environment, where you type it in and iterate very quickly, is great.
I like what both of them do in their own way. And I find myself building small tools with them all of the time. Nice. I think whoever has the winning horizontal AI product is going to be as big or bigger than Google. I just keep getting drawn back to Claude. It is frustrating in a way because you want to have a portfolio company. It's not a company, but-
I sent it my health information. The photo stuff that it can then infer against is amazing. I took a photo of a plant and I'm like, help me rehabilitate this. And it's like, okay, here are the three things to do. It's amazing. Maybe the real answer is actually that this is not a fight between the old incumbents pre-2023 and the new startups.
The old incumbents will do what they do. They will go the way of FM radio, which is to say that they will still exist in 50 years, but be much less relevant. This is a fight between whether any new startup...
will be able to carve out a strong enough new workflow and use case that I wouldn't just use Claude for it or ChatGPT for it. Like, that's the war. Yep, that's it. Because every time you come across something, you're talking about whether it's a strong enough new pattern of behavior that I wouldn't just type it into this thing that is already in my life, is already in my pocket, and already has my context from our previous conversations. Yep.
Thankfully, we're obviously investors in one of those, in Claude. But the answer is not to deploy the whole fund, and all of our founder energy in the market, into one of two companies, is it? It can't be. No, but I do think that we should then be on the lookout to see if there's another assistant, a broad horizontal play, that has a different worldview that is opposed to Claude's. Which is to say, I don't know if this market's won.
The only thing that I have in my head from this past week is, I think I am guilty of underestimating how big the broad horizontal product is going to be, and how ever-present it's going to be in our life, and how early we still are in that cycle.
I don't think any of the stuff that OpenAI did in the last year was right. And this is where I was coming from with you. Like, I don't think they should put in those rails yet. I think that they should just allow it to be as good of a horizontal, broad product as it possibly can be. Like, if you and I are still just surprised at how often we can come back to it for value, it's just going to ride the curve. So two points. One, that was my managing-up comment though, Fraser. My managing-up comment is that that is the fault of the product.
A good product helps you understand how to use it better over time. And right now, the affordances of the way that we use these products, and the way they speak back to us, do not help make us better. They do not make us experts over time. Given the number of hours that we have spent in these products, and especially given that you were actually also on the builder side for these products, Fraser, we should be experts, and we're not. I would fault the product for that, not just the market.
The kind of second point is, I think a lot of people are trying to attack generative AI from a really verticalized approach. So what is a vertical SaaS way to use this for finance and operations, or for blah, blah, blah, blah. And there are some great success cases there. You know, I'm using this for medical scribes, and you get a company like Abridge that is doing incredibly well, and so on and so forth. But
You said something that I want to reflect on, maybe for the future, which is: are there other horizontal assistants? Like, for what set of customers in the world does Claude or ChatGPT as a structure just not feel like the right horizontal assistant in your life, the one that is there every day?
I had not really thought about that before. The doctor-scribe example is that. It is an everyday use case. It is broad to that person's career. I'm sure as they build out capabilities, it will do more than just transcription, of course. But it's a horizontal, not vertical, view of the problem that you just mentioned. Because of course, that's what generative AI is generally so good at: tackling broad, long-tail things. On the managing up, I get it. I agree with you. I think you did the right job of pulling it back to where we are on the product. I would be so cautious about forcing stuff too soon on that, if I was still running the product, because of the risk that it becomes Clippy. I think that's
a very dangerous place to go too soon, because the cost of getting it wrong is so high. I didn't even think of using it to tell me that. When I adjusted my recipe for my roast three hours ahead of time, I had to tailor it perfectly to what I wanted to do. I'm like, I want to eat at 5:30, and here's the stuff that I have, and here's an actual photo of the cut of meat. Tell me it all.
And if it had come up with a suggested reply, there was like a 10% chance it would have gotten what I actually wanted, and it would have been annoying. But Fraser, that's literally the problem. Suggested replies, knowing these companies and how fast they're running, was probably one dude for two days who built it. That's right. Okay. And shipped it. And so of course it's bad. Yeah. Okay. We're saying the same thing then. Right. And my point is, actually, the product vector that might need the most improvement. Right. Sure. Is related to suggested replies.
And maybe some structure and ontology on top of that that helps teach me how I'm supposed to think about what replies I'm supposed to be giving. Why did these suggested replies come in this way? That would teach me how I might want to write in the future. Fair. But like, memory was clearly something that deserved to exist in the product because it will make it better. But you can't put one person on memory and then ship something that's janky. I don't know. Do you use the ChatGPT memory feature and derive value from it? I certainly don't.
I use it in advanced voice mode sometimes because I will say something and then I will say, I want you to remember that.
Uh-huh. And like, is it useful down the line? No. Well, I have not found the other side of that use case. Yeah, of course. Like, no. That's the thing where it's so obviously important, but if it's done poorly, it's just going to suck. Done poorly, or too soon, or with not enough resources, or not thoughtfully enough. The other observation that I had from this is, I used to think that companion was hokey. I used to just think, why personify it? This is all hokey. What are we doing?
And I'll tell you, I wasn't at the Her moment on Sunday, but I was like, this is kind of my buddy. Like, this is my buddy. I don't know how to describe the tone that Claude had when it told me not to use chicken broth. It did it with a little personality and it built rapport. And I was like, interesting, interesting. All right. Should we be done? Yeah, I think we should be done for today, Fraser. I come away with new thoughts, and that's added value no matter what anybody else is thinking or whether anybody's listening. I don't know if anybody's listening to this or not. As usual, Fraser, I feel like I come away smarter from chatting with you. Thanks so much. Cool. See you.