There's almost as much of a chance that you mess up the thing you have as that you make it better. I think the reflex for most AI products is automation of a workflow,
which necessarily requires a new behavior. My bar to try a new product is exceptionally low. Like I said, great joy to go and tinker and explore. And then as Nabil knows, my bar to stick with the product is exceptionally high. But I do think you're building a fundamentally different product if you are trying to replace what someone's doing versus just superpowers.
Hello, everybody. Welcome to Hallway Chat. I'm Nabil. I'm Fraser. Welcome back. And today we have a special guest. We have Chris. Hey, Chris. Hi, everyone. I'm Chris. Chris is CEO of a company called Granola, frequently on the short list of AI products that people use every day. So if you haven't used it, you should try it. And it has some pretty different product philosophies. And so we thought we'd have a little product conversation with Chris today.
And from there, who knows where it's going to go. Before we get to product, Chris, we have this topic that Fraser and I have had sitting on the Hallway Chat list of things to talk about for a while. The periodic thing that happens, I don't know, a couple of times a month: a former founder comes to you and says, okay, I've shut that thing down, I'm now thinking about what I'm going to do next. The second-time founder thing. And I've had that conversation I don't know how many dozens and dozens of times over the years. It feels fraught.
Because you're going to maybe over-rotate from the last thing that maybe didn't work or maybe know too much about what you're going to go do the next time and how hard it is. I don't think we're going to do the full bio and how granola happened, all of that stuff. But I think if I started a new company tomorrow...
It would be a really weird and hard road to have the courage to look at all the AI note-taking apps out there in the world and then be like, you know what the world needs? One more. One more. So talk a little bit. Can you just talk about what the journey was like for you? Sure. Yeah, yeah. I'm happy to talk about it. It can be useful to hear people's stories. Like my formative experience was the last startup I did and that company was called Socratic. It was...
an AI tutor on your mobile phone, and it was aimed at high school kids who were tackling a homework problem. You'd take a photo of your homework problem and Socratic would try to teach you how to do it. This is
the previous wave of AI — our AI was linear regression models, you know. It was a very, very different type of thing. It was quite successful from a product and usage and growth standpoint. Shantz and I ran that company for five years and it was acquired by Google, and it had something silly — when I left Google, it was getting 4 billion queries a year, which is a lot for an iOS app that's, you know, primarily in the U.S.
I think every founder has scar tissue from their — well, a lot of scar tissue, scar tissue all the way down — but, you know, the company-specific kind. And what we didn't do with Socratic: we were just one of these, you know, we're going to get huge and then we'll figure out the business model later. And while that is, I think, sometimes a very good strategy, or can make sense,
For me personally, I knew in my next startup, I wanted to work on a product where it was very clear who was going to pay for it and why they would pay for it. Whereas in education, I think the main mistake we made is that we built a product for students. But in high school, the actual person who would be paying for it is the parent. So I knew that.
Going into this one, how did you pick a category? I talked to other second time founders and they were way, way, way more analytical about it and process driven. I talked to one guy and he spent a while exploring and he came up with 10 ideas. And I think he spent like a month on each of them and basically saw how far he could get in a month and what the pitfalls were. And then he did a crazy analysis of what to choose.
But you can over-index on what you know. And the reality is you know 1% when you start. And the whole space will be defined by the 99% that you don't know and will only discover as you work on it for multiple years. So I think there's a bias of the known versus the unknown, which can be very dangerous. What I did is — so Google bought Socratic, and then I quit Google knowing I wanted to do a startup.
And I didn't have an idea and I didn't have a co-founder. And it wasn't a hard date, but I gave myself a year to explore. And I wasn't looking for a startup idea on day one. I was looking to play. I'd been at Google, been running a team, been busy, and become a father recently. And I had none of the headspace needed for that
creative exploration that has no specific goal at the end of it, if that makes sense. Yeah, there'd been no open-ended play. You're very deterministic. You've got OKRs, you're at Google, you got a thing to hit. Yeah, exactly. You hadn't wandered. Yeah. So here I was like, okay, I just need to play with stuff because usually the real signal comes from when you're like messing around with something for a while and you can kind of build these intuitions. And like, as luck would have it,
like two weeks in, I just started playing with GPT-3. And what had happened is that the Instruct version had just come out and I was instantly hooked. But from like a
Oh, this is different. Most people in tech have had this experience in the last three years at some point, right? Where you play with it and you're like, oh, this is different from what came before. My mental models break. What's possible? My intuitions about the technology are way off, because it can write a paper that's pretty impressive, but it can't do basic math. What the hell's going on there? You know, all that stuff. And...
I just basically built shit for myself and to play around with projects. Were you doing a lot of interviews at that time that led you to something like Granola? Were you taking a lot of notes? Was that part of research you were doing? No, no. Conversations with your kids? Wait, wait, wait, wait, wait, wait. Before we get there, what's like one of the silliest ideas that you explored and then tossed away? I'll tell you the main, like the main one. So it was like back to this, like wandering in the desert thing. I was like, okay, I just quit my job.
I'm living in London. I don't know that many people here. I want to do a startup. I don't have a co-founder. I don't know what to work on. Like, there's probably a lot of
processing, thinking, exploring, you know, and instead of getting a therapist, I was like, okay, I'm going to journal. You talked to ChatGPT about it. Yeah, yeah, exactly. But what I did is like, okay, I'm going to journal, and writing in a journal is kind of slow. And I was playing with LLMs and I was like, man, I wonder if I could just talk — like, could I just talk a diary or a journal? And I think there are these staple ideas that everyone builds as they're like, oh, you know, I want to do an AI thing or whatever. I think this is one of them. But again, I didn't
do it as a startup. I did it for myself. I built this really shitty iOS app where I would press a button, put in my AirPods, walk around London for 10 minutes talking to myself. Luckily, with AirPods in, you don't look crazy. And then it would transcribe it.
And then it would turn it into a journal entry, right? That's like the first version. And then like the really cool stuff was now like hyperlink every proper noun or name of a person. I want to see everything I said about my daughter. I want to see that on a timeline. And this is the kind of thing where it's like, it's one API call and you can be like, oh, pull out sentiment or pull out the core ideas here. And then it was just like, you realize what you're really messing with.
is the structure of knowledge and information. And that thing is like malleable and stretchable. And that's what LLMs let you do. And so from this like ramble, from walking around, you could end up with specific and distinct ideas.
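That "one API call" idea can be sketched roughly like this — a hypothetical Python outline where `build_journal_prompt` and `complete` are invented names for illustration, and `complete` stands in for whatever chat-completion API you would actually plug in (none of this is Chris's actual code):

```python
def build_journal_prompt(transcript: str) -> str:
    """Ask the model to restructure a rambling voice memo into a journal
    entry, tagging proper nouns so entries can be cross-linked later."""
    return (
        "Turn this rambling voice memo into a concise journal entry.\n"
        "Wrap every proper noun (people, places) in [[double brackets]] "
        "so it can be hyperlinked, and note the overall sentiment.\n\n"
        f"Transcript:\n{transcript}"
    )


def complete(prompt: str) -> str:
    """Stand-in for the single chat-completion API call (OpenAI,
    Anthropic, etc.) that does the restructuring."""
    raise NotImplementedError("plug in a real LLM client here")


prompt = build_journal_prompt("Walked around London; thought a lot about my daughter.")
```

The point of the sketch is that the "structure of knowledge" lives entirely in the prompt text — swapping "journal entry" for "timeline" or "sentiment report" is a one-line change, which is what makes the format feel malleable.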
Anyway, you get it. This was all good to play around. It was super fun. And at some point I looked up and I'm like, is this a company I want to start? And I was like, hell no. As much as I'd love to build this tool, and it's interesting, I thought it'd be really challenging to build a business around it. I think there will be one. But a couple of years ago, it seemed like much harder than it is today, I think.
For what it's worth having heard that story, you said at the beginning, it's not that applicable, my particular story. But I actually think a lot of folks who start their first companies go in with a kind of like raw naivete. They were playing in college or just afterwards with some friends and they go build the thing.
And then even if they just made millions of dollars, they come out and they're like, oh no, but I'm so much smarter now, I would do it differently. And they often go down the deterministic approach. And I think it's completely wrong. Like, actually, the number of people that have done the thing you mentioned earlier that your friend did, a month per idea,
or I've had friends who did, I'm going to write an investor memo as if I was a VC before I start the thing. I'm going to write 10 of them and then I'm going to go through the process and I'm going to take a year. And I have lots of folks that have gone through that. That feels like a pretty bad way to find net new things, unless it's a really specific kind of startup, which is you are trying to go after market arbitrage. Like you were trying to look for a seam in the world and then build some slightly better version.
Which is very different. It's faster horses versus disruptive thinking. We've known each other for a long time. Socratic was also a Spark company. And I think of you as a founder who is like going to wander and then like actually pull a seam and see where that leads you. I can see the connection between your journaling app and Granola, to be honest. We don't have to go through every moment of that to know that, oh, the thing that you did there is just listen to yourself. You got to a spot.
And then you didn't stop. You didn't say you weren't satisfied. You're like, okay, so this is good. Can we get better? A little bit further. Okay, this is good. Can we get better? And you just keep going. I've actually thought about this a little while, but what's amazing is that when you're actually doing a startup,
It's almost impossible to have the time to do that exploration. And oftentimes the newness — the core insight or signals that you're going to build on — you have to be in that state to look for it. It's a hard thing to do both. And actually, I was lucky: I had the luxury of a year to explore. Now that I'm busy building Granola, I'm kind of like, man, I wish I had more time and space to really explore and think deeply about these new things. Because especially in AI, there's so much
newness, like new things to invent, that approach and that perspective is super valuable.
Well, I hope you find a way to stay open to it. I think that is the fight, because it wasn't just one starting gun that went off in GPT land, as you know — it wasn't just a thing that happened three years ago, where you encapsulate your worldview in amber and then go make your bet and run with it. This is one of those situations where it changed again, and it changed again, and changed again. And so, yeah, I don't know how you fight for that. If you figure that out, please let us know, because we will tell all the founders. Yeah. Yeah.
I listened to that story with great joy. I love those stories. I could listen to those types of stories all day long. The idea that you had given yourself a year is part of it. But then the idea that you were both disciplined enough to build and experiment and explore, and then malleable enough to sit with it in this philosophical "what is this? what do I have?" — you can see the dots connect right to what you're doing with Granola, right?
But like if you had set up, okay, we're going to run this experiment for one month and then move on. I can't imagine you end up in that mind space. Yeah. So I think,
Like product's tough, right? It's like the left brain, right brain. You kind of have to, you have to do it all, which is difficult. And I think an important part of the story is that once I met my co-founder, Sam, and we decided we were going to prototype stuff together, we kind of took insights that he had and insights that I had. We fleshed out different directions or ideas. And then we showed it to like 30 people. And I think that is the other side of the equation, which is super important.
So I think you need to build your intuition. What do I know? This is from my limited life experience, right? But you build an intuition of what you want in the world, what you would use, what you think is important. And that needs to be pretty deeply felt. And then it's really, really important to put that in front of a lot of people and look at it from their perspective. And out of the, I don't know, bunch of ideas we shared with folks,
people's eyes glazed over on all of them, except for the Granola one. And it wasn't exactly what Granola is today, but it was the same idea. And then Sam and I were like, I guess we're making another note-taking app. That was the ethos going into it. We're like, all right. Because literally their eyes sparkled. And I was like, okay, this is the one. So I think you could get lost here. Well, there's a product philosophy to this product
that it's strange to me we don't see more often. Every other note-taking app does exactly what you'd expect it to do, at least historically. I'm sure they'll all copy you very quickly, and we'll get there. But they do the thing you would expect, which is almost the same as your journaling app: they take your rambling conversation and spit back out this table-of-contents summary, which is never quite good enough. It's okay, but...
You remarkably never find yourself ever reading those generic meeting notes again. It's just done. What you do is different.
We've been hemming and hawing for a little bit about what do you call the way that you built this product? What is the Malcolm Gladwell version of explaining this product philosophy? Because it looks like a notepad. It's a product philosophy rooted in invisibility and flow. I think what's confusing about Granola is that today, what Granola does is that it helps you take notes in meetings. And therefore, it looks like all the other AI meeting tools.
bots or apps out there. That's not really how we think about Granola. We all kind of look like the same thing, but I think we have very different trajectories. The thinking behind Granola, at its core, is that it's a tool for you to do your thinking, to do more, right? And that might seem like semantics, but I think there's a fundamental difference in terms of what we're doing. And so my co-founder Sam, he came from the note-taking, tools-for-thought world as well. And he had a bunch of thoughts on this. And
I think when you set out to build a tool that at its essence is really — we want to be the next version of paper and pencil. Like, there was the bicycle for the mind; this is the motorcycle for your mind. Meetings are a very convenient place to start, because, first of all, transcription in meetings is a pillar feature: LLMs can take a rambling transcript and make something useful out of it. It's an easy moment to build a habit around,
because meetings are scheduled, so we can send a notification — and habit-building is really, really hard in product. But that's not the be-all and end-all. We might focus on meetings for a while because there's a lot we can still do to make it better. But we really want to be in this place where we help you do better work, do better thinking.
You first started by calling it something that feels similar to any of the other AI note-taking apps. And I would say the thing that I love about it is that it's not. You held up a pen and a paper, and I was going to say it feels far more akin to pen and paper or Apple Notes in the sense that it's...
It's familiar and it doesn't try to over-promise on the technology side. And I think that there's an awful lot of discipline and product craft that has to go into getting it to that point. All of the complexity is hidden away from the end user. Yeah. If you take this core belief that, hey, we want to be a tool, like pen and paper, that it just works, that people can grab, that doesn't get in the way, that makes them better. And you really think about what it takes to do that.
I think you can map back a lot of the perhaps kind of weird or unexpected product decisions that we made. Like the fact that it's a Mac app. I think that was like something very different. Like it's an app that's on your computer. And like the thinking there is like, it needs to be really easy to grab, right? Like it needs to be as easy to grab as your notepad. And it needs to, like if it's in a tab in your browser, it gets lost. Like this is literally something that happens. Like you try to take notes in the browser tab and people have too many tabs open, you'll never find it again. Or like,
you need to be able to use it for any meeting you have. If you have to think about, "Oh, this is a Zoom meeting, it's going to work, it's not going to work on a Google Meet meeting or a Hangout." Or, "You know what? This in-person meeting." There are all these requirements. You have that lens of what's it take to be on the present tool.
and I think Granola makes a lot of sense through that lens. But if you look at it otherwise — why would you build a Mac app that only a tiny percentage of the world can use? You know what I mean? And when we started it, only people on macOS 13 could even use it, because before that you couldn't capture system audio. So it was a tiny little sliver of humanity. It seems like a dumb idea, but it'll work out. Something about the way you just phrased that —
The reflex for most AI products is automation of a workflow.
which necessarily requires a new behavior. So you can imagine the canonical: oh, I want to make something a little bit faster, so I've got a bunch of if-then statements, I plug in an LLM, and it does a thing, and I want to automate that. It's a very engineering mindset. And I need you to engage in a new behavior: you're going to read table-of-contents transcripts after you finish a meeting. That's your new behavior, because we just ran this workflow. And I think, in a way, what philosophically you're talking about is: instead of automation of a workflow,
which requires new behavior, it's augmentation of a current workflow, which you stay in flow about. And so it's the same behavior, but better.
Right. It is, you know, the code-gen equivalent of Devin, which is going to go off and do a whole bunch of things. But I feel like I have a distinct loss of control — maybe we'll see whether it worked out or not when it's done, but I have to go and evaluate it afterwards. That's the automation-of-a-workflow kind. And you're like, no, we're closer to GitHub Copilot.
You're going to code the way you're going to code. You're going to do the thing. You're going to be in flow and we're just going to help you do it. Yeah, absolutely. I think there's like a, I mean, there's a spectrum, but I feel like there's almost like a philosophical stance you need to take when you're building an AI, which is, are you trying to do the stuff for the person? Are you trying to outsource it?
to the AI, or are you trying to give the person superpowers to do it better? I went to this talk by David Holz, the founder of Midjourney, and he went down this rabbit hole where he got really obsessed with the AI-versus-IA debate, back in Marvin Minsky and Engelbart
times. And it was kind of like, oh no, the future of computing: artificial intelligence, computers can do everything. And then you had Engelbart being like, no, no, no — we build these tools so that humans can do 100x what they could do before, can solve problems they could never solve before. And I think in some ways it's kind of a silly distinction today, because it's like, I want AI to help me augment my intelligence. But I do think you're building a fundamentally different product if you are trying to replace what someone's doing versus give someone superpowers. And Granola is 100% in
that category. The promise is you jot down what you think is important and then you have this little buddy sitting in the meeting over your shoulder who then fills in all the rest when it's needed. I don't think you deliver that product if you think of the world through that first lens. If you're trying to automate, you end up with a very bad experience. I feel like that promise is super high, but exceptionally hard to deliver. And at the end of the day, I don't think most humans want that, right? Like
we have a lot of insight bouncing around our heads and like we capture that in little snippets. And the idea that an AI is going to totally automate like the insight that you want to capture in that moment feels like a fallacy. Yeah, I think it's super easy to,
If you look at the world and there are all these meetings — what's the output of a meeting? It's a text document. These are notes, right? Okay, my job is to attend your meeting, record the meeting, and then generate this text. If you look at the world that way, then you build a certain type of product. And I think what humans do in the world is
so much more complex than you realize until you actually really dig down and think about it. Until you're trying to describe to an LLM what they're doing. Yeah, exactly. What you have to do. That's interesting. So
That reminds me, you wrote this blog post on Every, which is an awesome publication. And one of your big insights was that you can't write a rigid set of instructions to get these things to work. You have to treat the AI like a smart intern and give them context.
Maybe you want to elaborate on that. I thought that was a really interesting way to think about it. I think it basically comes down to this idea that the world is really, really complex. So if you have a world that's very complex and you could be put into a meeting of any arbitrary kind of situation and you need this product to write good notes.
it's almost like you have to view it like the values and goals that you have in your life. I think that's the ultimate end state, you know what I mean? It's: here's what I'm trying to achieve in my life, and here's what I'm trying to achieve in my job, and here's what I'm trying to achieve in this meeting, with these people, these relationships. And it's completely impossible to list a bunch of instructions — if this is true, then do this; and if this is true, then do that — because those instructions will 100% conflict. So it's
"Be concise, but also remove all personal information or chit-chat or banter from the meeting, except if it's..." — well, you know what I mean? There's no way to write those things. Whereas I think the models are now smart enough that if you give them context as you would an employee, then they're much, much more likely to talk about the stuff that you care about.
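The contrast being described — a brittle rule-list prompt versus a shorter, context-first brief — might look something like this hypothetical Python sketch. The wording, names, and structure are invented for illustration; they are not Granola's actual prompts:

```python
# A brittle rule-list prompt: every rule spawns exceptions, and the
# exceptions end up conflicting with each other.
RULES_PROMPT = """Summarize the meeting notes.
- Be concise.
- Remove all chit-chat and banter.
- But keep personal details if they matter to the relationship.
- If it's a sales call, then... (the if-thens inevitably conflict)
"""


def context_prompt(role: str, goal: str, transcript: str) -> str:
    """Brief the model the way you'd brief a smart intern: who you are,
    what you're trying to achieve, then the raw material."""
    return (
        f"You are taking notes for a {role}.\n"
        f"Their goal in this meeting: {goal}.\n"
        "Write the notes they would actually want to reread later.\n\n"
        f"Transcript:\n{transcript}"
    )


p = context_prompt(
    "seed-stage investor",
    "decide whether to take a second meeting",
    "...raw transcript here...",
)
```

The second form is shorter and, as Chris notes, specific to the user: the role and goal change per person and per meeting, while the rule list tries (and fails) to anticipate every situation in advance.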
But yeah, it's funny — it's easy to say; it took us a while to get there. And the prompts are completely different: they're much shorter, and they're also much more specific to you, right? To try to understand what goals you may have in that interaction. Would you ever expose those kinds of prompts, or that kind of control, up to the user? Because there's a conflict there between the thing that
just works and feels right and, you know — as you use any system, you ride the curve. It's two years later; you can be an expert at this system, right? The first time I ever use a diffusion model and want to make a sign in ChatGPT, I want,
before it passes off to DALL·E, I want ChatGPT to augment my prompt like crazy and try to make it make sense, so that the thing that comes back is kind of good. If I'm 10,000 images into Midjourney, I don't want you to change a single letter, because I have now built some sense of nuance of what I want, and I know how to express it in a way that I wouldn't have a year earlier. Is that true here? Do you think you'd ever put in an expert mode? So,
Yes, we would. But my philosophy is the most important thing is that it just works out of the box for people. They don't have to think about it. So I think that's a non-negotiable. We have to do that. I think once it works out of the box and people are like, oh, hey, this is pretty good. Now I want to make it amazing for me in these specific instances. We definitely want to let people do that. We haven't figured out how to do that. And I think the thing that scares me there, or you guys probably have big thoughts on this, is...
It seems really easy to shoot yourself in the foot as a user, right? Or to end up in a dead-end kind of situation. A good example here is system prompts on ChatGPT, right? I went in there, I wrote a system prompt, it made stuff better. Four months go by, I look at it, it's no longer correct. You know what I mean? It's out of date. I never would have noticed it.
I think you have to be pretty smart about how these are living, breathing, evolving systems. Also, we want to be able to change the underlying model. That's the other thing, right? Which might break if we let the user have a whole lot of control, like the middleware. It's definitely something we want to do. It's just, it's something I think we haven't figured out exactly the right way to do it. Nabil and I have strong opinions as you insinuated on this, and I feel like we're usually like polar opposites. Yeah.
I put this in the bucket of what you just said, where it's like, my thought is that is something you want to get to, but it's five years away. And I hope it always stays five years away from you. I like, I understand why system prompts are a thing. I've never changed mine. I can only imagine how elaborate Nabil's is. I think that the best technology products that I've used feel invisible.
I think most people don't even consider, when you turn on a faucet, all of the complexity that's literally buried in your wall and underneath the road to get clean water flowing into your sink. There's a whole bunch of reasons why, but our taps are so simple that we just turn them on and the water flows. The thing that I love about your product is that you have such an opinionated point of view where it does feel like
like you've made decisions for the end user to make it as simple as possible. And for the most part, it just works. And I'll tell you, Chris, like I have the utmost respect for you. And then every now and then I stumble onto that little overlay where it says like, here's your custom templates. And I'm like, oh no, Chris, just get rid of that. Why is it there? I want more custom templates, Fraser. What are you talking about?
So the model we're pursuing — who knows if it'll work out — is basically minimal UI: try to keep as little visible to the user by default, but if you go digging, give as much control or complexity as possible. And I think you kind of need both. What was interesting is that we built the first version of Granola,
I spent a year with people using it, right? Getting feedback. And after a while, the feedback was like, oh, the notes are all right, but really I wish I could structure them — I wish I had templates. So we built the first version of templates, basically in the app. And what happened is people didn't really use them. It's more work. And I'm not saying the templates weren't great — we can make them better, and people would use them more — but the vast majority of people didn't use them. And then we're like, oh, okay,
actually, what we should do, now that we have these templates of what we think are really good notes for all these different scenarios, is try to deliver those notes automatically for you in all those different meetings, right? And then we basically spent all that time and energy trying to get the out-of-the-box notes closer to what you'd get if you had chosen the perfect template. That's a case where I'm completely in agreement with Nabil, whose advice, which I've heard him give a number of times, is: do not listen only to your
power users in the earliest stages and deliver what they want. This is why I don't listen to Nabil. I think there are at least three things you just embedded in all of that that are worth pulling out for a second, Chris. The first is, you know, the first time we met talking about this
company, it was a different product, even though it was an AI note-taking product. And you didn't mention it earlier, but Socratic was a Spark company — I met you a long time ago. And a lot of the habit, the kind of rhythm of what you're saying — the thing that I pick up, if people don't — is, I think, close to the last line of the essay that you wrote in Every, which is just: how does this product make me feel when I use it?
And having the empathy to try to pierce through whatever customer feedback you're getting — to listen to what they're saying about how they feel, versus all the other nouns they happen to be using — and then not being satisfied. Those are the two traits I assign to you in the way you navigate product. In this particular situation, the thing that you need to have in order to become an expert, in order to want to
open up the templates tab and type in your own prompts. And I'm a person who, before Granola, was using a product called Superpowered, which was great. And one of the things I loved about it is that it gave you prompt control. And so I had spent months refining down prompts —
different prompts for the four or five different types of meetings that I had — and getting them right, to what I wanted, and so on and so forth. And by the way, then readdressing them a month later and changing them again, Fraser, because these are fluid things that change all the time, and all the rest of that stuff. That is not the way most people are going to behave.
But the thing that's true is, anything that you do a million times, you start to build a nuanced sense of. So the first time you do something, you just want the job done. But the act of becoming an expert in anything rests on basically four things. You need to be in a valid environment — in other words, you need to understand the rules, and it repeats; it needs to be an ordered environment.
You need timely feedback. And then you need deliberate practice doing that thing. That's the nature of any game you play regularly: you get better at it and become an expert, right? It's valid, it's ordered, it's timely feedback, and it's deliberate practice against that thing.
And I actually think that meetings fit almost all those categories. It's a thing you do regularly. You're going to look back at the feedback afterward — the notes you took — if you use the notes, ideally, and you'll see where it did better and where you did better. And I think when you get to year two or year three of somebody actively using Granola, it's going to give people a really fine-grained sense of
where you're messing up or not. Eventually, the little prickly bits of Granola that aren't quite right for my particular use case are going to crop up. And it's just about user control at that moment for the 15% of people who care, because it's that 15% of people that then augment that stuff. Like, Fraser, I get it. You're not going to do it. The whole deal is, at Spark, I'm going to be the guy that goes and does that.
And then I just need the power to go hand that back to Fraser and be like, yeah, I did all the work. I prepped all the stuff. Like, here's your new Spark templates. What I want to do... I actually think
notes are fine. I think notes are going to be less and less important as a format in the future. What I'd like to build... I want to build a million things into Granola, but something I think would be cool would be artifact generation. You know how there's the chat on the right-hand side and you can kind of chat with a meeting or whatever? It's like a Claude-style thing: okay, write a follow-up email for this, or generate an investment memo, whatever it is, whatever the next step is.
And there I would like to let Nabil write a super, super, super detailed, maybe multi-step, prompt or workflow and then share that with Spark. So you can basically be like, here is the Spark secret sauce, whatever. Like a company analysis, whatever. Because I think there's something pretty cool there. Because if you can get the one person who's willing to put in the work,
then the other folks can benefit from it. And then there's a social accountability, so the thing doesn't wither and die; if people are actually using it, they're actively investing in it.
I'm such a fan. Like you're here because you have a note-taking app and you said, yeah, note-taking isn't really the thing. Here's what I want to do. And that's my rebuttal to you, Nabil, when I was making the face is I don't think you want to cater to the 15% power user on note-taking. Like I think that's a dwindling, small, uninteresting place to be. But like if you own the place where we're capturing our, I don't know, like pen and paper,
In a world where this technology of artificial intelligence and everything else exists, I think there's so much that you can do with that. And the breadth and the depth of that value has got to be much larger. I don't disagree with that. It's just a horizontal product, right? It's not a vertical product, and not every startup is a horizontal product. And in a horizontal-product world, the question is: will the LLM know better how
I take notes in my accounting practice in Bangladesh, or will Chris figure that out? Or is there an intermediary, if somebody has been using and loving Granola, where one of those accountants in Bangladesh becomes an expert? You see it in lots of horizontal products. They need to be easy enough and simple enough to get started.
And then you have this layer of experts that kind of emerges bottom-up, like the economy of people making weird Notion templates that you download. And that doesn't mean you make Notion for the people who make Notion templates. That would be the wrong thing. But it's part of a viable, important, healthy ecosystem.
I get it. And Chris's word was atrocious when he was talking about the templates. I actually think it's thoughtful how you've integrated them. And they do give Nabil the power, and I don't have to deal with it; it's just out of sight, out of mind for me. And he's able to get it to the point where he wants. And that's probably the right split: I don't ever come across it,
And then Nabil has it for when he needs it. Yeah, we just want more AI products to do that. Can you talk to all of them, Chris? Can you talk to all the other people? I mean, I don't know if it's a winning strategy. I mean, like, we'll see how it all plays out.
I do think that people who like Granola like it a lot. I think that, you know, I'm comfortable saying. You have taste and you've made a lot of opinionated decisions, and the combination of those two things generally works out pretty well. One big decision that you've made is you don't show the transcript, which the first couple of times that I used the product, I thought was
Weird. It was weird. Weird is maybe the word. You can't see it. You know you can't see it, right? Like most people, a lot of people don't discover this: you can click on the little dancing bars and then look at the transcript in real time during the meeting. Can you see it after the fact, though? You can. It's just hidden away. I think the reality is transcripts are really long and unwieldy, and just a pretty crappy format for digesting information. So it's there. It's hidden away on purpose. But it's...
What people do want to do sometimes is go back and be like, oh, damn, wait, really? They said that? What was this point? And at that point, you want to be able to zoom in and be like, did the transcription mess up and completely misinterpret what was said? Or is the LLM messing up? Or is this real? Or what was the context around that? I think it's super important to be able to do that. But full-on transcripts? Not super useful. I have a question on process.
As I've listened to you on this call, you clearly have this big vision, right? You're like, oh no, forget the note-taking thing; this is just act one to get us over here. You have all sorts of short-term requests flowing in from end users. Like, how are you prioritizing where you're spending your time and what's getting done? What will get shipped in the next three months versus what's pushed out for a year from now? Right.
Oh man, it's tough. It's funny, when we built the first version of Granola, there was basically no post-meeting anything in Granola, right? It just generates the notes and there's the chat. So you can use the chat to generate action items if you want. It's not great yet, but there's nothing really useful post-meeting, and
I was like, okay, we're going to launch that and then we're going to get to the post-meeting stuff right away. And it's been, I don't know how many months since we've launched and we haven't touched that. Just because there's so much just to do on the basics and make it really good. I think it's tough. I'm curious what y'all think. On one hand, what we have, the people who like it really like it. It's growing. It hits a nerve.
On the other hand, we're in the advent of the latest generation of AI, which means all this new stuff is now possible, and it's going to happen, and it's going to happen quickly, right? And Granola is like 3% of what I hope it will be, you know, not that long from now.
And I think, I don't know, I feel like traditional product wisdom would be: you have product-market fit, just do that. Don't mess with it. You know, when you try to build more products, there's almost as much of a chance that you mess up the thing that you have as that you make it better. Like, more features oftentimes are not going to make it better.
But in this space, man, like, I don't know, people ask me about competition and like who I worry about. Like I worry about a startup launching tomorrow more than I worry about the big companies personally, because I think like the stuff's evolving so quickly. And I think...
Like AI-native products built on these new building blocks will look and feel very different. So I think it's tough in terms of what we prioritize. When we launched, we had four people on the team, which was not enough given what happened post-launch in terms of the growth. So I think the first six months after launch, we were digging ourselves out of
product, engineering, and company debt, in terms of just being super understaffed for the volume of what was happening. And now I think we're going to try to split it kind of 50-50, right? Like we have a lot of companies coming to us being like, hey,
we have a bunch of folks using Granola, we'd like the whole company to use it, you know, and there's a lot we need to do there to mature and become more sophisticated. At the same time, I feel like the product we have today, if it doesn't evolve quickly, will feel outdated and obsolete in, I don't know, 12 months. I think both of those things are true. If you were starting Granola from scratch right now, given everything that's going on, just internalize that for a second,
What would you do differently? What would it look like if you were trying to kill you? It's a great question. I feel like I should force myself to ask this question every month. Okay, here's my answer. Like, okay, what are the big things that have changed since we launched Granola, right? I see two directions. One is,
models have just gotten way, way, way smarter at the level of raw intelligence. The second one is multimodal. At first, it's images, but you can imagine streaming real-time video and audio into the model. The multimodal one is interesting. I thought it would have had a bigger impact by now. You could stream the entire meeting into an LLM, and we haven't done that yet. I think there's probably a product to be built there.
It is interesting, though, because at the end of the day, like a text editor as the interface between the human and the AI is a very familiar, very powerful, kind of like high precision interface. So I think that even though I think multimodal is coming, things might still look like text editors for a very long time. I think if I were trying to kill us,
I would maybe try to leapfrog the notes part a little bit. Like, notes are important, but basically: what's the post-meeting, post-notes artifact stuff? I would maybe focus on that. Why did you have the meeting? What was the point of the meeting? What are the outcomes that you want? And just try to do that really, really well. Because I don't think anyone's doing that. I think there's a lot of unexplored,
really powerful stuff to figure out there. The approach we took is like, we have to get your trust to get in the meeting, right? Like we need to be useful off the get-go so that then we can do X, Y, Z with all this context and like make your life easier to do this work. But maybe there's an alternate. Maybe you can just leapfrog that. I don't know. Fraser, what else has changed? What are the kind of like tools in the tool chest we maybe didn't have a couple of years ago? And also a little bit looking forward, if you're planning, you're planning for the next year. Where my mind went last,
As a user of granola wasn't toward like the technology that may be coming. It was more around how I use it and where I use it and also where I don't use it. There's a big part of my life where I'm still turning to the pad and the paper and the pen.
And it almost feels like you show up and you're like, nope, you can't use it right now. Sorry, put down the pad and paper and the pen with this tool and go use, Nabil, cover your ears, go use Apple Notes because like this isn't a meeting and you need to like, this is a product for meetings and we've optimized for meetings and any other time that you want the paper and pen metaphor, I'm there to tell you no. I like that. If it's a tool for thought, then what are the other places you're thinking? That's it.
That's it. Like, it seems weird that to prepare for this call, I still have an Apple Notes document (what a weird word) open, filled with my ramblings to show up with.
And that's a bifurcation of a use case that I feel like you want to own. Can I ask a totally different question? Please. Yeah, it's tough. It's a tough space to look five years out and be like, who are the winners going to be? What are their characteristics? I guess a question that we've always gotten (we still get it, though less now for some reason) and that I still think about is: what do you think, in the future, is going to be
owned by a general assistant versus a specific solution, a vertical AI-powered tool? Does ChatGPT or Claude... do you have one for your whole life, your personal life, your work life? Do you have 50? Is that even the wrong mental model? Where's that headed?
Well, Nabil's been on this journey with me over the past couple of months, where I came back and told him that even I was guilty of underestimating how broad the general products were going to be. And, you know, where we came out in past conversations was: for things where there's high utility but low frequency of use,
these broad horizontal layers are awesome. I can send it my health information once a year, or every other quarter, when I have it, and it's just fine. It's just fine. And so then the question is... I think there's going to be a very small number of them. I think it would be weird if I have one for both my work life and my home life.
That seems strange. I don't want the sketchy people in privacy and security at work to know my personal assistant information. I think we've seen enough history that people want to separate those worlds. Then where do we go? Nabil and I landed on the idea that if there's breadth in work, then you could see a horizontal work assistant carve out some space. I think we've probably had at least
three or four hours over the last year of us on this podcast, Chris, having some version of that conversation. I think literally maybe our second episode was: how do you not get hit by the tidal wave of ChatGPT? Where's safe ground? Where are you? How do you build on the S-curve of AI? It's a very frequent theme that will probably be interesting to listen back to in two years, frankly; in many ways, we're probably quite naive about it and how we'll talk about it then.
I tend to try and pick these as axes. And frequency is an axis that Fraser just brought up that I think is worth thinking about. Who wins is going to sit on this graph: if I only do it once a year, then yeah, I probably use a generalized tool. The more I move toward every day, the more likely I am to use a specialized tool. That's obvious. That's clear. I think the other framework I've been trying to think through lately is:
Both you and me, Chris, as founders, started in the Web 2.0 era-ish stuff and the wisdom of crowds era.
And I think a lot of what this is distilling down to, if we just set aside AI and all these other things, is that this latest instantiation is really the wisdom-of-experts era. We are taking PhD-level knowledge about a thing, encapsulating it in a model, and then letting you as a consumer access it. That's kind of what's happening. So why is codegen good
in Claude? Not because we took all of the cat poems on the internet and ingested them into an LLM and made it smart. It's because we actually took really well-written code and ingested that into the LLM. We took experts and their
expertise, and we are now trying to distribute that expertise to everybody. It's not always at a PhD level (it's not writing at a PhD level yet), but you can imagine that this is what the large LLM companies are doing now. They're paying PhDs by the hour to fill out math equations and put them into the LLM, over and over and over again, to try and build that expertise. And so I think the other axis here is:
is the thing that you are doing every day (if we take the frequency graph) something that somebody else in the world is incredibly good at, better than you? And is that pattern of behavior instantiated in the model? If so, then it will probably bleed itself into some type of user interface, some way for you to use it.
That's the other axis I think about. And I don't know what that means for how Granola navigates itself, but I think it's true in every category. There is probably somebody who is
better, who has the same thoughts as me coming out of a meeting, but has 10 times better efficacy at recording the nugget of the idea they just came up with, and then, as a habit, getting back to taking action on that idea, executing on it, and making it happen. And what does that mean for who I should speak to, or how I should come back to it, or what rabbit hole of research I should go down? All that stuff. And that,
capturing that wisdom of an expert and making the model nudge me to be just a little bit better in that direction: that's going to feel like the superpower. I've been listening to Nabil, and I have a slightly refined take on what I shared earlier, and it is this:
I feel pretty confident that there will always be a space for you adjacent to those broad assistants, Claude and ChatGPT. And here's the reason why. I think maybe last time Nabil and I made a recording, Nabil laid out the framework that there's some products where if you look back over the technology arc, new versions have arrived. Like IRC became Slack and Discord. And there's a whole bunch of products that have that arc.
They're durable. Like, your interface goes back to rock on wall, you know, right? People scribbling stuff. You held up a pen and paper. I think the fundamental interaction of taking notes is dramatically different from the fundamental interaction with a chat assistant, right? And then I also think that there's always going to be a place for somebody to...
Take what's here or here in this conversation and bring it into a world that persists. Related but also unrelated.
Where are we going to end up with wearables? AI pens? Wait a second. You asked us a great question. And you told us that you've been thinking about that for months because you've been asked. And now you get asked about it less and less. But you must have a great thought. So what's your opinion? I think that...
You move the time horizon too far down the line and it's like, who the hell knows? We might be fighting with sticks and rocks; the next world war will be fought with sticks and rocks. It's hard to imagine what that looks like. But I think for the foreseeable future, maybe a different axis is this idea of a power tool, right? A tool for an expert in an expert situation. There's iMovie, but there's always going to be Final Cut or Avid or whatever there was before. And this idea that if...
it is important in your job that you are extremely, extremely good and efficient at some task, then your tooling will pop up to support that. And I think there's an axis there where the people who are the best in the world at something will use specialized tools. And for us, there's a question of: are meetings, or knowledge work, one of those? Or is it in the more general category?
I think there's enough, at least for the next couple of years, specific to the workflows around meetings. The biggest thing you don't realize about meetings is like,
people are back to back. Meetings overrun, which means people have zero time to do anything with the notes, or anything else, between meetings. And the hyper-efficiency of getting from one context to the next is actually incredibly important, right? Dumb things like opening the right Zoom in the right place on your screen, and having a place where you can immediately just start chatting, or whatever. Those things, or like,
I should be able to take my headphones off halfway through and switch, like, oh, this Zoom link didn't work, so I'm going to go to a different Zoom. All these things happen in the real world, and if you haven't been building in this space for the last two years, you wouldn't necessarily appreciate them. But all these paper cuts get in the way. And I think there's enough about that
that specialization really matters, at least in the short term. If you zoom out, people are like, you know, UIs won't even exist; it'll all be dynamically generated on the fly. Maybe, I don't know. I think that's kind of farfetched personally. But also, as a human, there's something nice about predictability, about knowing that this button is going to be here and not, you know, redesigned on the fly for me. So I think,
Again, for a little while, the specificity of the use case and all the thorns around it, the paper cuts, whatever you might get nicked by: there's a lot of value there. For sure. There's no way the UI goes away. We're in the MS-DOS era of AI. And there's a reason. Yeah. All of the proponents of the UI-less world have never
tried to design an interface where somebody has to explain the options to you. Sometimes you need to know what the two available options are. Like, I'm going to speak to my speaker: try figuring out the available apps on your Alexa without using your phone. It's insane. It's the reason you want to read a menu at a restaurant instead of having the restaurant read the menu to you while you listen for the next 15 minutes. The thing that's less clear to me is: right now, Granola is very much one meeting
at a time. That's how you interact with it. And we want to make it feel so it's much more like you interact with the set of meetings. Down the line, you jump into a meeting, you'd like Granola to prep you like the best chief of staff in the world would, right? You get like a dossier. You're like, here's what you need to know. And here's what's really important in this meeting. And you can see how like, oh man, if it doesn't have my emails, like,
it's going to miss out. There's something really important in that email right before this meeting that I need to know. And then you're like, okay, well, so Granola should have access to my email. But what else should it have access to? And then you're like, okay, am I granting Granola access to my Slack and my email and all these different services? Am I doing that for all these different,
completely vertical AI agents? That's the part where it's like, is that getting replicated over and over, or not? Do we have to win? Do we have to be the one core knowledge-work AI assistant, or can there be five, and is that actually preferable and better for the user? That's the stuff that's less clear to me.
My bar to try a new product is exceptionally low. Like I said, great joy to go and tinker and explore. And then, as Nabil knows, my bar to stick with a product is exceptionally high. Maybe the trite way of answering your question is this: the fact that I use ChatGPT and Claude all day, every day, and I use Granola all day, every day, is the starkest evidence for me that there need to be two different products here. Yeah. I would have no patience for it otherwise. I'm not looking to add complexity to my life just to have new products come into it.
I was reflecting on, Chris, your earlier comment about what is different about the world of AI and what's going to change over time. The other one is: at what point of context does it become the really good chief of staff who is assertive and kind of knows your weaknesses? They're not just giving you an information dump;
they have judgment. And so when do you cross over from utility to something that has judgment? That's also a thing AI products have taken a stab at a couple of times, often in the family-therapy kind of situations and so forth. And in constrained areas, you see little bits of it.
But I would suspect that with enough context, it should be saying: you're about to go into a meeting; by the way, you tend to ramble on when you talk about this subject, so keep it short, dude. There's that measure of it. My father had been working on a manuscript for a while, and I fed that manuscript into Gemini and into ChatGPT. And I asked the normal sets of questions, like: give me feedback on the whole thing. And, you know, it's fine. It's okay. Then I fed it into Claude.
And inside of that artifact, I fed in the authors that he really likes and enjoys, versus just saying, you know, write like Paul Graham. I actually fed in some manuscripts from authors that he really enjoys and then had it give him real feedback. And damn if it wasn't incredible. That's a mixture of the context-setting of the other writings; it's also a mixture of Claude just being better at this kind of stuff. But like,
it was really, really good. And it was a moment over Christmas break where I was like, why haven't more people realized that this is actually doable today in a way that it wasn't before? It still can't write amazingly well, but it can pass judgment pretty well with the right context. And few products do that.
There was an example on Twitter where Sam Altman released a statement and, underneath, someone had asked the model to provide a critique from a PR perspective and break the statement down. And it was pretty eye-opening. It felt like you could be in a PR war room, seeing the tactics they might be using and breaking them down. Which, you know, I'm not from that world, so at least for me it was like, oh, wow, here are these methodologies I hadn't heard of that may or may not have been used, but look like they were applied in this statement.
Yeah, like at some point I want Granola to know my patterns of speech over the course of years. What are the things I'm talking about now that I wasn't talking about two years ago? What questions should I ask going into this meeting that I wouldn't think of? That prep can be deep and nuanced and intimate in a way that I think is almost impossible if you don't have the context that you're getting. You're right, maybe you need emails and Slack and everything else as well. But that feels like a magical next step.
I'm looking forward to it. Earlier I said that, you know, that's a feature that is five years out, and I hope it always remains five years out. This is a feature that's like a year out, and I hope you ship it in a year, because that would be awesome. It's not the one for tomorrow, but it would be so great. You had a question about wearables you wanted to ask? Did you want to jump into that? Yeah, where are we going? I feel like every couple of weeks there's a new AI wearable, and part of that feels to some degree inevitable, but also...
even for someone at the forefront, I feel like a lot of this stuff still feels kind of dystopian today. And, you know, I keep questioning myself: am I getting old? Or is it, oh, it's not quite the time or the form factor? I'm curious where you land, because, I mean, you're basically going to predict the future, right? And the timescale of that future. Yeah.
No, no, no. Our job is to meet people who are predicting the future and then just try to adjudicate on whether or not they're... Which ones are... Yeah, okay. Yeah, which ones... Well, like, which side of crazy are they on? Our job is to listen. I mean, I know we're talking a lot on this podcast, so we're all just sharing because I think of this as an active conversation more than a pontification. That's what I hope us chatting here is. It's also just an excuse to hang out with Fraser and other good people like you. I have this wearable. Oh, which one's that? Okay.
This is Plaud, P-L-A-U-D. I have this wearable. This is also Plaud; this is their non-pendant version. And if I walked around the corner and spent a minute, I could drag out another six or seven wearable devices. It seems utterly...
inevitable. I think there's a really simple question of input and output modality. There's a really simple question of battery life. What drives it? I don't want my phone on all the time because, even if we get over the privacy concerns and all that stuff, I worry about my phone battery.
Like, if I want this thing to listen to everything that's going on in my life so it has more context, then, for the very simple reason of battery, I do expect it to be a different product. So yeah, I'm very bullish on wearables. I think the value back has not really been delivered yet. So the reason you feel
iffy is that you haven't really gotten real value out of using one of these yet. And if you do, then I think you're on the other side of it. If there's a job to be done and it's doing the job, then you're happy. I had a really interesting, fun, long conversation with my son a month ago about what he wants to do in the world and the things he has. And I happened to have the Plaud on my jacket,
and, you know, there was no taking notes, there was no pulling out my phone; I was just able to tap this really quickly and record it. I would love to have that conversation in 10 years. Right? What a wonderful and amazing artifact. It will feel like photography, right? To me, that just ages well over time and gains value over time versus losing value over time. I don't know.
I mean, I'm not adding anything to that beautiful soliloquy. Like that's it. That's it. Like,
The capture piece is going to be like photographs. That's a great framing. There will be cherished things, just like we have with photos today. We use them to communicate. We use them to capture and remember. We use them for nostalgia like that. Yeah. We also use them to take a picture of the parking sign, for utility. I guess if you use the photograph analogy... I mean, it's like apples and oranges, but I don't disagree with anything you said. I also don't
have a clear, very specific vision of how this is going to go down. So this is more in the sense of exploration and discussion. When Snap launched and your photos disappeared, that was a hard thing for people to wrap their heads around as being a positive. And it absolutely changed the way people interacted with it and what they shared and how they felt. And I think there's something similar here.
Or there's some parallel, some point that we should bear in mind. And maybe we normalize to it. Maybe you and your son have the exact same conversation even knowing your life is recorded 24/7. But maybe not. The human organism and society react; there are antibodies to certain things. But I think what you just described is the importance of great
product and taste, right? We adapted, but Snap unearthed something beautiful. It doesn't mean that all of the other ways in which we've used cameras have disappeared. It also doesn't mean that we've totally changed our behavior. We've moved, we've lurched forward in lockstep: the technology has adapted, we've adapted, and the products have been crafted to meet us there. I think the same thing is going to happen here. I have no doubt about that.
I think that's good pushback, Chris. We should be careful to say that we boiled wearables down, for the purpose of this short discussion, to recording audio, when of course wearables have lots of other modalities and things that they do. But to pull on that thread for a second: there's this guy, Ivan Vendrov, who wrote an essay just a couple weeks ago. Chris, if you haven't read it, you should. I shared it in Slack at Spark yesterday. It's called Shallow Feedback Hollows You Out.
And it is ostensibly about the feedback loop that you just talked about, but applied to writing generally. Why does somebody who is a really great and interesting thinker... why does it feel like, in this world now, they pop out into the world, they have a brilliant insight, and then, if you just watch them... his phrasing basically is that
thinkers co-evolve with their audiences. And because they are now in this very tight feedback loop with their audiences, they essentially become crazy people over the course of the next decade. Their audience just gives shallow feedback, because the average of a large audience isn't thinking that deeply about the thing. And because they're co-evolving with their audience, that original thinker becomes shallow over time instead of deeper over time.
And I think that I want something to record everything that I say. I don't know that I want access to the transcript or the audio of all of that thing. I want something to record everything that I say so that an LLM can get smarter about me so that I can get smarter about me. I want it to be my augmented memory. That's very different from, I want it to be used in a social context.
Like, I think we should be very careful about the things that we introduce to others in a social context. And so I agree. We are incredibly social creatures, man. There's no way that, if the recorder is on all the time, we just act exactly the same way. And I think what's tricky there is, just imagine there's a device and it's very clear that the audio recording and transcript are not accessible, but there's all this upside value, right? And I think everyone could feel good about that.
There'll be other devices that look similar that don't do that. You know what I mean? And I think that's where the social dynamic is like, oh, it's just like the compressed version of the knowledge or the content or queryable or what have you. Like what's the social contract around that? Yeah.
I think the social contract's important. I think whether this thing is going to be used, or can be used, and in what context, is really important. Like, I'm the one who had, Chris, I had a visceral reaction. I emailed you after trying the beta version of the mobile app for Granola. Just watching the little transcript thing bounce up and down and thinking about it recording words. Like, I don't like it. Don't do that. Just let me open it up and start my meeting. Yeah.
Funny story about the Granola mobile app. It's not a public one. We're working on one. The engineer who's working on it, his name's Jonathan, he got engaged over the holidays and he turned the app on on his phone and in his pocket while he proposed. And the notes are great. Like they're really funny, actually. Like I'm going to ask him if we can tweet it, but it's like, it's actually a really cool record to have of that moment in like a weird way. Anyway, I don't know.
It feels like neither here nor there, but it kind of goes back to that conversation or something. There's something kind of magical about being able to go back to certain moments. It's not neither here nor there. That's totally apropos of the previous comment, right? That's a magical moment captured for them now that wasn't previously captured. Yeah. I wonder if there's a secondary record button. There's the, like, it's always thinking and it's always gaining context. Yeah.
And then maybe what you really need is another button, which is actually a record button, which is like, okay, this time, this moment, it's like TiVo. I actually want the audio and the transcript of this part. I actually agree on that. I actually think we are missing a term or a verb for it. I can get my head around that.
Yeah, for me, it's very related to the if an LLM reads a document, is that a copy of a document or is it just learning from the document? With all this, I just think there are new norms that need to be established. And I think
we're still in a phase where the thing is taking form. It needs to take form a bit more, and then it needs to be labeled, and then norms need to evolve around those labels. And right now it's just not there yet. It'll be interesting, though. It'll definitely be interesting. I do think until we have real utility that comes out of these, it'll be moot, because no one will care. The moment they become really useful, I think that's when it gets interesting. That's when we're off to the races.
So let's use that as a kind of last jumping-off point, because I know we're over time. We've talked about what we feel is good and magical about Granola: that it augments a human behavior. We talked about the fact that that behavior being frequent matters a lot, and that it mattered a lot as you were thinking through this process. Is there something else? This is for you too, Fraser. Is there something else in your lives that you wish was granolized?
Is there a pattern of behavior that you guys do in the rest of your lives on a regular basis that you just wish you had augmented? You just wish you had, you know, not a chatbot co-pilot, but something that kept you in flow? Anything for you two? Well, what came to my mind, and I actually don't know if this is a good example, but the thing I wish was better is my calendar and scheduling. And the reason I say that is because, on one hand,
you could imagine a really smart agent
doing a lot for you, right? Like bringing best practices in and moving stuff around, defragging or whatever people who are world-class at this do. And, you know, looking down the line and realizing, hey, you're not going to have enough time a couple of weeks from now, so you're not going to get to stuff. But conversely, it'd be incredibly frustrating if you couldn't go in there and change every little detail about any meeting yourself; you need to have that control. So I think it's a little bit like with Granola: you have to have the ability to lean in as much or lean back as much as you want. I think you'd need that same ability with a calendar app. I think there might be something there. I'll give mine. And it might be related to Granola, it might not. It's going to sound really basic, because it's another one of these IRC-becomes-Discord, forever things. I think project management.
Like the Asanas of the world, there is always some version of: I'm trying to break down tasks. I think it's different from personal to-dos in a work context. Those things tend to have dependencies. I tend not to be able to think around every corner. It's always a question of wording. You know, I used to do a thing that I don't do anymore, but
right after we would invest in a company, I'd sit down with them and we'd go through their annual goals. And I'd do a lot of rewriting of the language with the founders to make it more explicit. You know, there are like 20 or 30 rules of best practice for doing good structured project management inside of an org, any group of people trying to get anything done. And I think it's a world where I don't think you need me to type in what the broad project title is and then have it fill out a thousand little bullet points with a Gantt chart. It's more like: let me organically make something. I can't even imagine what the UI would be right now, but let me organically make something, and then start to augment that behavior, build out that behavior, help me think around dependencies and corners as I'm working on it while keeping me in flow. That's a product that should...
That should exist. That'd be great. Yeah. Yeah, I agree. Fraser, you got anything? Earlier, you said that we don't predict the future, we listen. And so in this case, I'm going to listen, because nothing's coming to mind. That's great. Well, if anything comes to mind later, you can always tweet it out or email me. I'm around.
Awesome. Chris, thanks so much for joining us today. Thank you so much. Yeah, we'll listen back and think about how stupid all the stuff we just said was. But thank you so much. This was fun. Always done with humility. All of this, always done with humility.