This year's presenting sponsor for Invest Like the Best is Ramp. Ramp has built a command and control system for companies' finances. You can issue cards, manage approvals, make vendor payments of all kinds, and even automate closing your books all in one place.
We did an incredibly deep dive on the company and its product as part of this new partnership. And what we heard and saw in customer surveys over and over again was that Ramp is the best product by far. We've been users ourselves since I started my business, since long before I was able to spend so much time with the founders of Ramp and their team. Over the holiday, I was with Ramp's founders. Those that listen know that I believe that the best companies are reflections of the people that started them and run them.
I've always loved the idea that Apple was really just Steve Jobs with 10,000 lives. Having gotten to know Ramp's founders well, I can tell you that they are absolutely maniacal about their mission to save people time. As far as I can tell, they do not stop working or thinking about the product and how to make it better. I'm sure they're proud of what they've built, but all I ever hear when I'm with them is them talk about what they can do to improve and expand what Ramp does for its customers.
I used to joke that this podcast should be called, This is Who You Are Up Against. I often had that same thought when I'm with Ramp's founders, Kareem and Eric. I would not want to compete with these guys. I wish all the products I used had a team as hell-bent on making the product better in every conceivable way. I could list everything Ramp does here, but the list would be stale in a week. I highly recommend you just start using it to run your business's finances today.
This year, I'll share a bunch of things I'm learning from these founders and this company, and I think it'll make you realize why we are so excited to have this partnership with them and why we run our business on Ramp. To get started, go to ramp.com.
If you're attending the InvestOps conference in Orlando this year, I'll be speaking at Ridgeline's private breakfast event on March 11th. Ridgeline gets me so excited because every investment professional knows this core challenge. You love the core work of investing, but operational complexities eat up valuable time and energy. That's where Ridgeline comes in.
Ridgeline is an all-in-one operating system designed specifically for investment managers, and their momentum has been incredible. With about $350 billion now committed to the platform and a 60% increase in customers since October, firms are flocking to Ridgeline for good reason.
They've been leading the investment management tech industry in AI for over a year with 100% of their users opting into their AI capabilities, putting them light years ahead of other vendors thanks to their single source of data. You don't have to put up with juggling multiple legacy systems and spending endless quarter ends compiling reports. Ridgeline has created a comprehensive cloud platform that handles everything in real time, from trading and portfolio management to compliance and client reporting. It's worth reaching out to Ridgeline to see what the experience can be like with a single platform.
Visit RidgelineApps.com to schedule a demo.
As an investor, staying ahead of the game means having the right tools, and I want to share one that's become indispensable in my team's own research, AlphaSense. It's the market intelligence platform trusted by 75% of the world's top hedge funds and 85% of the S&P 100 to make smarter, faster investment decisions. What sets AlphaSense apart is not just its AI-driven access to over 400 million premium sources like company filings, broker research, news, and trade journals, but also its unmatched private market insights.
With their recent acquisition of Tegus, AlphaSense now holds the world's premier library of over 150,000 proprietary expert transcripts from 24,000 public and private companies. Here's the kicker. 75% of all private market expert transcripts are on AlphaSense, and 50% of VC firms on the Midas list conduct their expert calls through the platform. That's the kind of insight that helps you uncover opportunities, navigate complexity, and make high conviction decisions with speed and confidence.
Ready to see what they can do for your investment research? Visit alphasense.com slash invest to get started. Trust me, it's a tool you won't want to work without.
Hello and welcome, everyone. I'm Patrick O'Shaughnessy, and this is Invest Like the Best. This show is an open-ended exploration of markets, ideas, stories, and strategies that will help you better invest both your time and your money. If you enjoy these conversations and want to go deeper, check out Colossus Review, our quarterly publication with in-depth profiles of the people shaping business and investing. You can find Colossus Review along with all of our podcasts at joincolossus.com.
Patrick O'Shaughnessy is the CEO of Positive Sum. All opinions expressed by Patrick and podcast guests are solely their own opinions and do not reflect the opinion of Positive Sum. This podcast is for informational purposes only and should not be relied upon as a basis for investment decisions. Clients of Positive Sum may maintain positions in the securities discussed in this podcast.
To learn more, visit psum.vc. My guest today is Chris Pedregal. Chris is the founder and CEO of Granola, an AI-powered notepad that transcribes your meetings and enhances your meeting notes. Chris shares fascinating insights on how humans have historically developed tools to extend our cognitive capabilities, from writing and mathematical notation to data visualization,
and how AI represents the next frontier in this evolution. We explore competitive dynamics between model providers and application builders, and Chris shares his vision for AI tools that make us better rather than replacing humans altogether. Our conversation covers the product philosophy behind Granola, the challenges of building in this fast-moving AI space, and how small teams are creating outsized impact in this new paradigm.
Please enjoy my conversation with Chris Pedregal. Chris, I thought a fun place to begin our conversation today is with some of your ideas around the value of tools for thought that technology has given humans over the centuries. Obviously, you're building one of those tools now; we'll get into that in great detail.
But the first time we chatted, I was so intrigued by the way that you approached this and thought about this unlock of value for people. And I think you used the XY plot as like a good example of one of these tools for thought. Maybe you can just riff for a while on this line of thinking and why you're so interested in it. I love this topic.
I think fundamentally humans are tool makers. It's one of the things that like just sets us apart from other animals. If you look back at the history, there have been these inventions, tools that were invented that just enabled humans to do so much more. And the interesting thing about that is some of those are really just explicitly tools for thinking. Examples there could be writing is a great example.
Another is mathematical notation. With Roman numerals, you can only do math up to a certain number in your head without an abacus, whereas with the notation we use now, you can do long division of massive numbers and that's fine. My favorite example is this idea of being able to visualize data. What you brought up is this guy called
I think his name was William Playfair, something like 200 years ago. He was the first person to graph data visually so you could use your eyes. Humans have evolved to take in images and make sense of them really quickly. So the idea of mapping numbers to the visual plane, and being able to intuitively feel, oh, that graph's going up or down, or it's going up much faster than before, is just crazy. And it's crazy that 200 years before I was born, no one had done that. All of this is to say that
I'm sure we'll get into this into more detail. You have mathematical notation or writing, data visualization, then there's the computer. And I think with AI, we're just entering like a new realm where the tools for thought will just be exponentially more powerful and more useful. God knows what that's going to look like in 10, 20 years. I guarantee it'll look nothing like it looks like today.
Maybe just talk about that transition. Mention what you're building at a high level first, and then we'll go into it in much more detail later. But as we transition into understanding what new tools are possible built on top of this new technology, how are you personally approaching that? What were the original things that you thought of when you saw some of these LLMs walk us through this phase change?
One observation I think that's interesting about these tools for thought is that oftentimes what these tools do is they let you externalize things that you have to hold in your head. One of the most ubiquitous tools for thought today is a notepad and a pencil. And when you use a notepad and you write things down, it just means you don't have to hold everything in your head.
and you can look at these ideas or look at these notes. To use an analogy, it's a bit like extending your RAM. The amount of RAM we have in our heads is hard-coded by physical limitations, and these tools basically give you more RAM, more memory. I think what's incredible about LLMs,
the real unlock here, is that you can use LLMs to bring extremely relevant context to the person in the moment they need it. And that context can be dynamically generated to map the needs of that moment. If being able to write your ideas down in a notepad makes you much more capable in a meeting, imagine if a computer could bring in all the relevant context
to make you brilliant in that moment. You have that at your fingertips because LLMs can rewrite content on the fly and pull that stuff in for you. I think that'll be just an incredible unlock for people. How does that manifest? Is it that everything in my life, everything I've read and every conversation I have, is ultimately stored somewhere?
And then there's some mechanism for me feeding my current context back to some system, and it serves up ideas or brainstormed concepts? Make this a little more real in terms of your vision. Since you asked me to talk about what we're building at Granola, I can talk about that. I think in the realm of AI, it's easy to talk about the next two steps and really, really hard to talk about what the world's going to look like 10 steps down the line. Granola, really simply, is like a digital notepad.
So think of it like Apple Notes on your computer. It's an app on your computer. You can write notes. The main difference about it is that it's also listening to what's being talked about. So if you use it in a meeting, you can jot down whatever notes, whatever thoughts you have. Granola is listening to the conversation. It's transcribing that conversation in real time. And then when the meeting ends,
It'll take whatever notes you've written and it'll flesh them out to make them great. So you no longer have to write down everything that's important. You can really focus on what are the really key insights or the thoughts that you had in that meeting, like the key judgment that you bring to that situation. And you can kind of outsource all the busy work, the rote work of writing down information or facts.
to the AI. What's so powerful about this, and I think we still don't fully understand how it's going to change the way people work (I know it's going to change the way people work, because I work differently, Granola users work differently, and we're maybe five percent down the path to our vision), is that when you look back at your notes, you have the full context of the meeting. So you can then go and chat with Granola and ask it questions about what happened, or pull out themes.
Right now, we have a feature internally, not yet launched publicly, where you can look at all your meetings with a certain person or all your meetings on a specific topic and pull out themes across those meetings. It takes context that otherwise is lost or forgotten, you wrote it down somewhere but don't know where that notebook is, you don't look it up when you're making a relevant decision, and makes it immediately accessible and useful.
Maybe talk about how you work differently than others, having been the person most exposed to Granola. What are the actual behavior changes so far? Then I want to ask about the 5% to 100%. But starting with just the 5% penetration, how has it most tangibly caused you to behave or work differently?
This is something that I think will be widespread. Knowledge workers, folks like you and me, are constantly going to be thinking about: what's the context I need right now to be the smartest I can be? For folks listening, when you're using something like ChatGPT or any LLM, there's this idea of a context window. You can put X amount of information into that context window, and it's basically like telling the model: here's the situation, here's the stuff you need to know to be able to think about it.
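The context-window framing Chris describes can be sketched in code. The snippet below is purely illustrative and is not anything from Granola: the four-characters-per-token estimate and the assumption that snippets arrive pre-sorted by relevance are mine.

```python
# Sketch: packing the most relevant notes into a fixed context window.
# Illustrative only; the token heuristic and relevance ordering are assumptions.

def estimate_tokens(text: str) -> int:
    """Rough heuristic: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def pack_context(snippets: list[str], budget_tokens: int) -> list[str]:
    """Greedily fill the context window, most relevant snippets first.

    `snippets` is assumed pre-sorted by relevance (best first);
    anything that would overflow the budget is dropped.
    """
    packed, used = [], 0
    for snippet in snippets:
        cost = estimate_tokens(snippet)
        if used + cost > budget_tokens:
            continue  # skip snippets that don't fit in the remaining budget
        packed.append(snippet)
        used += cost
    return packed

notes = [
    "Key decision: ship the iOS app in Q2.",
    "Attendee seemed hesitant about pricing.",
    "Long boilerplate agenda text " * 50,  # far too big to fit
]
context = pack_context(notes, budget_tokens=40)
```

The interesting product question, as Chris says later, is not the packing mechanics but deciding which snippets deserve the top of the relevance ordering.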
That way of thinking is also going to apply to us, to people. We're going to be thinking about that all the time. A concrete example, I need to write a blog post. Before, I would have just sat down with a notebook and I would have scribbled down a bunch of ideas and then I would have tried to type it up.
What I did now was I first talked to a few different people who had good advice on this blog post and I used Granola. So now I have notes and the full transcript from those conversations. I then used the Granola app and just walked around and spoke out loud about different ideas. So I did a brainstorm where I was just recording it.
And then I put all of that in a folder inside of Granola. And I started chatting with the AI, asking it to pull out themes or suggested formats. And at the end of the day, I'm going to write the blog post. But that process was such an incredible way of synthesizing all this advice that I guarantee I would have dropped different parts.
along the way. That's one example. Another example, something we've observed with Granola users, is that the way they approach notes is completely different from how they used to. If you look at the notes of people who use Granola a lot, they only write a couple of notes per meeting, and those notes are usually the internal thoughts they had. It's not the stuff that's in the transcript. It's like, oof, this person was a bit aggressive, or they seem kind of down, or
I'm concerned about this area because they didn't really answer my question. These really critical thoughts get written down, and everything else is deferred to the AI transcription. And when they come back to their Granola notes, they'll oftentimes be chatting. Instead of reading lots of notes from a meeting, they usually have a specific question in mind or a piece of information they're looking for, and they find it more efficient to just ask that question and have a really high-quality answer written for them.
Maybe now talk a little bit about going from that 5% to the 100% of the vision, what this could become. I know you can only think a couple steps ahead with LLMs, but thinking two, three steps ahead, where do you think this goes next?
I think at the end of the day, the question of what's the information I need right now to be able to make the best decision possible is the central one. There's this image: if you're a diplomat, you get a dossier before you go into a high-stakes negotiation that gives you all the background information, crafted for that moment. I think we're going to live in a world where everyone's getting those in real time whenever they go into any meeting.
I think the interesting questions are: what context is useful for that? Is it just the previous meeting? Is it all your emails? Is it all the information in the world? And then what does the actual interface for that look like? My view for Granola is that right now, Granola helps you generate the best meeting notes out there.
But tomorrow, Granola should help you do all the work you want to do. So you walk out of a meeting and you need to write a follow-up email. You need to write an investment memo. You need to schedule an event with a whole bunch of different people. Granola or a tool like Granola, with all the necessary context, should be able to take you 80, 90, 95% of the way there. I think something that's very important to me and the folks at Granola is that
We see the role of AI as being a tool to make you better. You can use AI to replace a person or take away a task from a person, or you can use AI to augment a person's abilities, augment their intelligence.
We're really big believers in this idea of tools that help humans do more, achieve more, think more. So everything we're building is around this idea: can you get Granola to do all the busy work of writing up that follow-up email, so you just add your judgment to it? What really matters here is this, and this is what's going to convince this person, so I'm going to twist it slightly, as opposed to worrying about all the specifics I need to get in there.
I'm curious about some of the nitty-gritty issues you've encountered so far. One is just the recording aspect. How do you think the world will evolve, and how do you handle it today? It seems to be becoming more and more normal that someone will ask to record a meeting; at first that really turned me off, and now it's just become normal. Do you think we reach a point where the assumption is just that everything is being recorded? I know Granola handles this very thoughtfully. Maybe you should explain how you do it, but I also want to know where you think it's going.
The answer, I think, is that as a society, we just need to be really thoughtful about the trade-offs. I believe that in a couple of years, maybe 18 months, the speed of AI is so fast,
doing meetings, doing work without something like Granola will feel like such an impediment that no one's going to want to do it. Everyone's going to be using tools like this because they will be so useful. Now, there's a real trade-off, like you said, on invasiveness and privacy. And I think as a society, we need to thread the needle where you get maximum usefulness from these tools with the minimum
amount of invasiveness. Where that line is going to be and how we navigate that, I don't know. I don't know where we're going to end up. When we first created Granola, we made a very conscious decision not to record and store any audio. Even though Granola is listening to the audio, it transcribes in real time and doesn't store any of the audio. And everyone kind of laughed at us for that. Why wouldn't you? Wouldn't the audio be useful? And of course it would. You want to be able to go back and listen to
what exactly someone said and what their tone of voice was. There's definitely a loss of value for the user because we're not recording the audio. But what it means is that Granola is way less invasive than any of those other AI meeting bots that join your meetings. Those bots record the audio, record the video, and they store it. Who knows how long that stuff's around for?
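The transcribe-but-don't-store design Chris describes can be sketched as a streaming loop in which each audio chunk is dropped as soon as its text is extracted, so only the transcript ever persists. This is a structural illustration only; `fake_transcribe` is a hypothetical stand-in for a real speech-to-text call, not Granola's pipeline.

```python
# Sketch of the transcribe-and-discard pattern: audio chunks are
# transcribed as they arrive, and only text is retained.
# `fake_transcribe` is a placeholder, not a real STT API.

def fake_transcribe(audio_chunk: bytes) -> str:
    """Stand-in for a real streaming speech-to-text call."""
    return audio_chunk.decode("utf-8", errors="ignore")

def transcribe_stream(audio_chunks) -> str:
    """Consume audio chunk by chunk, keeping only the transcript text."""
    transcript = []
    for chunk in audio_chunks:
        transcript.append(fake_transcribe(chunk))
        del chunk  # the audio itself is never written to disk or kept around
    return " ".join(transcript)

text = transcribe_stream([b"hello", b"world"])
```

The privacy property comes from the shape of the loop: once a chunk leaves scope, the only artifact of the conversation is its text.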
And that feels completely different, in my opinion, than something like Granola, which generates really nice notes and a transcript that's super useful, but is much less invasive and much less intrusive. I think there's a real question, which is: what's it like when we're walking around the real world? A Zoom call is one thing. Usually there's a specific reason for meeting, and
people understand the context of that meeting and what the expectations are. My guess is that the norms in our social lives will be very, very different than in the workplace. I don't know exactly where that will end up.
But I see a pretty stark distinction between in the workplace setting, most people want all these things captured because the AI can provide so much value to the user. Whereas in our social surroundings, I think that'll be a very divisive issue. We'll see how that goes. I could see it. I remember when Google Glass came out, there was a huge backlash. I could see a similar backlash happening when AI pendants start becoming popular. You have that one guy showing up at a party and he's recording everything and it pisses off everyone else.
How soon do you think it's the case that in-person work meetings have the same expectations as a Zoom meeting? I actually am already there. Like, I am already frustrated that with no nefarious intent, I just wish I had a memory assistant that was with me that I don't have to like think about notes. I can just be engaged in a conversation. I wish that was the norm today. And maybe there's just like a social thing that happens where you just...
decide at the beginning of a meeting, is this one recorded or not? And I wish that was easy. I would wear the pendant now just because I meet so many interesting people. I can't keep all this stuff straight in my head, and I furiously try to take notes afterwards. It doesn't feel that different. When do you think we get there? Our iOS app is launching soon. Sam, my co-founder, and I built this because we wanted it ourselves. We thought it would be useful.
Quite frankly, we were really surprised by how it took off and by how once someone starts using Granola for all their important work calls, you basically are outsourcing some of your long-term memory to Granola. You start to have this expectation that you can go back and look up these important things from any conversation. Some of the most upset emails we get from users are saying exactly what you're saying, which is, "Hey, a third of my meetings are in person.
And I'm flying blind. I'm naked in those meetings. I desperately need Granola in person. I'm speaking about Granola right now because that's what we're building. Maybe it'll be us, maybe it'll be someone else, but I guarantee you a tool like this will be used by basically everyone in this context. As for what the norms are going to be, I personally hate the idea of a hidden pendant that is listening to everything. I know in Silicon Valley, that's
one of the visions for the future. And I personally don't like that vision. I think in a work context, the phone is great because you basically put it down on the table and it is an easy social contract with the people in that meeting of what's happening. That's how we do work at Granola. Basically every meeting at Granola, it's very clear if there's a phone out and whose phone is taking notes. And I think the social contract really matters.
It's up to the individual to manage this, as it's up to the individual to manage everything in the work environment. If you put the phone out and you're upfront about it, everyone benefits. And I think that change will happen much faster than you expect, whereas in social circles it will be very different.
One of the things that I'm so curious about right now in the world of AI application companies is this small team meme where some of the most incredible tools are built by teams smaller than 25 people. And as they scale their user base or their revenue, the teams are really not getting bigger. They don't need bigger teams. Can you describe what it's been like?
in all its aspects, abstracting away a little bit from the product itself, but just building a company in this space relative to prior companies that you built or were a part of in the pre-AI era?
The two defining characteristics that are different about this space in this moment are, one, the speed at which the technology is getting better is nuts. And two, because Granola is built on top of LLMs, so it's an app-layer product, we get so much benefit from riding
these incredible technological advancements that are happening at the LLM layer. So we spend a lot of our time really thinking about what makes a great user experience end to end. If we weren't building on top of a foundational technical layer like LLMs, we'd need a massive team to be able to do what we're doing today. So we really do benefit from that.
That said, a lot of what makes Granola great is sweating the details of all these technical edge cases, stuff you'd never think of. It's like you're in the middle of a meeting and you take off your AirPods and it's on a Zoom call that has multiple channels. And all of a sudden, Granola needs to do something very specific to make that feel seamless that you never would have thought of until you built it and you realize it felt crappy if you didn't do that.
We use as many AI tools as possible for as many things as possible inside of Granola. But some of the tools, at least on the development side, aren't quite there yet. We're so close to being able to take that end to end, so we still have to do a lot of work there.
Again, I hate doing time-horizon guesses here because it's basically impossible to know. If you fast-forward us three years, the way we work and what we can outsource to AI will be completely different. Are those mostly engineering challenges, where you would expect that using Cognition and Cursor and whatever else, your team could effectively be a manager rather than an engineer, just telling it what to do without actually engineering the endpoints?
That's right. Our CTO, Voss, has a goal: minimizing the number of lines of code every engineer writes at Granola each day. It's an active goal. We just did this off-site, and the theme was basically, use AI everywhere, for things you wouldn't expect to; push ourselves outside our comfort zone. And there's this great example. I was trying to barbecue some shrimp for the team. We bought some shrimp. This was in Spain.
I've never barbecued shrimp before. I'm typing into ChatGPT, okay, how do you barbecue shrimp? And Voss was like, no, give it the right context: take a photo of the barbecue and take a photo of the shrimp. And he was totally right. So I did that. It turns out the shrimp was already cooked. We didn't realize it because it was in Spanish. We didn't have to cook it at all, we just needed to heat it up, which I never would have figured out if I had just typed my question in.
An interesting point there is that there's a completely different intuition you need to have around how you use these tools and build with AI. Perhaps it's similar to when the web came along: people from before the web wouldn't automatically default to using Google, they'd go elsewhere, whereas people who were young enough when it happened would always default to Google. I think
there's going to be a very, very similar divide here, which is that the AI natives will just understand what context they need to give AI and how to work with AI. When in doubt, you should probably give it more context and see what it says, as opposed to assuming you know. I'm 38, and I'm very happy the team is constantly pulling me along. I'm literally at the forefront, thinking about this all the time, and I still don't use AI as much as I should.
If that's the case for me, think about the general population. Is one of the key lessons there that a lot of what needs to get built, both technically and as an expectation for people, is context-gathering tools? You're obviously doing one for conversation, and that's one mode of input that's really important, especially for work. How do you think we'll capture the rest? Riff on context gathering as a function.
Gathering the context, just getting all the data, is not that hard. It's only a matter of time before you can plug all your email into Anthropic or ChatGPT, and all your notes and all your company documents and all your tweets, and it'll have all that. I think there's a different question, which is: which of that context is really relevant for the thing I'm about to do right now? That may be a technical problem. That may be a UI problem.
I don't know. So that's on the context side. I do think a huge blocker for unlocking the power of collaborating with AI is the UI. What's the interface for collaborating with AI? I really think we're in the terminal era, like early computers where you'd type in a command and the computer would literally spit back a response. That's the way we work with
ChatGPT. I don't think chat's going away, but I think it will feel archaic in how little control you really have as a user. I was looking this up, trying to find an analogy. The first cars that came out didn't have steering wheels. They had basically a stick that you could turn left to right, and it was fine if you were going really slow. The moment you went fast, the stick was unusable; you'd move it too much and crash off the road, and it was a big safety problem. Then finally, someone came up with the steering wheel. And a steering wheel is a UI that gives you
so much fine-grained control when you're trying to turn. And I think we still have to invent what the steering wheel is for when you're working with AI and collaborating with AI. Right now, we have some very coarse controls and it's turn-taking right now. It's like I write something, then the AI does something, then I react back to it. And I think it's going to be a lot more fluid and a lot more collaborative once we figure that out. Bring that to life a little bit more for me, the fluidity aspect.
How could you imagine that being, versus the back and forth? It depends on the tool, but right now it doesn't feel like you and the AI are working on the same canvas. It's like we're working on two separate canvases next to each other. This is a very basic thing, but when you're using ChatGPT or Claude, you can't go and edit the response
that the AI gave you. You can't go in there and say, actually, this point was dumb, let's change the language here. You tell it, please make it shorter, as a command, and you hope that it rewrites it in the right way. And that's just going to feel like madness not too long from now. There's a historical parallel here. These things feel very obvious once they're
invented. In the early days of computing, the first text editors had this idea of modes. There was a text-insertion mode where you'd go in and write some words, then you'd exit that mode and go into deletion mode or copy mode. You'd have to enter the right mode to make each change.
And then Larry Tesler basically went on a vendetta to change this. Now you can type and delete and cut and copy, all fluidly, without entering different modes, and that was unthinkable before we made that jump. So it's kind of hard to imagine what that's going to be for AI. I guarantee it'll feel completely different than what we have now. I think granularity of control and speed of collaboration are the two things that are going to go way up. It should be way more fluid.
Have you been surprised by any of the ways that users use Granola? There are a few things that have jumped out. One is the variety of use cases people use it for.
We built it for work meetings. Very quickly, people started telling us things like: my partner has cancer, we have all these meetings with doctors, and Granola has become absolutely invaluable in that process; I actually don't know how I would have managed it before. So there's the unexpected-use-case thing. The other thing is that people are finding creative ways to get more context into Granola that it just wasn't designed for. For example:
I'm brainstorming an idea, so I'm just going to create a meeting in Granola, like a note in Granola, and talk to myself. Or I need to plan out my day, so I'm going to talk through the different things going on and then use Granola to help prioritize what I'm doing. Or I'm watching a YouTube video on a subject I'm trying to learn, and I have Granola open, taking notes in there. That's probably the biggest surprise. The other behavior change, and I think I mentioned this before, is that when people go back
in Granola, less and less they read the notes that were there, and more and more they ask the Granola chat for what they're looking for. As an app builder, what is your perspective on the battle between model providers for your attention and business? It's the best thing ever.
It's fantastic. I fully support it. For us, we build on top of foundation models and the speed at which models have gotten better over the last three years is incredible. And I believe that companies like Granola benefit tremendously from the competition between the providers and
as a result, users are benefiting tremendously. How is it built? Are you hot-swapping the best model in, and is that something you could just do any morning? Say Anthropic is apparently coming out with a new model soon; will it just be a function of a quick eval and then hot-swapping that model in as the primary driver, then switching again when the next one comes out? Is it that simple? That's exactly right. The eval part is not simple, but what you described is exactly what we do. We don't just use one model in one place. We use lots of models in lots of different ways
inside of Granola, but we will switch to whatever the best model is on any given day. And how do you think about the competitive dynamics of what you're building versus what might be achievable through using a model directly alone? Everyone always used to ask, won't Amazon just build this or won't Google just build this? Now it's like, won't Anthropic just build this? How do you think about building in such a way that's protected from the future in which the model companies come to eat your lunch directly?
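As an aside, the eval-then-swap workflow described a moment ago, running each candidate model through an internal eval and promoting whichever scores best, can be sketched roughly as follows. Everything here is invented for illustration (the provider names, the grading scheme, the helper functions); Granola's actual pipeline is not public.

```python
# Hypothetical sketch of eval-driven model hot-swapping. The providers,
# cases, and scoring below are invented stand-ins, not a real pipeline.

def run_eval(generate, cases):
    """Fraction of eval cases whose output passes its check function."""
    passed = sum(1 for prompt, check in cases if check(generate(prompt)))
    return passed / len(cases)

def pick_best_model(models, cases):
    """Return the name of the model scoring highest on the eval set."""
    scores = {name: run_eval(fn, cases) for name, fn in models.items()}
    return max(scores, key=scores.get)

# Toy stand-ins for provider calls; real code would hit each vendor's API.
models = {
    "provider_a": lambda p: p.upper(),
    "provider_b": lambda p: p + " (summary)",
}

# Each case pairs a prompt with a check that grades the output.
cases = [
    ("action items", lambda out: "summary" in out),
    ("decisions", lambda out: out.endswith("(summary)")),
]

best = pick_best_model(models, cases)  # promote this as the default model
```

In practice the eval cases would presumably be graded meeting transcripts and the check step might itself be an LLM judge, but the promote-the-winner loop has this general shape.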
So I don't have a crystal ball here, but here's the way I view this. There may be two axes here that matter. One is how common is this a use case for me? Is this something I do like once a month or twice a month? Or is this something I do 500 times a day? And two, how great do I need to be at this task? And I think everything that is low frequency where you don't need to be great at it will be eaten up by the general system.
And I'd say most consumer use cases actually fall in that quadrant because it's basically impossible to build a habit to use a new tool on a low frequency use case. And if it's something where you just need it to be like pretty good, then a universal assistant like Claude is perfect. And actually, the more you use that, the better that assistant will get for you.
I think the other end of that quadrant is the high-frequency use case where your output needs to be really, really good. That's basically the power-tool quadrant: there will always be pro tooling for the people who really want to do a fantastic job at something, and I think that's where Granola sits. And you might be like, oh, but why can't the general system do that as well, if the model just gets smart enough?
And my answer there is that it's not a question of intelligence. It's actually a question of how well the UI is optimized for this use case. I think that if you have a product that is solely dedicated to being phenomenal at that use case, it will be a better experience than a general tool will be. So what separates them is really the product design and optimization of
the user experience, not of the underlying technology. Do you have like a crystallized product philosophy that guides your decisions? My personal approach, you can boil down most great product thinking and design to a very simple question, which is when you use a product, when you look at it, really ask yourself, how does this make me feel? And just keep asking yourself that question and
really, really, really listen to the answer. Then, once you've done that a hundred times, put that same product or UI or button in front of another person and ask them that question over and over. I think when you do that, you realize that within the first, I don't know, 500 milliseconds of looking at a product, you feel like ten things. And oftentimes those things tell you exactly what's wrong.
Oh, it's too complicated. It's too cluttered. I don't know what to do. It makes me feel insecure. There are so many emotions, and they go by in a flash of an instant. If there were an emotional recorder and you could play it back in slow motion, it would tell you everything you need to do to make your product great. There are lots of other things that matter, but I feel like that one question is an incredible guiding force.
You gave the personal one. Is there anything different about the Granola-specific product philosophy? The Granola-specific one is all about giving the user control. Granola is a tool to make you better, which means you drive the tool, and every decision we make ties back to that in one way or another. Even the most basic one: it is an editor. Most AI apps that generate notes don't generate them in an editor where you can edit them. They give you a PDF kind of thing, or an email: here are the notes. There are tons of micro-decisions that map back to that idea. Are you at all surprised by who your users are and what types of jobs they do, or do they tend to cluster in a couple of sectors? What have you learned just based on the raw data of who they are?
This is actually pretty interesting, and it might have implications. The people who use us are the people who are AI-forward. It's folks who are leaning into these new tools and new ways of doing work, which interestingly maps to a ton of founders, a ton of investors, and a ton of people across all disciplines who are working in the AI space. The number of AI startups where the marketing person is using Granola is extremely high. It's interesting how there's a very stark line between the people who are leaning into these tools and those who aren't. Yeah, it makes sense, right? It's very much the Geoffrey Moore, Crossing the Chasm idea: natural early adopters. One of the weird things I remember from when that happened was, so we launched Granola in May, eight or nine months ago now, and I've been building product for a long time. This was surreal, though.
We launched it. We were happy that some people tweeted about it. It wasn't like a crazy big launch or anything. And we just expected to keep building. And then a few weeks later, these really famous CEOs who we did not know just started tweeting and then DMing me on Twitter a whole bunch of product feedback that they wanted.
It clearly resonated with a very specific type of persona. And then that persona was really loud on social media. My Twitter direct messages basically became a customer support channel for CEOs of like big tech companies, which is like a really weird experience. That's what I did to you. Like it's the same exact thing. This is such an interesting way to meet people quickly. I'm curious in this whole building process, is there any plot twist that you look back on that turns out in hindsight to have been
a blessing or a gift in your whole product-building experience? At least with Granola, we made the decision early on to make it a Mac app, an app that sits on your computer, rather than a bot that joins meetings or something on a website. There are lots of different ways you could build it, and that was a huge pain in the butt for
a whole bunch of reasons. When we started off, it was only possible to do what Granola does for users who were on macOS 13.4, which was, I think, 15% of Mac users at the time.
And the reason we did it, again, was this idea that we want it to be like a notebook and a pencil. We want you to be able to grab Granola and use it no matter where you are, whether you're on a Zoom call, in an in-person meeting, or in a Slack huddle. We don't want you to have to think about it. An important thing about a tool is that it's reliable and works in a consistent way, so you know how to use it.
There have been so many downstream great things about being a Mac app, being an app on your computer. It's so much more immediate and in your control.
And it's so easy to get to. The way people use Granola, which I'd say is quite intimate, is largely a function of the fact that it's an app on your computer rather than a tab lost among 50 other tabs in your browser that you have to find. So I think we can take a little bit of credit for that, but it was a way better decision than we realized at the time. Has the process made you change your mind in a major way
about anything? Yeah. When we started off building Granola, we had a completely different interaction pattern in the app. The thing we pitched and the first version we built was very different: you would type a keyword or two into Granola in real time, hit tab, and then Granola would write the full note for you in real time. It was a really cool demo, and it felt kind of magical when you used it. For example, you'd type in "Mac app" and hit tab, and it would write something like: Chris is really glad that he made the decision to build a Mac app. We basically spent six months trying to make this work, and we just couldn't. What we found out was that no matter how great the notes we wrote were, if a computer is writing notes for you in real time during a meeting, you can't help but read them.
What ends up happening is it's incredibly distracting. The whole point of Granola is that you can be more present in the meeting, and the exact opposite was going on. People were just looking at the notes, and if the notes were not exactly how they wanted them, they'd edit them, and then realize they had not been paying attention to the person speaking.
And it was just really bad. So we ended up completely changing the interaction pattern to something way more mundane: during the meeting, it works just like a regular text editor, like a notepad. You type stuff, and then all the magic happens at the end. That means the magic moment, the value of Granola, is something you only realize after you've used it for a whole meeting, which is not great; ideally, when you're building a product, you want that magic moment to happen in the first 20 seconds. But it made it a way better product.
Like I said, we spent six months trying to make this wrong thing work. And so finally we kind of accepted that there was a better way to do it. If you think about the model providers as one vector of competition for the job to be done, how do you think about the other vector, which is other app builders and the ways in which
you architect the product might defend you, because it's becoming more and more sticky and valuable to the user, so that even if another Granola 2.0 comes out that's a little bit better, users are not going to adopt it. Do you think a lot about that sort of thing, even though you're super, super young and, I'm sure, mostly just focused on building something great for users? Does that line of thinking enter your mind? I think the only answer here really is that you need to build something
better than other people, faster. In this space there are switching costs and there are small moats, but I think the only way you win is to consistently build better stuff than other people, faster than they're building it. Doing that in a space that's moving this quickly is not a small feat, and it's something we talk about as a team all the time. With something like Granola, there's an inherent switching cost, because the more context Granola has, the more useful it's going to be for you. Something would have to be much better, I think, for someone to switch off of Granola. But if you get complacent for three months, you're in trouble in this space. Tell me how you do that with your team. I've heard a few different fascinating methods for engineering product velocity in a company building an app on top of AI. How do you think about it and do it? What's worked? What experiments have failed? How do you engineer product velocity? Something we're pretty explicit about is knowing,
when we're working on a feature, are we in exploit mode or are we in explore mode? Because you need two completely different approaches to that. So what that means is, do we know what needs to be built here? Is there a clear idea and it's just about executing it as quickly as possible? Or do we not know what the answer is here? Is this like an unsolved open problem where you need to do some exploration?
first and then figure out what the right solution is. For the one where you know what you need to build, at least from our experience, it's the basic advice that everyone hears, which is build the minimal thing as quickly as possible. Give yourself deadlines where you will ship it to real humans, maybe not to everybody, to real people, and then try to increase the shipping iteration speed as quickly as possible.
I think we've gotten in trouble before, and it's easy to do: you don't know what mode you're in, and you apply that philosophy to an open-ended problem. What ends up happening is you ship something crappy to people and tick it off. You're like, oh, we shipped it in two weeks, this is great. But you didn't actually solve the problem that needed to be solved. The thing you did was ship, as opposed to figuring out what a great solution for people is and doing that. Interestingly, I'd say that is extra important in this space, because there's
so much pressure to move quickly, but every now and then, taking extra time to think about how to do something is really important. A good example: we were working on Granola for a year before we launched, and we were already so late to the AI note-taking game; we were seven years late when we founded Granola, and then we didn't launch for a year. You know how I talked about that interaction? We completely changed the core interaction of the product. If we had launched that publicly, we never would have been able to switch it.
There's no way, because users would have learned a new behavior. Users would have said, oh, this is cool. The ones who we would have retained would have liked it, but we wouldn't have retained that many users.
That would have been it. I think that's very important to kind of protect your ability to change direction with the product until you have a lot of confidence that you're in the right direction. And how do you manage that while also moving really quickly in a fast-moving space? I mean, that's the whole challenge. How do you think about dialing your own degree of ambition? Like if it's one through 10, where do you think it is? And has it moved a couple points up since you started? What is the process of sussing out and dialing one's own ambition? How have you experienced that?
I ask myself every day whether we're doing this correctly. When Sam and I started playing with LLMs, we became convinced that all the tools we use for work are going to be rebuilt or reinvented on top of LLMs, and that there's going to be this new class of software. In the same way that if you're a developer, you probably spend all day in Cursor or Visual Studio, some IDE, we think there's going to be a new class of software. It doesn't have a name yet.
People like you and me will spend all day in it and do our work in it. For folks whose jobs revolve around people and communication and projects and meetings and all that, there's going to be a new workspace. That's what we set out to build from day one, and that's exactly what we're setting out to build now. I think the interesting question for us is this: it's really important, if you're not an OpenAI or an Anthropic,
that you are really, really good at a use case today. You can't just be building a fantastic product for the future; you need to be damn useful at a very specific thing today, and at every step along the way. And I think there's a real tension there, which is how much time you spend building the next obvious five things that are going to be really useful to people versus taking the big swing. For us, we want to move from a world where you use Granola for
notes to a world where you use Granola to do most of your work. If you're writing a document or a memo, it should be way easier to do that in Granola because of all the context we have about the related work you're doing. But that's a really big swing, and getting it right is going to take a lot of work and a lot of iteration.
If you think about existing companies that do aspects of what Granola does better now, or may in the future, which ones do you think about the most, where if you were a VP at one of those companies, you should be worried about major disruption coming? My view on this is that you can worry about a million things, and you should choose selectively what to worry about, because so few things are in your control. The competitor that we have chosen to worry about at Granola is the one that hasn't launched yet.
It's the startup that can look at what we figured out, what other people figured out and start at that point and execute on that more quickly than us.
That's what we're thinking about. I was surprised at how quickly the big tech companies reacted to AI. There was this moment, I think when ChatGPT went mainstream, where you saw every big tech company pivot and try to adapt its strategy, and I was impressed by the leadership there. But just because you choose to do something doesn't mean it's easy to execute on. One of our investors has this saying: if you list out all the AI features
that you use on a daily basis, how many of them were built by big tech versus how many were built by startups? A surprising number of them were built by startups, even though every big tech company is out there investing a tremendous amount of money to build AI features. So does that get figured out over time? Maybe. Startups are oftentimes the R&D wing of the big tech companies, and once something's figured out, the big companies can incorporate it into their large user bases. But generational companies figured something out earlier and were able to leverage that into becoming something massive.
If I forced you to put your mega-dreamer hat on and set aside feasibility, what do you dream most about being available five or ten years from now, in the vein of the tools for thought we opened our conversation with?
I want tools that make us more human and better humans. And by that, I mean tools that kind of unlock our creativity, unlock our ability to just basically do all the things that humans are incredible at that no one else can do. And I think the people who are building tools with AI need to be very intentional about that.
Because I think there's a fine line where you want to outsource all the rote work, all the boring stuff, the mindless stuff, but you really don't want to outsource the judgment. When you were talking about generating ideas and you're asking AI to generate 100 different ideas and you can choose the right ones, that's great.
There's a danger, though, that that's what everyone is doing, and now we're only looking at the ideas that are coming from AI. That's just one example, but it trickles down to everything. Take the idea that writing is thinking: if AI is doing the writing for you, well, a lot of that writing is just rote work with no real value, but some of it is where you do your thinking. If you're not careful about what you outsource, I think there's real danger there.
Right now, we have so many silos of information and so many silos of where knowledge or inspiration or information comes from. And oftentimes, I'm only really looking at data or information from one of those silos when I'm thinking about a topic. And what I want is a tool that will pull out the most relevant and best stuff from my personal life and my context, but also out there that humans have figured out and
present that to me dynamically, on the fly, in a way that I can interpret and make use of in real time. What that looks like, I don't think anyone knows. I saw this amazing demo a friend of mine made: a microphone hooked up to something like Midjourney, running at something like five or eight frames a second.
What it was doing, in real time, for a conversation like this one, was projecting imagery on the wall that was related to what we were talking about, but slightly divergent. He built it for a Burning Man creative experience, but you could imagine something like that in a work context, where it's helping you think out loud while also extending and bringing in ideas or useful information that you wouldn't have had otherwise.
I think doing that in a way that's helpful and not distracting is actually really, really hard. And there are a lot of these ideas in sci-fi that sound fantastic and then in practice don't work for really silly tactical reasons.
like the notes being written for you in real time being distracting. I think a lot about the human experience defines what works and what doesn't. I could talk about this for hours; I gush on it. I just think it's such an incredible moment to be alive and to be building things. Micky Malka, the great investor, has an art installation that does what you just described: as you talk in the conference room, it visualizes what you're talking about. It is quite distracting, I will say, in a good way; you sort of can't look away from it, it's just so mesmerizing. But to extrapolate that:
I saw that six months ago or something. These things get better at an alarming pace. One question is always, what are these models bad at? Everyone's very bullish. Everyone's very excited. They're great at a million things. They're going to get better and better. Everyone, I think, is coming around to that. Is there anything that across the model generations you've been surprised that aren't getting better? Things that they just don't do well and consistently haven't done well that are real limitations?
I think it's good to separate the reality today from the limitations that will persist into the future. It is surprising to me how unpersonalized all of these models feel today. If you ask one a question and I ask it the same question, the answers are going to be
identical or almost identical. Given that we're X number of years into this cycle, I think that's really surprising. This is a small thing we do at Granola that people like: if you were using Granola in a meeting and I was using Granola in that same meeting, your notes and my notes would look completely different, just because we built it that way. The things that matter to Patrick in this meeting, we think, are this; the things that are going to matter to Grace are this. But the low level of personalization is surprising to me.
What advice would you have for investors? You've raised money from great investors and I'm sure talked to a ton. Most investors in the technology world and in private markets are mostly or entirely focused on investing around this wave of AI technologies.
And so I think they're all trying to answer the question: what is the best, most productive way to interact with company founders and new applications and all that? I'm curious what advice you would give to those people trying to do their best job of allocating capital to its highest and best use. What would you tell them? Maybe the way to answer is: what have the best investors you've encountered done with you, and what have the worst ones done, that we could avoid?
I'm not an investor, so it's hard for me to give advice to investors, but I can tell you what speaks to me. The same way I talked about needing to know, when you're building a feature, whether you're in exploit mode or explore mode, I think AI as a whole is an explore-mode problem.
No one knows what the right thing is. Maybe foundation models are now more in exploit mode, but everything else, especially at the app layer, is total explore. When you're in explore mode, you need a certain sensibility, which in my opinion is very product-centric: a certain exploration and depth of thought around what's actually going to be a good product, or good for people.
Not many investors talk about that or think about it in a deep way. As for the stuff that stands out from the noise for me, there have been some really good ones, but if I get a cold email and they write a very specific insight about their usage, or about product behavior in the space that they've thought about, maybe something Granola gets right or we get wrong.
That really makes me pay attention because if something's hot, you just get inundated with messages. My inbox is hard to manage right now. And that's just because AI is exciting right now. It may not be exciting tomorrow.
What I want when I partner with an investor is a partner I'm going to work with for a very long time, where we agree on an outlook on the world and on how we think about problems. All the specific executions are going to change in an adapting world, but do you have a similar worldview on how you should go out and solve problems? I know that's a very generic answer, but I have that with my investors.
I think they're great product thinkers and I think they can engage at a bunch of different levels, which is a huge unlock. If I forced you to build something else in this space, Granola ceases to exist and you're not allowed to build Granola 2.0, what's your instinct on where you would go get into explore mode?
Before I started Granola, I was thinking about what I should start. My previous startup was an AI education app called Socratic, and everyone was like, oh, why don't you go into education? And I was like, I think there are a whole bunch of reasons why I don't want to start another education company or an education AI company. But I'd been playing with GPT-4 voice mode, you know, the Scarlett Johansson voice thing? Yeah. With my kids. You can actually turn the camera on there, and they were playing hide and seek
with ChatGPT, which is kind of nuts. My kids are five and seven. They were hiding behind the table and peeking out, and she would be like, oh, I can see you. Tutoring is one thing, or whatever is going to help you get good grades, but that interaction was something that caught me off guard. I just haven't seen an interaction like that between a kid and technology. I don't know what the product would be, but there's definitely a there there, and I think the way you design it really matters.
What's hard about education? What did you learn building Socratic that would make you caution or encourage others building in that space? The holy grail in ed tech
is basically building one-to-one tutoring. There are all these studies showing that with a one-to-one tutor, the median student performs like a top five or ten percentile student. And that's kind of been true throughout history: a lot of the great people we read about in history books had tutors. Wasn't it Alexander the Great who had Aristotle as a tutor? Of course you're going to do well; that's an unfair advantage.
So I think that's the holy grail: everyone wants a one-to-one tutor. It should be free; it should just be an open-source model that everyone builds on top of. That's just better for everybody. I don't want to build a business there: the incentives around making money in that space and what we want for society aren't super aligned. And I think you're also going to get competition from the generic assistants, as you were asking before about what kinds of use cases are going to get eaten up by the ChatGPTs of the world.
And I think most of education will fall under that category. Can you imagine a successful tool that doesn't have a data advantage, either unique data that it has access to or first-party data like you've built, where as a person uses it, they're building a data set that's custom to them? Is it possible to imagine a data-less AI application that is nonetheless still very successful, or do you think data is just an absolutely critical component of sustainability and edge? For a lot of this, you don't need that much data anymore, and getting a little bit of data is not that expensive or that hard. The way the world's going, you get these foundation models
that can understand the world and can do a whole bunch of different things, and then with a little bit of data on top of that, you can really hone it into a use case. Whereas before, in the old machine learning paradigm, you'd need millions and millions and millions of examples of something. Now it's kind of crazy that we can get away with 50,000 examples, and even if it's a very expensive data type to get, 50,000 is not that hard. So I think about what kind of data is ungettable.
So I don't know; I guess I'm kind of split. This idea that soon anyone is going to be able to build apps, I think that's going to happen, and I think it's going to happen relatively soon. It's less clear to me what the effects on the world are going to be, so I've been thinking about historical examples. Does that make people who are really good at building apps less valuable or more valuable? I don't actually know. In the beginnings of photography, it was almost impossible to take a photo, right? If you just had a camera, that's it, you're winning. And then...
Cameras became more accessible, but they were still expensive, and you had to spend a lot of time getting good at them, with different lenses. And then everyone had a phone in their pocket. In a lot of ways, everyone's a photographer now, and it's amazing what people can do. At the same time, I feel like there's a premium on taste now: if you're actually really great and you can stand out, it's almost like you're more valuable. I'm curious what you think. What's going to happen with software? What's going to happen with apps? Is that it? I think about this a little bit like music.
I would be surprised if in the future everyone just has their own music. I think there's some shared-consciousness, shared-experience thing that matters for how good something is. In the same way, there's a social-proof thing, like the wine studies where the label and knowing how much it costs make it taste better. Knowing how popular a song is might make you like it more, and maybe something similar applies to software. Of course, I don't know, but it seems hard to imagine that everyone's going to have the
will and interest to build their own version of an app versus just being lazy and clicking the app that everyone else uses that's not entirely perfect for them. But I don't think everyone's going to be an app builder in the future because not everyone's an entrepreneur now. With Stripe Atlas and cloud providers and all these things, it's massively easier to be an entrepreneur and not everyone's an entrepreneur. That's what I think. I think the future will often be lots like the past.
And it's really exciting because I can't wait to build some stuff with it. That's my tendency. And other people have different tendencies. I don't know. We'll see. Thankfully, people like you are building this stuff that's going to make it possible. Another question that brings to mind is just, we talked about earlier, the small team meme, how many people are going to be required to build very big businesses. Can you imagine a world where Granola has a thousand employees? Is that...
Is that still going to be... I mean, it is a thing objectively, like there's plenty of AI companies that have big employee bases. But for you specifically, are we entering a zone where there could be a $10 billion company that has 20 employees or something like that? I think so. Here's a very real example for us. We just made our first customer experience hire. We have lots of people writing in and we interviewed a ton of candidates. And...
I'm pretty convinced that you're going to be able to look at a company and say, was their customer experience department created before or after 2025? Maybe this is the year. And the ones post 2025 are going to look completely different. They're probably going to be a lot smaller in terms of people, the way they use tools, and what those people do will be very different. I think the departments that are created before will have trouble. It's much harder to change something that's existing than to build something from scratch on a new paradigm.
We're very ambitious at Granola, so I think we're going to need a lot of people. But you read about these companies that have thousands or tens of thousands of employees, and the world in which that's necessary at Granola is very small. This has been so much fun. I'm so interested in what you're building, how you're building it. I think it's such a great example of new things that are possible and how those are being built in this new world.
Thank you for doing this with me. When I do interviews, I ask everyone the same traditional closing question. What is the kindest thing that anyone's ever done for you? My dad spent a lot of time giving me a lot of feedback on things, oftentimes critical. I always felt very loved and supported, but oftentimes quite critical. And now that I'm in his shoes with my kids, I realize just how hard and tiring that is. And
There's not a lot of upside for you as an individual to do that. Sometimes something just needs to be said to someone, and there's a lot of upside for the individual who gets the feedback and only downside for the person giving it. I appreciate just how hard that must have been and how kind that was, because it was really all for my benefit. How do you think you're the most different, in terms of how you think and behave, than you would be had he not done that?
I think I have a much more honest assessment of myself. People talk about first principles, and I think that phrase gets overused. It's easy to hide behind justifications or philosophies to feel good about something. But I think oftentimes the reality is pretty straightforward. I can hold his voice in my head quite often, which is interesting because he was never an entrepreneur. He never worked in tech, none of that stuff. But the amount of times I hear his voice being like,
That sounds like bullshit. Maybe it's bullshit I'm telling myself or it's something someone else is saying. It's in there a lot. Maybe in closing, how does all that translate into how you articulate the why behind building Granola?
The most honest answer to that is that it's a very personal thing. I am happiest when I am trying to build something that I believe in and that I think is important. And I'm pretty unhappy when I'm not. I'm just wired that way. And I don't know, a boss I had early in my career put his philosophy this way: Aristotle believed in the active realization of human potential. That phrase stuck in my mind. When do I feel like my time is well spent?
Do I feel like I'm actively trying to realize my potential, but also humanity's potential? And I think that comes for me primarily through my work, but also as a parent, which is something I didn't expect, but kind of makes sense now that I'm on the other side.
A beautiful place to close. Chris, thanks so much for your time. Thank you, Patrick. If you enjoyed this episode, visit joincolossus.com where you'll find every episode of this podcast complete with hand edited transcripts. You can also subscribe to Colossus Review, our quarterly print, digital and private audio publication featuring in-depth profiles of the founders, investors and companies that we admire most.
Learn more at joincolossus.com/subscribe.