Our goal with Cursor is to invent a new type of programming, a very different way to build software. So a world kind of after code, I think that more and more being an engineer will start to feel like being a logic designer. And really it will be about specifying your intent for how exactly you want everything to work. What is the most counterintuitive thing you've learned so far about building Cursor? We definitely didn't expect to be doing any of our own model development. And at this point, every magic moment in Cursor
involves a custom model in some way. What's something that you wish you knew before you got into this role? Many people you hear hire too fast. I think we actually hired too slow to begin with. You guys went from zero dollars to 100 million ARR in a year and a half, which is historic. Was there an inflection point where things just started to really take off? The growth has been
fairly just consistent on an exponential. An exponential, to begin with, feels fairly slow when the numbers are really low, and it didn't really feel off to the races to begin with. What do you think is the secret to your success? I think it's been...
Today, my guest is Michael Truell. Michael is co-founder and CEO of Anysphere, the company behind Cursor. If you've been living under a rock and haven't heard of Cursor, it is the leading AI code editor and is at the very forefront of changing how engineers and product teams build software. It's also one of the fastest-growing products of all time, hitting $100 million ARR just 20 months after launching, and then $300 million ARR just two years since launch.
Michael has been working on AI for 10 years. He studied computer science and math at MIT, did AI research at MIT and Google, and is a student of tech and business history. As you'll soon see, Michael thinks deeply about where things are heading and what the future of building software looks like.
We chat about the origin story of Cursor, his prediction of what happens after code, his biggest counterintuitive lessons from building Cursor, where he sees things going for software engineers, and so much more. Michael does not do many podcasts. The only other podcast he's ever done is Lex Fridman's, so it was a true honor to have Michael on.
If you enjoy this podcast, don't forget to subscribe and follow it in your favorite podcasting app or YouTube. Also, if you become an annual subscriber of my newsletter, you get a year free of Perplexity, Linear, Superhuman, Notion, and Granola. Check it out at Lennysnewsletter.com and click Bundle. With that, I bring you Michael Truell.
This episode is brought to you by Eppo. Eppo is a next-generation A/B testing and feature management platform built by alums of Airbnb and Snowflake for modern growth teams. Companies like Twitch, Miro, ClickUp, and DraftKings rely on Eppo to power their experiments.
Experimentation is increasingly essential for driving growth and for understanding the performance of new features. And Eppo helps you increase experimentation velocity while unlocking rigorous, deep analysis in a way that no other commercial tool does. When I was at Airbnb, one of the things that I loved most was our experimentation platform,
where I could set up experiments easily, troubleshoot issues, and analyze performance all on my own. Eppo does all that and more with advanced statistical methods that can help you shave weeks off experiment time, an accessible UI for diving deeper into performance, and out-of-the-box reporting that helps you avoid annoying, prolonged analytics cycles.
Eppo also makes it easy for you to share experiment insights with your team, sparking new ideas for the A/B testing flywheel. Eppo powers experimentation across every use case, including product, growth, machine learning, monetization, and email marketing. Check out Eppo at geteppo.com slash lenny and 10x your experiment velocity. That's geteppo.com slash lenny.
This episode is brought to you by Vanta. When it comes to ensuring your company has top-notch security practices, things get complicated fast. Now you can assess risk, secure the trust of your customers, and automate compliance for SOC 2, ISO 27001, HIPAA, and more with a single platform, Vanta. Vanta's market-leading trust management platform helps you continuously monitor compliance alongside reporting and tracking risks.
Plus, you can save hours by completing security questionnaires with Vanta AI. Join thousands of global companies that use Vanta to automate evidence collection, unify risk management, and streamline security reviews. Get $1,000 off Vanta when you go to vanta.com slash lenny. That's V-A-N-T-A dot com slash lenny.
Michael, thank you so much for being here. Welcome to the podcast. Thank you. It's great to be here. Thank you for having me. When we were chatting earlier, you had this really interesting phrase, this idea of what comes after code.
Talk about that, just like the vision you have of where you think things are going in terms of moving from code to maybe something else. Our goal with Cursor is to invent sort of a new type of programming, a very different way to build software that's kind of just distilled down into you describing the intent to the computer for what you want in the most concise way possible and really distilled down to you just defining how you think the software should work and how you think it should look.
And, yeah, with the technology that we have today and as it matures, we think you can get to a place where you can invent a method of building software that's leagues higher level and more productive, in some cases more accessible too. And that process will be a gradual moving away from what building software looks like today.
And I want to contrast that with a couple of visions of what building software looks like in the future that are in the popular consciousness, that we at least have some disagreement with. One is there's a group of people who think that software building in the future is going to look very much like it does today, which mostly means text editing, formal programming languages like TypeScript and Go and C and Rust.
And then there's another group that kind of thinks, you're just going to type into a bot, and you're going to ask it to build you something, and then you're going to ask it to change something about what you're building. And it's kind of like this chatbot, Slackbot style, where you're talking to your engineering department. And we think that there are problems with both of those visions, and that it's going to look weirder than both. The problem with the chatbot-style end of things is that it lacks a lot of precision. If you want humans to have complete control over what the software looks like and how it works, you need to let them gesture at what they want changed in a form factor that's more precise than just, you know,
change this about my app, in a text box removed from the whole thing. And then the version of the world where nothing changes we think is wrong, because we think that the technology is going to get much, much, much better. And so a world after code,
I think it looks like a world where you have a representation of the logic of your software that does look more like English. You can imagine it in document form, or as an evolution of programming languages toward pseudocode: you have written down the logic of the software, and you can edit it at a high level and point at it. And it won't be the impenetrable millions of lines of code. It'll instead be something that's much terser and easier to understand and easier to navigate. But that world, where the crazy, hard-to-understand symbols start to evolve towards something that's a little bit more human-readable and human-editable, is one that we're working toward.
This is a profound point. I want to make sure people don't miss what you're saying here, which is that what you're envisioning, essentially in the next year, is when things start to shift: people move away from even seeing code, from having to think in code like JavaScript and Python. And there's this abstraction that will appear, essentially pseudocode describing what the code should be doing, more in English sentences. Yeah.
Yep, we think it ends up looking like that. And we're very opinionated that that path goes through kind of existing professional engineers. And it looks like this evolution away from code. And it definitely looks like the human still being in the driver's seat, right? And the human having both a ton of control over all aspects of the software and not giving that up. And then also the human having the ability to make changes very quickly, like having a fast iteration loop, and not just having something in the background that's super slow and takes weeks to go do all your work for you. This begs the question for people that are currently engineers, or thinking about becoming engineers or designers or product managers: what skills do you think will be more and more valuable in this world of what comes after code?
I think taste will be increasingly more valuable. And often when people think about taste in the realm of software, they think about visuals, taste over smooth animations, coloring, UI, UX, et cetera, the visual design of things. And the visual side of things is an important part of defining a piece of software. But as mentioned before, I think the other half of defining a piece of software is the logic of it and how the thing works. We have amazing tools for speccing out the visuals of things. But when you get into the logic of how a piece of software works, really the best representation we have of that right now is code. You can kind of gesture at it with Figma, and you can gesture at it with writing down notes, but really it's when you have an actual working prototype. And so I think that more and more being an engineer will start to feel like being a logic designer. And really it will be about specifying your intent for how exactly you want everything to work. It will be more about the what, and a little bit less about the how, exactly how you're going to do things under the hood.
And so, yeah, I think taste will be increasingly important. I think one aspect of software engineering, and we're very far from this right now, and there are lots of funny memes going around the internet about the trials and tribulations people can run into if they trust AI with too many things when it comes to engineering, building apps that have glaring deficiencies, problems, and functionality issues. But I think we will get to a place where
you will be able to be less careful as a software engineer, which right now is an incredibly, incredibly important skill. And yeah, we'll move a little bit from carefulness and a little bit more towards taste. This makes me think of vibe coding. Is that kind of what you're describing when you talk about not having to think about the details as much and just kind of
going with the flow. I think it's related. I think that vibe coding right now describes exactly this kind of state of creation, which is pretty controversial, where you're generating a lot of code and you aren't really understanding the details.
That is a state of creation that then has lots of problems. By not understanding the details under the hood right now, you very quickly get to a place where you're limited, where you've created something big enough that you can't change it. And so some of the ideas that we're interested in are around how you give people continued control over all the details
when they don't really understand the code. I think that solutions there are very relevant to the people who are vibe coding right now. I think that right now we lack the ability to let the tastemakers actually have complete control over the software. And so one of the issues also with vibe coding and letting taste really shine through from people is you can create stuff, but a lot of it is the AI making decisions that are unwieldy and you don't have control over.
One more question along these lines. You threw out this word taste. When you say taste, what are you thinking? I'm thinking having the right idea for what should be built. And then it will become more and more about kind of effortless translation: here's exactly what you want built, here's how you want everything to work, here's how you want it to look, and then you'll be able to make that on a computer. And it will be less about this kind of translation layer, where you and your team have a picture of what you want to build, and then you have to really painstakingly, labor-intensively lay that out into a format that a computer can then execute and interpret. And so, yeah, I think it's less about the UI side of things. Maybe taste is a little bit of a misnomer, but it's just about having the right idea for what should be built. Awesome. Okay. I'm going to come back to these topics, but I want to actually zoom us back out to the beginnings of Cursor. I have never heard the origin story. I don't think many people know how this whole thing started.
Basically, you guys are building one of the fastest growing products in the history of the world. It's changing the way people build products. It's changing careers, professions. It's changing so much. How did it all begin? Any memorable moments along the journey of the early days? Cursor kind of started as a solution in search of a problem, a little bit, where it very much came from reflecting on how AI was going to get better over the course of the next 10 years.
There were kind of two defining moments. One was being really excited by using the first beta version of GitHub Copilot, actually. This was the first time we had used an AI product that was really, really, really useful, that was actually just useful at all and wasn't just a vaporware kind of demo thing. And in addition to being the first AI product that we'd used that was useful, Copilot was also one of the most useful, if not the most useful, dev tools we'd ever adopted. And that got us really excited. Another moment that got us really excited was the series of scaling laws papers coming out of OpenAI and other places that showed that even if we had no new ideas, AI was going to get better and better just by pulling on simple levers like scaling up the models and also scaling up the data that was going into the models.
And so at the end of 2021, beginning of 2022, this got us excited about how AI products were now possible. This technology was going to mature into the future. And it felt like when we looked around, there were lots of people talking about making models,
And it felt like people weren't really picking an area of knowledge work and thinking about what it was going to look like as AI got better and better. And that set us on the path to kind of an idea generation exercise. It was like, how are these areas of knowledge work going to change in the future as this tech gets more mature? What is the end state of the work going to look like? How are the tools that we used to do that work going to change?
How are the models going to need to get better to support changes in the work? And once scaling and pre-training ran out, how are you going to keep pushing forward technological capabilities? And the misstep at the beginning of Anysphere is, we sort of did this whole grand exercise, and we decided to work on an area of knowledge work that we thought would be relatively uncompetitive and sleepy and boring, where no one would be looking, because we thought, oh, coding's great, coding's totally going to be changed by AI, but people are already doing that. And so there was a period of four months to begin with where we were actually working on a very different idea, which was helping to automate and augment mechanical engineering, and building tools for mechanical engineers. There were problems from the get-go in that
Me and my co-founders, we weren't mechanical engineers. We had friends who were mechanical engineers, but we were very much unfamiliar with the field. So there was a little bit of a blind-men-and-the-elephant problem from the get-go. There were problems around how you would actually take the models that exist today and make them useful for mechanical engineering. The way we netted out is you need to actually develop your own models from the get-go. And the way we did that was tricky. There's not a lot of data on the internet of 3D models of different tools and parts and the steps that went into building up those 3D models.
And then getting that data from the sources that have it is also a tricky process. But eventually what happened was we came to our senses. We realized we weren't super excited about mechanical engineering; it wasn't the thing we wanted to dedicate our lives to. And we looked around, and in the area of programming, it felt like despite a decent amount of time passing, not much had changed. And it felt like the people that were working on the space maybe had a disconnect with us, and it felt like they weren't being sufficiently ambitious about where everything was going to go in the future, and how kind of all software creation was going to flow through these models. And that's what set us off on the path to building Cursor. Okay, so interesting. So first of all, I love that there's this advice that you often hear of go after a boring industry because no one's going to be there and there's opportunity. And, you know, sometimes it works, but I love that in this journey, it's like, no, actually go after the hottest industry,
most popular space, AI coding, app building, and it worked out. And the way you phrased it just now is you didn't see enough ambition, potentially, but you thought there was more to be done. So it feels like that's an interesting lesson. Even if something looks like, okay, it's too late, there's GitHub Copilot out there, some other products, if you notice that they're just not as ambitious as they could be or as you are, or you see almost a flaw in their approach, there's still a big opportunity. Does that resonate? Yeah.
That totally resonates. I think part of it is you need there to be leapfrogs that can happen. You need there to be things that you can do. And I think the exciting thing about AI is, in a bunch of places, and I think this is very much still true of our space, and we can talk about how we think about that and how we deal with that. But
I think that just the ceiling is really high. And yes, if you look around, probably even if you take the best tool in any of these fields, there should be a lot more that needs to be done over the next few years. And so having that space, having that high ceiling, I think is unique.
amongst areas of software, at least the degree to which it is high with AI. Let's come back to the IDE question. So there's kind of a few routes you could have taken, and other companies are doing different routes. So there's building an IDE for engineers to work within and adding AI magic to it. There's another route of just a full AI agentic, Devin sort of product. And then there's just a model that is very good at coding, and focusing on building the best possible coding model. What made you decide and see that the IDE path was the best route?
The folks who were, from the get-go, working on just a model or working on end-to-end programming, I think they were trying to build something very different from us, in that we care about giving humans control over all the decisions in the end tool that they're building.
And I think those folks were very much thinking of a future where, kind of in the end, the whole thing is done by AI, and maybe the AI is making all the decisions too. And so one, there was kind of a personal interest component. Two, I think we've always tried to be intense realists about where the technology is today. Very, very, very excited about how AI is going to mature over the course of many decades. But I think that sometimes people
There's an instinct to see AI do magical things in one area and then kind of anthropomorphize these models and think, it's better than a smart person here, and so it must be better than a smart person there. But these things have massive issues. And from the very start, our product development process was really about dogfooding and using the tool intensely every day.
And we never wanted to ship anything that wasn't useful to us. And we had the benefit of doing that because we were the end users of our product. And I think that that instills a realism in you around where the tech is right now. And so...
That definitely made us think that we need the humans to be in the driver's seat. The AI cannot do everything. We were also interested in giving humans that control, too, for personal reasons. And so that gets you away from just being a model company; it also gets you away from just kind of this end-to-end stuff without the human having control. And then the way you get to an IDE, versus maybe a plugin to an existing coding environment, is the belief that programming is going to flow through these models, and the act of programming is going to change a lot over the course of the next few years.
And the extensibility that existing coding environments have is so, so, so limited. So if you think that the UI is going to change a lot, if you think that the form factor of programming is going to change a lot, you necessarily need to have control over the entire application. I know that you guys today have an IDE, and that's probably the bias you have, of this is maybe where the future is heading. But I'm just curious, do you think a big part of the future is also going to be
AI engineers that are just sitting in Slack and just doing things for you? Is that something that fits into Cursor one day? I think you'll want the ability to move between all of these things fluidly, effortlessly. And sometimes I think you will want to have the thing kind of go spin off on its own for a while. And then I think you'll want the ability to pull in the AI's work and then work with it very, very, very quickly, right? And then maybe have it go spin off again.
And so these kind of background versus foreground form factors, I think you want all of that to work well in one place. And I think the background stuff, there's a segment of programming that it's especially useful for, which is the type of programming task where it's very easy to specify exactly what you want without much description, and exactly what correctness looks like without much description. Bug fixes are a great example of that, but it's definitely not all of programming.
So I think that what the IDE is will totally change over time. And our approach to having our own editor was premised on, it's going to have to evolve over time. And I think that that will both include, you can spin off things from different surface areas like Slack or your issue tracker or whatever it is. And I think that will also include the pane of glass that you're staring at is going to change a lot. And we just mostly think of an IDE as the place where you are building software.
I think something people don't talk enough about when talking about agents, and all these AI engineers that are going to be doing stuff for you, is that basically we're all becoming engineering managers with a lot of reports that are just not that smart. And you have to do a lot of reviewing and approving and specifying. I guess, thoughts on that? And is there anything you could do to make that easier? Because that sounds really hard. Anyone that has a large team, or has had a large team, is like, oh my God, all these junior people checking in with me, doing not-high-quality work over and over. It's just like, oh, what a life. It's going to suck. Maybe eventually one-on-ones with all of them. So many one-on-ones. Yeah. So the customers we've seen have most success with AI, I think, are still fairly conservative about some of the ways
in which they use this stuff. And so I do think today that the most successful customers really lean on things like our next-edit prediction, where your coding is normal and we're predicting the next set of actions you're going to do. And then they also really lean on scoping down the stuff that you're going to hand off to the bot. And, you know, for a fixed percentage of your time spent specifying and reviewing code from an agent, or from AI overall, there are kind of two patterns. One is you spend a bunch of time specifying things up front, the AI goes and works, you then go and review the AI's work, and then you're done. That's the whole task. Or you could really chop things up, right? So you specify a little bit, the AI writes something, you review; specify a little bit, the AI writes something, review. And autocomplete is all the way at that end of the spectrum, right?
And still, we see that often the most successful people using these tools are chopping things up right now and keeping the chunks small. That sounds less terrible. I'm glad there's a solution here. I'm going to go back to you guys building Cursor for the first time. What was the point where you realized, this is ready? What was the moment of like, okay, I think this is time to put it out there and see what happens?
So when we started building Cursor, we were fairly paranoid about spinning for a while without releasing to the world.
And so to begin with, the first version of Cursor was actually hand-rolled. Now we use VS Code kind of as a base, like many browsers use Chromium as a base, and we work off of that. But to begin with, we didn't, and we built the prototype of Cursor from scratch. And that involved a lot of work. We had to build our own everything; there are a lot of things that go into a modern code editor, including support for many different languages, navigation support for moving around a codebase, error-tracking support, things like an integrated command line, and the ability to connect to remote servers to view and run code. And so we kind of just went on this blitz of building things incredibly quickly, building our own editor from scratch, and then also the AI components.
And it was after maybe five weeks that we were living on the editor full-time, had thrown away our previous editors, and were using the new one. And then once it got to a point where we found it a bit useful, we put it in other people's hands and had this very short beta period. And then we launched it out to the world within a couple of months from the first line of code. I think it was probably three months.
And it was definitely a like, you know, let's just get this out to people and build in public quickly. The thing that took us by surprise is we thought we would be building for a couple hundred people for a long time. And, you know, from the get-go, there was kind of an immediate crush of interest and a lot of feedback too. And, you know, that was super helpful. We learned from that. And that's actually, you know, why we switched to being based off of VS Code instead of just, you know, this hand-rolled thing. A lot of that was motivated by kind of the initial user feedback.
And, you know, then we've been iterating in public from there. I like how you understated the traction that you got. I think you guys went from zero dollars to 100 million ARR in like a year, year and a half or something like that, which is historic. What do you think was the key to the success of something like this? You talked about dogfooding being a big part of it. Like, you built it in three months. That's insane. What do you think is the secret to your success?
The first version, the three-month version, wasn't very good. And so I think it's been a sustained paranoia about all of the ways in which this thing could get better. The end goal is really to invent a very new form of programming that involves automating a lot of coding as we know it today. And no matter where we are with Cursor, it feels like we're very, very far away from that end goal. And so there's always a lot to do. But I think a lot of it has been not over-rotating on that initial push, but instead the continued evolution of the tool, just making the tool consistently better. Was there an inflection point after those three months where things just started to really take off?
To be honest, it felt fairly slow to begin with. And maybe it comes from some impatience on our part. But I think there's the overall speed of the growth, which continues to take us by surprise. I think one of the things that has been most surprising too is that the growth has been fairly just consistent on an exponential of just consistent month-over-month growth.
accelerated at times by launches on our part and other things. But an exponential, to begin with, feels fairly slow when the numbers are really low. And so it didn't really feel off to the races to begin with. To me, this sounds like build it and they will come actually working. You guys just built an awesome product that you loved yourselves as engineers. You put it out. People just loved it, told everyone about it. It being essentially all just us,
you know, the team working on the product and making the product good in lieu of, you know, other things one could spend one's time on. You know, we definitely spent time on tons of other things. For instance, building the team was incredibly important. And, you know,
doing things like support rotations is very important. But some of the normal things that people would maybe reach for in building the company early on, we really let those fires burn for a long time, especially when it came to things like sales and marketing. And so just working on the product, building a product that you like and your team likes, and then also adjusting it for some set of users, that can kind of sound simple, but it's hard to do well.
And there are a bunch of different directions one could run, a bunch of different product directions. And I think that one of the difficult things is focus: strategically picking the right things to build and prioritizing effectively is tricky. I think another thing that's tricky about this domain is it's kind of a new form of product building, where it's very interdisciplinary, in that we are something in between a normal software company and a foundation model company. We're developing a product for millions of people, and that side of things has to be excellent. But also, one important dimension of product quality is doing more and more on the science and the model side of things in places where it makes sense.
And so doing that element of things well, too, has been tricky. But yeah, the overall thing to notice is, some of these things sound simple to specify, but doing them well is hard, and it's tough to keep running at them.
I'm excited to have Andrew Luo joining us today. Andrew is CEO of OneSchema, one of our longtime podcast sponsors. Welcome, Andrew. Thanks for having me, Lenny. Great to be here. So what is new with OneSchema? I know that you work with some of my favorite companies like Ramp and Vanta and Watershed. I heard you guys launched a new data intake product that automates the hours of manual work that teams spend importing and mapping and integrating CSV and Excel files. Yes. So we just launched the 2.0 of OneSchema FileFeeds.
We've rebuilt it from the ground up with AI. We saw so many customers coming to us with teams of data engineers that struggled with the manual work required to clean messy spreadsheets.
FileFeeds 2.0 allows non-technical teams to automate the process of transforming CSV and Excel files with just a simple prompt. We support all of the trickiest file integrations: SFTP, S3, and even email. I can tell you that if my team had to build integrations like this, how nice it would be to take this off our roadmap and instead use something like OneSchema.
Absolutely, Lenny. We've heard so many horror stories of outages from even just a single bad record in transactions, employee files, purchase orders, you name it. Debugging these issues is often like finding a needle in a haystack. OneSchema stops any bad data from entering your system and automatically validates your files, generating error reports with the exact issues in all bad files.
I know that importing incorrect data can cause all kinds of pain for your customers and quickly lose their trust. Andrew, thank you so much for joining me. If you want to learn more, head on over to oneschema.co. That's oneschema.co. What is the most counterintuitive thing you've learned so far about building Cursor, building AI products? I think one thing that's been counterintuitive for us, and I hinted at it a little bit before, is we definitely didn't expect to be doing any of our own model development when we started.
As mentioned, when we got into this, there were companies that were, immediately from the get-go, going and just focusing on training models from scratch. And we had done the calculation on what it would cost to train GPT-4, and we just knew that that was not something we were going to be able to do. And it also felt a little bit like focusing one's attention in the wrong area, because there are lots of amazing models out there. Why do all this work to replicate what other players have done, especially on the pre-training side of things, you know, taking a neural network that knows nothing and then teaching it the whole internet? Yeah.
And so we thought we weren't going to be doing that at all. And it seemed clear to us from the start that with the existing models, there were lots of things that they could be doing for us that they weren't doing, because the right tools hadn't been built for them. In fact, though, we do a ton of model development. Internally, it's a big focus for us on the hiring front, and we have assembled a fantastic team there.
And it's also been a big win on the product quality side of things for us. And at this point, every magic moment in Cursor involves a custom model in some way. So that was definitely counterintuitive and surprising. And it's been a gradual thing, where there was an initial use case for training our own model where it really didn't make sense to use any of the biggest foundation models. That was incredibly successful; we moved to another use case that worked really well, and we've been going from there. And one of the helpful things in doing this sort of model development is picking your spots carefully: not trying to reinvent the wheel, not trying to focus on places where the best foundation models are excellent, but instead focusing on their weaknesses and how you can complement them.
I think this is going to be surprising to a lot of people, hearing that you have your own models. When people talk about Cursor and all the folks in the space, they would kind of call them GPT wrappers, just sitting on top of ChatGPT or Sonnet. And what you're saying is that you have your own models. Talk about just the stack behind the scenes. Yeah, of course. So we definitely use the biggest generation of models in a bunch of different ways. They're really important components of bringing the Cursor experience to people. As for the places where we use our own models:
Sometimes it's to serve a use case that a foundation model wouldn't be able to serve at all for cost or speed reasons.
And so one example of that is the autocomplete side of things. This can be a little bit tricky for people who don't code to understand, but code is this weird form of work where sometimes really the next 5, 10, 20, 30 minutes of your work is entirely predictable from looking over your shoulder. I would contrast this with writing. In writing, lots of people are familiar with Gmail's autocomplete, and the different forms of autocomplete that show up when you're trying to compose text messages or emails or things like that. They can only be so helpful, because often it's just really not clear what you're going to be writing just by looking at what you've written before. But in code, sometimes when you edit a part of a codebase, you're just going to need to change things in other parts of the codebase, and it's entirely clear how you're going to need to change them. And so one core part of Cursor is this really souped-up autocomplete experience, where you predict the next set of things you're going to be doing, across multiple files, across multiple places within a file.
And making models good at that use case: one, there's a speed component; those models need to be really fast. They need to give you a completion within 300 milliseconds. There's also a cost component: we're running tons and tons and tons of these models. Every keystroke, we need to be changing our prediction for what you're going to do next. And then it's also this really specialty use case, where you need models that are really good not just at completing the next token of a generic text sequence, but at auto-completing a series of diffs: looking at what's changed within a codebase, and then predicting the next set of things that are going to change, both deleted and added and all of that. And we found a ton of success in training models specifically for that task. So that's a place where no foundation models are involved; it's our own thing. We don't have a lot of labeling or branding about this in the app, but that powers a very core part of Cursor.
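To make those constraints concrete, here's a minimal TypeScript sketch of what a keystroke-triggered, latency-budgeted next-edit prediction loop could look like. This is purely illustrative: every name here (predictNextEdits, showGhostText, the EditDiff shape) is a hypothetical stand-in, not Cursor's actual code or API.

```typescript
// Sketch of the constraints described above: re-predict on every keystroke,
// abort stale requests, and enforce a ~300 ms latency budget.
// All names are hypothetical illustrations, not Cursor's real internals.

interface EditDiff {
  file: string;
  startLine: number;
  endLine: number;
  replacement: string; // text that replaces the given line range
}

// Stub standing in for a small, fast, specialty diff-prediction model.
async function predictNextEdits(
  recentDiffs: EditDiff[],
  cursorContext: string,
  signal: AbortSignal
): Promise<EditDiff[]> {
  // A real implementation would call a hosted model here, passing `signal`
  // so the request can be cancelled when a newer keystroke arrives.
  return [{ file: "app.ts", startLine: 42, endLine: 44, replacement: "..." }];
}

let inflight: AbortController | null = null;

async function onKeystroke(recentDiffs: EditDiff[], context: string): Promise<void> {
  inflight?.abort(); // every new keystroke invalidates the previous prediction
  const controller = new AbortController();
  inflight = controller;

  // Enforce the latency budget: a completion slower than ~300 ms is useless.
  const timer = setTimeout(() => controller.abort(), 300);
  try {
    const diffs = await predictNextEdits(recentDiffs, context, controller.signal);
    if (!controller.signal.aborted) showGhostText(diffs);
  } finally {
    clearTimeout(timer);
  }
}

function showGhostText(diffs: EditDiff[]): void {
  // Placeholder for rendering multi-file edit suggestions inline in the editor.
  console.log(`suggesting ${diffs.length} edit(s)`);
}
```

The design point the sketch tries to capture is that prediction runs on every keystroke, so cancellation and a hard latency ceiling matter as much as model quality, which is why a small specialty model, rather than a frontier model, is the natural fit.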
And then another set of places where we're using our models are to help things like Sonnet or Gemini or GPT. And those sit both on the input of those big models and on the output. On the input side of things, those models are searching throughout a codebase, trying to figure out the parts of a codebase to show to one of these big models. You can kind of think about this as like a mini Google search that's specifically built for finding the relevant parts of a codebase to show one of these big models.
And then on the output side of things, we take the sketches of the changes that these models are suggesting you make to that codebase, and we have models that fill in the details. The high-level thinking is done by the smartest models; they spend a few tokens on doing that. And then these smaller, specialty, incredibly fast models, coupled with some inference tricks, take those high-level changes and turn them into full code diffs.
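As a rough illustration of that input-side/output-side split, here's a hedged TypeScript sketch of the pipeline as described: a small retrieval model on the way in, a frontier model for the high-level sketch, and a fast apply model on the way out. All function names are hypothetical stand-ins, not Cursor's actual architecture.

```typescript
// Hypothetical three-stage pipeline matching the description above.
// The stubs return canned values; a real system would call hosted models.

interface CodeChunk {
  file: string;
  text: string;
}

// 1. Input side: a small model ranks codebase chunks by relevance to the
//    request, like a mini search engine over the repo.
async function rankChunks(request: string, chunks: CodeChunk[], topK: number): Promise<CodeChunk[]> {
  return chunks.slice(0, topK); // stub: a trained retrieval model in reality
}

// 2. A frontier model (Sonnet / Gemini / GPT-class) spends a few tokens
//    sketching the change at a high level rather than writing every line.
async function frontierModelSketch(request: string, context: CodeChunk[]): Promise<string> {
  return `// plan for "${request}" touching ${context.length} chunk(s)`;
}

// 3. Output side: a small, very fast "apply" model expands the sketch into
//    a full, concrete diff against the original files.
async function applyModelExpand(sketch: string, context: CodeChunk[]): Promise<string> {
  return `${sketch}\n// ...full diff expanded by the fast specialty model`;
}

async function answerEditRequest(request: string, codebase: CodeChunk[]): Promise<string> {
  const relevant = await rankChunks(request, codebase, 20);
  const sketch = await frontierModelSketch(request, relevant);
  return applyModelExpand(sketch, relevant);
}
```

The economics fall out of the split: the expensive frontier model emits only a terse plan, while the cheap specialty models handle the token-heavy work of retrieval and diff expansion.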
And so it's been super helpful for pushing on quality in places where you need a specialty task, and it's been super helpful for pushing on speed, which is such an important dimension of product quality for us too. This is so interesting. I just had Kevin Weil on the podcast, CPO of OpenAI, and he calls this the ensemble of models. That's the same way they work, to use the best feature of each one. And to your point, the cost advantages of using cheaper models.
These other models, are they based on Llama and things like that, just open-source models that you guys plug into and build on? Yeah, so again, we try to be very pragmatic about the places where we're going to do this work, and we don't want to reinvent the wheel. So we start from the very best pre-trained models that exist out there, often open-source ones, and sometimes in collaboration with the big model providers that don't share their weights out into the world. Because the thing we care about less is the ability to read, line by line, the matrix of weights that go into giving you a certain output. We just care about the ability to train these things, to post-train them. And so, by and large, yes, open-source models, and sometimes working with the closed-source providers to do things. This leads to a discussion that a lot of AI founders and investors always think about, which is moats and defensibility in AI. So it feels like one moat in the space is custom models. How do you just think about long-term defensibility in the space, knowing there are other folks, as you said, launching constantly, trying to eat your lunch? I think that there are ways to build in inertia and traditional moats, but I think, by and large, we're in a space where it is incumbent on us, and everyone in this industry, to continue to try to build the best thing. And I truly just think that the ceiling is so high that no matter what entrenchment you build, you can be leapfrogged.
And I think that this resembles markets that are maybe a little bit different from normal software markets, normal enterprise markets of the past. One that comes to mind is the market for search engines at the end of the 90s and beginning of the 2000s. Another market that comes to mind that resembles this market in many ways is actually the development of the personal computer and minicomputers in the 70s, 80s, 90s.
And I think that, yes, in each of those markets, the ceiling was incredibly high. It was possible to switch. You could keep getting value from the incremental hour of a smart person's time, the incremental R&D dollar, for a really long time. You wouldn't run out of useful things to build. And then in search in particular, though not in the computer case, having distribution was helpful for making the product better too, in that you could tune the algorithms, you could tune the learning, based off of the data and the feedback you're getting from users. And I think that all of those dynamics exist in our market too. And so I think that maybe the sad truth for people like us, but the amazing truth for the world, is that there are many leapfrogs that exist. There are many more useful things to build. We're a long way away from where we can be in five, ten years. And it's kind of incumbent on us to keep that engine going.
So what I'm hearing is it sounds a lot more like a consumer sort of moat, where it's just, be the best thing consistently so that people stick with you, versus creating lock-in and things like that, like Salesforce, where there's a contract with the entire company and you have to use the product. Yeah, and I think the important thing to note is, if you're in a space where you kind of run out of useful things to do very quickly, then that's not a great situation to be in. But if you're in a place where big investments and
having more and more great people working on the right path can keep giving you value, then you can get these economies of scale of R&D, and you can deeply work on the technology in the right direction and get to a place where that is defensible. But yes, I think there's a consumer-like tendency to it, and I really think it's just about building the best thing possible.
Do you think in the future there's one winner in this space, or do you think it's going to be a world of a number of products like this? I think the market is just so very big. And this is also one thing, you asked about the IDE thing early on, and one thing that I think tripped up some people who were thinking about the space is they looked at the IDE market of the past 10 years and they said, you know, who's making money off of editors? Like, you know, there's all these...
It's this super fragmented space where everyone has their own thing with their own configuration. There's one company that commercially actually makes money off of making great editors, but that company is only so big. Then the conclusion was it was going to look like that in the future. I think that the thing that people missed was that there was only so much you could do building an editor in the 2010s for coders.
You know, the company that made money off of editors was doing things like making it easy to navigate around a codebase, doing some error checking and type checking, having good debugging tools, which were all very useful. But I think that the set of things you can build for programmers, the set of things you can build for knowledge workers in many different areas, just goes very far and very deep. And I think that really the problem in front of all of us is the automation of a lot of busywork in knowledge work, really changing all the areas of knowledge work in front of us to be much more reliable and more productive.
So that was all a long-winded way to say, I think the market we're in is really, really big. I think it's much bigger than people have realized, bigger than building tools for developers was in the past. And I think that there will be a bunch of different solutions. I think that there will be one company, and it's to be determined if it's going to be us, but I do think that there will be one company that builds the general tool that builds almost all the world's software. And that will be a very, very generationally big business.
But I think that there will be niches you can occupy, doing something for a particular segment of the market or for a very particular part of the software development lifecycle. But as general programming shifts from just writing formal programming languages to something way higher level, there's the application you purchase and use to do that, and I think there will generally be one winner there, and it will be a very big business.
Juicy. Along those lines, it's interesting that Microsoft was actually right at the center of this first, with an amazing product, amazing distribution. Copilot, you said, was the thing that got you over the hump of, wow, there could be something really big here. And it doesn't feel like they're winning. It feels like they're falling behind. What do you think happened there? I think that there are specific historical reasons why Copilot might not, so far, have lived up to the expectations that some people have for it. And then I think that there are structural reasons. And to be clear, Microsoft, in the Copilot case, was obviously a big inspiration for our work, and in general I think they do lots of awesome things, and we're users of many Microsoft products. But I think that this is a market that's not super friendly to incumbents, in that a market that's friendly to incumbents might be one where there's only so much to do, it kind of gets commoditized fairly quickly, you can bundle it in with other products, and the ROI difference between products is quite small. In that case, perhaps it doesn't make sense to buy the innovative solution; it makes sense to just buy the thing that's bundled in with other stuff. Another market that might be particularly helpful for incumbents is one where, from the get-go, you have your stuff in one place and it's really, really excruciatingly hard to switch. And, for better or for worse, I think in our case, you can try out different tools and you can decide which product you think is better. And so that's not super friendly to incumbents, and that's more friendly to whoever you think is going to have the most innovative product. And then the specific historical reasons, as I understand them, are that the group of people that worked on the first version of Copilot
have by and large gone on to do other things at other places. I think it's been a little hard to coordinate among all the different departments and parties that might be involved in making something like this. I want to come back to Cursor. A question I like to ask everyone that's building a tool like this: if you could sit next to every new user that uses Cursor for the first time and whisper a couple of tips in their ear to be most successful with Cursor, what would be like one or two tips?
I think right now, and we'd want to fix this at a product level, a lot of being successful with Cursor is kind of having a taste for what the models can do: both what complexity of task they can handle, and how much you need to specify things to the model. Having a taste for the quality of the model, where its gaps exist, what it can do and what it can't.
And right now, we don't do a good job in the product of educating people around that, and maybe giving people some swim lanes, giving people some guidelines. So to develop that taste, we'd give kind of two tips. One is, as mentioned before, we'd bias away from trying in one go to tell the model, hey, here's exactly what I want you to do, then seeing the output and either being disappointed or accepting the entire thing, for an entire big task. Instead, what I would do is chop things up into bits. You can spend basically the same amount of time specifying things overall, but chopped up more. So you're specifying a little bit, you're getting a little bit of work, you're specifying a little bit, getting a little bit of work, and not doing as much of the, let's write a giant thing telling the model exactly what to do. I think that would be a little bit of a recipe for disaster right now.
And so, biasing toward chopping things up. At the same time, and it might make sense to do this on a side project and not on your professional work, I would encourage people, especially developers who are used to existing workflows for building software, to explicitly try to fall on their face and discover the limits of what these models can do, by being ambitious in kind of a safe environment, like perhaps a side project, and trying to push the AI to the fullest. Because a lot of the time we run into people who haven't yet given the AI a fair shake
and are kind of underestimating its abilities. So generally biasing towards chopping things up and making things smaller, but to discover the limits of what you can do there, explicitly just kind of try to go for broke in a safe environment and get a taste for it. You might be surprised in some of the places where the model doesn't break. What I'm essentially hearing is kind of build a gut feeling of what the model can do and how far it can take an idea versus just kind of guiding it along.
And I bet that you need to rebuild this gut every time there's a new model launch. Like when Sonnet, I don't know, 4.0 comes out, you have to kind of do this again. Is that generally right? Yes. It's not...
For the past few years, the shift hasn't been as big as, I think, the first experience people have had with some of these big models. But yeah, this is also a problem we would hope to solve much better for users and take the burden off of them. But yeah, each of these things has slightly different quirks and different personalities.
Kind of along these lines, something that people are always debating, tools like Cursor, are they more helpful to junior engineers or are they more helpful to senior engineers? Do they make senior engineers 10x better? Do they make junior engineers more like senior engineers? Who do you think benefits most today from Cursor? I think across the board, both of these cohorts benefit in big ways. It's a little hard to say on the relative ranking. I will say they fall into different anti-patterns.
So the junior engineers, we see them going a little too wholesale, relying on AI for everything. And we're not yet in a place where you can do that end-to-end in a professional setting, working with tens or hundreds of other people within a long-lived codebase. And then the senior engineers, and this is true for many folks, though not for all. Actually, often, one of the ways these tools get adopted is through developer experience teams within companies. Often those are staffed by incredibly senior people, because those are the people building tools to make the rest of the engineers within an organization more productive. And we've seen some very, very boundary-pushing adoption there; we've seen people who are on the front lines of really trying to adopt the technology as much as possible. But by and large, I would say, on average, as a group, the senior engineers underrate what AI can do for them and stick to their existing workflows. And so the relative ranking is a little hard. I think they fall into different anti-patterns. But they both, by and large, get big benefits from these tools. That makes absolute sense. Yeah.
I love that it's like two ends of the spectrum, like expect too much, don't expect enough. And it's like the three bears. Is that the allegory? Yeah. Yeah. Okay. Yeah. Maybe the sort of senior but not staff, you know, right in the middle. Interesting. Okay. Just a couple more questions. Yeah.
What's something that you wish you knew before you got into this role? If you could go back to Michael at the beginning of Cursor, which was not that long ago, and you could give him some advice, what's something that you would tell him? The tough thing with this is it feels like so much of the hard-won knowledge is tacit and a bit hard to communicate verbally. And...
The sad fact of life, it feels like, is that for some areas of human endeavor, you either need to fall on your face to learn the correct thing, or you need to be around someone who is a great example of excellence in the thing.
And one area where we have felt this is hiring. I think that we actually were... So we tried to be incredibly patient on the hiring front.
It was really important to us that both for personal reasons and also for, I think, actually for the company's strategy, having a world-class group of engineers and researchers to work on Cursor with us was going to be incredibly important. Also getting people who fit a sort of mix of intellectual curiosity and experimentation because there can be so many new things we need to build.
And then also kind of an intellectual honesty, and maybe micro-pessimism and bluntness, because with all the noise, and especially as the company's grown and the business has grown, keeping a level head, I think, is incredibly important too. But getting the right group of people into the company was the thing that, maybe more than anything else apart from building the product, we really, really fussed over. And we actually waited a long time to grow the team because of that. And I think that, you know, many people you hear hired too fast; I think we actually hired too slow to begin with. I think it could have been remedied. I think we could have been better at it.
The method of recruiting that we eventually fell into, and that ended up working really well for us, isn't that novel: going after people that we think are really world-class and recruiting them over the course of, in some cases, many years. It ended up working for us in the end, but I don't think we were very good at it to begin with. And so I think that there were hard-won lessons around both who the right profile was, who we actually needed on the team, what greatness looked like, and then how to talk with someone about the opportunity and get them excited when they really weren't looking for anything. There were lots of learnings there about how to do that well, and that took us a bit of time.
What are some of those learnings, for folks who are hiring right now? What's something you missed or learned? To start with, we actually biased a little too much towards looking for people who fit the archetype: well-known school, very young, had done the high-credential things in those well-known-school environments.
Actually, I think we were lucky early on to find fantastic people who were willing to do this with us and who were later-career. So we spent a bunch of time on a bit of the wrong profile to begin with. Part of that was a seniority thing; part of that was an interest-and-experience thing, too. We have hired people who are excellent and very young, but in some cases they look slightly different from being straight out of central casting. Another lesson is that we very much evolved our interview loop. We now have a hand-rolled set of interview questions, and core to how we interview is that we have people on site for two days to do a project with us, a work-test project.
That has worked really well, and we're increasingly refining it. And then there's learning about what people are interested in, putting our best foot forward, and letting them know about the opportunity when they're really not looking for anything. We've definitely gotten better at those conversations over time. Do you have a favorite interview question that you like to ask? I'd point to this two-day work test, which we thought would not scale past a few people but has had surprising staying power. The great thing about it is it lets someone go end to end on a real project. We do use kind of a canned list of projects, but it gives you two days of seeing a real work product.
And it doesn't have to be incredibly intensive on the team's time. You can take the time you would spend on a half-day or one-day onsite, spread it out over those two days, and give someone a lot of time to work on their project. So that can actually help it scale. It also really helps enforce the "do you want to be around this person" type of test, because you are around this person for two days and share a bunch of meals with them. We didn't expect that one to stick around, but it has been really, really important to our evaluation process. It has also been important to getting people excited, especially at the very early stages of the company, because before people are using the product and know about it,
and when the product is comparatively not very good, really the only thing you have going for you is a team of people that some people find special and want to be around. The two days would give us a chance to have this person meet us and, in some cases, hopefully get convinced that they want to throw in with us.
So, yeah, that one was unexpected. Not exactly an interview question, but kind of like a forward interview. The ultimate interview question. Just to be very clear about what you're describing: you give them an assignment like, build this feature in our actual code base, work with the team to code it and ship it. Is that roughly right? Yes, although we don't use the IP and it's not shipped end to end; it's like a mock. Very often it's in our code base: here's a real mini two-day project,
you're going to do it end to end, largely left alone, though there's collaboration too. And we're a pretty in-person company, so in almost all cases it's actually sitting in the office with us. And you've been saying this has scaled even to today. So how big are you guys at this point? We are going on 60 people. That's small for the scale and impact; I was thinking it'd be a lot larger than that. Yeah. And I imagine the largest percentage is engineers. Yeah.
Yeah. And to be clear, a big part of the work ahead of us is building a group of people that is bigger and still awesome and can continue to make the product better and the service we give to customers better. So we don't plan to stay that small for long.
But part of the reason that number is small is that the percentage of engineering, research, and design is very high within the company. Many software companies, by the time they have roughly 40 engineers, would be over a hundred people, because there's lots of operational work, and often they're very sales-led from the get-go, which is quite labor-intensive. We started from a place of being incredibly lean and product-led, and we now serve lots of our big customers and have built that out. But there's much more to do there. A question I wanted to ask you: there's so much happening in AI, things launching every day, many newsletters whose entire function is to tell you what's happening in AI every single day. Running a company that's at the white-hot center of the space, how do you stay focused, and how do you help your team stay focused and heads-down, just building, without getting distracted by all these shiny things?
I think hiring is a big part of it: getting people with the right attitude.
All of this should come with an asterisk: I think we're doing well here, but we could probably be doing better too, and it's something we should probably talk about even more as a company. But hiring people with the right disposition, people who are less focused on external validation, more focused on building something really great, more focused on doing really high-quality work, and people who are just generally level-headed, where maybe the highs aren't very high and the lows aren't very low: I think hiring can get you through a lot here. And that's actually been a learning throughout the company. You need process, you need hierarchy, you need lots of things, but for any organizational tool you're introducing into a company, and the result you're looking to get from that tool, you can go pretty far by hiring people who already have the behaviors you want that tool to produce. The specific example that comes to mind is that we've been able to get away with not a ton of process on the engineering front (I think we need a little more, but for our size, not a ton) by hiring people who I think are really excellent.
One is hiring people who are level-headed. I think two is just talking about it a lot. I think three is hopefully leading by example. And yeah, for us personally, we've...
since 2021, 2022, been professionally working on this, working on AI. And we've just seen a sea change in the comings and goings of various technologies and ideas. If you transport yourself back to the end of 2021, the beginning of 2022: GPT-3 exists, but ChatGPT doesn't. There's no DALL-E, there's no Stable Diffusion. And then we've gone through all of those image technologies coming into existence, ChatGPT and that rise, GPT-4, all of these new models, all these different modalities, all the video stuff. And only a very small number of these things really affect the business. So I think we've built up a bit of an immune system and know when an event comes around that is actually really going to matter for us. And this dynamic of there being lots and lots of chatter, but then maybe only a few things that really matter,
I think has been mirrored in AI over the last decade, where there have been so many papers on deep learning in academia, so many papers on AI. The amazing thing is that a lot of the progress in AI can be attributed to some very simple, elegant ideas that have stayed around, while the vast majority of ideas that have been put out there haven't had staying power and haven't mattered a ton. So the dynamic is mirrored in the evolution of deep learning as a field overall. Last question: what do you think people still most misunderstand, or maybe don't fully grasp, about where things are heading with AI in building and the way the world will change?
People are still a little too occupied with either end of a spectrum: either it's all going to happen very fast, or this is all bluster and hype and snake oil. I think we're in the middle of a technology shift that's going to be incredibly consequential, more consequential than the internet, more consequential than any shift in tech that we've seen since the advent of computers. Yeah.
And I think it's going to take a while. And I think it's going to be a multi-decade thing. And I think many different groups will be consequential in pushing it forward. And to get to a world where
computers can increasingly do more and more for us, there are all of these independent problems that need to be knocked down, and progress needs to be made on them. Some of those are on the science side of things: getting these models to understand different types of data, be faster, cheaper, smarter, conform to the modalities we care about, take actions in the real world.
And then some of it is on how we're going to work with them: what's the experience a human should actually be seeing and controlling on a computer while working with these things? But I think it's going to take decades, and there's going to be lots of amazing work to do. Not to talk our own book, but one pattern of company that I think will be especially important here is the company that works on automating and augmenting a particular area of knowledge work: it builds the technology under the surface, integrating the best parts from providers and sometimes doing it in-house, and then also builds the product experience on top. People who do that (we're trying to do it in software; others will do it in other areas) will be really, really consequential, not just for the end value that users see. As they get to scale, they'll also be really important for pushing forward the technology, because the most successful of them will be able to build very, very big businesses.
And yeah, I'm so excited to see the rise of other companies like that in other areas. I know you guys are hiring, for folks who are interested and thinking, hey, I want to go work there and build this sort of stuff. What kind of roles are you looking for right now? Any roles you're most excited about filling ASAP? What should people know if they're curious? There are so many things that this group of people needs to do that we are not yet equipped to do. So first of all, we're hiring pretty generically across the board. If you don't think we have a role for something, reach out anyway; that may not actually be the case, and maybe we can learn from you and decide we need something we weren't yet aware of. But by and large, two of the most important things for us to do this year are to have the best product in the space and then to grow it. We're in this land-grab mode where almost everyone in the world is either using no tool like ours or using one that's maybe developing less quickly. So growing Cursor is a big goal. And I would say we're especially always on the hunt for folks who are excellent engineers, designers, and researchers, but also for folks all across the business side.
I can't help but ask this question now that you've talked about engineers. There's this question of: AI is going to write all our code, but everyone's still hiring engineers like crazy, all the foundation model companies especially. Do you think there's going to be an inflection point where engineering roles start to slow down? I know this is a big question, but do you see engineers being more and more needed across all these companies? Or do you think at some point,
there will just be all these Cursor agents running and building for us? Again, we have the view that there's this long, messy middle: it's not jumping straight to a world where you step back, ask for all your stuff to be done, and that's your engineering department. Very much, you want to evolve from programming as it exists today. We want humans to be in the driver's seat,
and we think that even in the end state, giving folks control over everything is really important. You will need professionals to do that and to decide what the software looks like. So yes, engineers are definitely needed, and engineers will be able to do much more. I think the demand for software is very lasting, which is not the most novel observation, but it's kind of crazy to think about how expensive and labor-intensive it is to build things that are pretty simple and easy to specify, or at least look that way to the outside observer, and just how hard those things are to do right now. All of the software that exists right now is what's justified by the cost and demand that we have now; if you could bring that cost down by an order of magnitude, I think you'd have tons and tons more stuff we could do on our computers, tons more tools. I felt this myself: one of my early jobs was working for a biotechnology company, building internal tools for them. The off-the-shelf tools that existed were horrible and did not fit their use case at all, and with the internal tools I was building, there was definitely a ton of demand for things that could be built, far outstripping what I could build in the time I was with them. The physics of working on computers are so great: you should be able to basically move everything around and do everything you want to do, yet there's still so much friction. There's much more demand for software than what we can build today, with simple productivity software costing as much as a blockbuster movie to make. So long into the future, yes, I think there will actually be more demand for engineers.
Is there anything we didn't cover that you wanted to mention, any last nuggets of wisdom you want to leave listeners with? You could also say no, because we've covered a lot. We think a lot about how you set up a team to be able to make new stuff, in addition to continuing to improve the stuff you have right now. If we're to be successful, the IDE is going to have to change a ton; what it looks like going into the future is going to have to change a ton.
If you look around at the companies we respect, there are definitely examples of companies that have continued to ride the wave through many leapfrogs and kept actually pushing the frontier. But they're rare, too; it's a hard thing to do. So part of that is just thinking about it and trying to reflect on it in our day-to-day, the first-principles side of things. Part of it is also trying to get in and study past examples of greatness here.
That's something we think about a lot too. What you just said reminds me: before we started recording, you had all these books behind you, and I was like, what's that over there? It was the history of some old computer company, influential in a lot of ways, that I'd never heard of. I think that says a lot about you: a lot of this innovation comes from studying the past, studying history, what's worked and what hasn't.
Okay. Where can folks find you online if they want to reach out and maybe apply? You said there may be roles they're not even aware of. Where do they go to find that? And how can listeners be useful to you? Yeah, if folks are interested in working on this stuff, we'd love to speak. If they go to cursor.com, they can both find the product and find out how to reach us. Easy. Michael, thank you so much for being here. This was incredible. It was wonderful. Thank you. Bye, everyone.
Thank you so much for listening. If you found this valuable, you can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. Also, please consider giving us a rating or leaving a review as that really helps other listeners find the podcast. You can find all past episodes or learn more about the show at lennyspodcast.com. See you in the next episode.