
AI Revolution: Why This Is The Best Time To Start A Startup

2025/1/24

Lightcone Podcast

People

Harjit
Paul Buchheit
Topics
Paul Buchheit: AI technology is developing quickly, and there are two paths: one that restricts human freedom, and one that maximizes human agency and freedom and helps us become the best versions of ourselves. I think we are moving in the good direction. Now is the best time to start a startup, because AI makes many previously unviable business models possible; it expands the universe of possible businesses.

Paul Buchheit: In the AI era, rapid iteration is still the key to success. AI can create enormous wealth rather than simply replacing human jobs. The future may hold two economic modes, "machine money" and "human money": the former covers basic needs, while the latter reflects uniquely human value. AI can help close social gaps, giving more people access to better lives and opportunities.

Harjit: Since last summer, YC's AI startups have averaged an astonishing 10% week-on-week growth. AI startups are growing faster than ever; many reach $1 million in annual recurring revenue (ARR) within six months. AI has greatly raised founders' ambition and execution. Most of the fastest-growing AI startups sell AI agent services to enterprises, driven by enormous enterprise demand for AI. These startups succeed by quickly building products that genuinely meet market demand rather than by relying on strong sales. A high-quality eval set is critical to an AI product's success, arguably more valuable than the code itself. In the AI era, people with agency have the advantage. Fluency with code-generation tools has become an important hiring criterion for engineers. AI is changing the SaaS industry; some companies have stopped buying new SaaS tools and use code generation to replace them instead. AI can significantly cut customer-support costs and improve profitability. Companies can now reach higher revenue with fewer employees. Startups now prioritize efficiency and leverage over blind scaling. Usage-based pricing is easier for customers to accept because it makes a product's return on investment more tangible. AI startups' competitive advantages lie in rapid iteration, unique data and brand, and a focus on customers. AI has lowered the barrier to starting a company, giving more people the chance to build hugely valuable companies.

Chapters
Current market trends show unprecedented demand and growth in AI startups. Founders are setting ambitious goals and achieving them faster than ever before, with some companies reaching millions in annual recurring revenue within months. This growth is attributed to the high demand for AI-powered solutions in the business world.
  • Unprecedented demand for AI products
  • Startups achieving 10% week-on-week growth
  • Companies hitting $1 million ARR within six months
  • Increased ambition among founders due to AI's success

Transcript

With AI, there's sort of two forks on the road. There's the bad direction and there's the good direction. And the good path, which I think we're moving towards, is looking to say, how do we maximize human agency and freedom and our potential to be the best versions of ourselves? This is the first time no one's saying no. Everyone is saying yes and more. There's just unprecedented amounts of demand for AI stuff.

There's a whole category of businesses or products that would not have been economically viable or even possible to create before that are now possible. And so we've actually just like expanded the universe of possible businesses. Never been a better time to be a founder, that's for sure.

Welcome back to another episode of The Light Cone. And we've got a special one today because we are in Sonoma and we just wrapped up a 300-person retreat of some of our top AI founders. And we also have a very special guest today, the creator of Gmail and our partner at YC, Paul Buchheit. Harjit.

Why is this such a special episode? What are we doing here? Well, we're filming from a different place. So this weekend we put on an AI retreat for some of our alumni companies to share ideas about AI and what they're seeing as they're building their startups. And we learned a bunch of really interesting stuff. So we thought we would film an episode to talk about it. So PV, back in the day when we were working with companies, you know, what was sort of an aspirational growth rate? What would we tell people to try to do week on week?

Well, 10% week-on-week is an amazing metric to hit. Yeah, and I think back then, if you were maybe the top one or two companies in the whole batch, you'd be able to achieve that. And since summer of last year, the wildest thing is realizing that both the summer and fall batches, in aggregate, averaged 10% week-on-week growth over the 12 weeks of the batch.
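For a sense of scale, here is a quick sketch of what 10% week-on-week compounds to over a 12-week batch (illustrative arithmetic only):

```python
# 10% week-on-week growth, compounded over a 12-week batch.
weekly_growth = 0.10
weeks = 12

batch_multiple = (1 + weekly_growth) ** weeks
print(round(batch_multiple, 2))  # prints 3.14
```

In other words, a company holding that rate roughly triples its revenue over the course of a single batch.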

So not just the very best, the Airbnb of the batch, but the batch overall. It's amazing. And it's not just during the batch. Diana and Harj, you guys have companies that you've worked with that have continued an insane growth rate long after the batch is over. Do you want to talk about those ones? One of the ones that really stands out is a particular company that went from zero to 12 million.

in 12 months. I had never seen any growth like that. And I think we've seen that this isn't just the one exceptional company of the batch; more of them are actually doing it as well.

Yeah, that was my general takeaway from this weekend: the rate of execution of startups is going much higher. And you can see it in how quickly companies are hitting a million dollars in ARR. We used to, I think, say you should aim for that 12 to 18 months out of the batch, and that's the equivalent of the 10% week-on-week growth. That's what you should aspire to.

Now it feels to me like that's probably the minimum for an AI startup. I have companies hitting it within six months. And then I was just talking to some founders this weekend about their goals for this year. Some of them have hit a million dollars in ARR just now. I had one company say their goal is 20. Another said they're aiming for 10 at least.

Going from 1 to 20 million ARR in one year. Yes, yeah. And this is a goal. Founders have goals and we hope they hit them. But my point is, I think if you'd said that even a couple of years ago, hey, my goal is to go from 1 to 20, people would have thought you were crazy. Yeah, you would have been either like, that's total nonsense, it's never going to happen. Or you just wouldn't have said it. And I just think the general level of ambition has gone way up because of AI. The things are starting to work. And let's talk about

Why that is the case. Does anyone have any thoughts on that? Well, I guess I have a meme that I showed you guys earlier. You know, I think the classic thing is you have a boss who's sort of like slave driving. And then I still believe this. Like if you're a leader, you're not, you know, sort of slave driving from the back. You're way in the front, like sort of leading. And then the meme has, you know, this one person pulling the cart alone. And that's the introvert.

And what might happen now is the introvert with AI can pull three times as many carts all alone, actually. Once intelligence is truly on tap, then it's actually a force multiplier for founders and people with really, really strong senses of agency. Why specifically is this happening? One of the interesting talks I heard was from Aaron Levie, the CEO of Box. And he was talking about how he's been through multiple cycles of enterprise software, and

And he said that usually when there's a new cycle shift, like cloud or mobile, there are always people in the room, decision makers at the big enterprise software companies, saying no, we're never going to shift to cloud. There's apparently a famous quote from Jamie Dimon to the effect that mobile was never going to be a thing, that it's not that important. But with AI, it's different. This is the first time no one's saying no. Everyone is saying yes, and more. There's just

unprecedented amounts of demand for just AI stuff. Yeah, it's notable that all these companies that are having these incredible growth rates are the same flavor of startup, right? They're all basically selling AI agents to businesses. I mean, there are other companies that were funded that are doing well, but all of the ones you guys were talking about are AI agents for businesses. And so they're all essentially, I think, riding on this wave of enterprises having enormous internal pressure to adopt AI.

This seems like it goes back to our fundamental base advice, which is make something people want. And in this case, traditionally the challenge was convincing people that they wanted the product. And it sounds like what's driving the growth is that the demand is already there. So you just have to show up with a product that works, and you don't even have to be that good at sales. The point is that actually building the product that works is quite hard. A lot of

the demand we're seeing is for software that can actually do the work of a person. So it's essentially services. And

doing that to the equivalent level of a human doing the job, whether it's like customer support, sales, phone calls, whatever it is, it's actually very, very hard. So I think just like a trend I noticed is a lot of our heavily technical CEOs who aren't necessarily the strongest at sales are able to win big enterprise contracts now because although there's 10 or 15 other companies competing for the same contract, it's very, very hard to build a product. And so just building the thing that actually works

does the work well is enough to win these huge deals. A lot of the details of how they build the products: they're really inventing a lot of new patterns, because nobody knew how to get LLMs to behave correctly, let's say, and give very predictable results. People thought that was impossible, but that's because they only tried at the surface level. They play with ChatGPT, it sometimes hallucinates, and then they give up.

The random person just does that, but a lot of the technical founders don't. They find ways, and a bit of wizardry, around how to really state a problem and how to properly prompt it to be very accurate. And it is possible, because we're actually seeing a lot of these products getting

bought by businesses to handle all these complex tasks. One thing I noticed this weekend is that a lot of the talks that the founders gave were around evals and testing, which I don't think would ever have been true at a previous YC conference. Testing was sort of this afterthought thing that you try to do as little of as possible. I heard one really interesting

comment from a founder building an agent, who said that he thinks the most valuable thing his company has built is not the code base. It's the eval set: a gold-standard labeled set of data saying, this is the correct answer for the AI to produce.

That was sort of a mental shift for me. I think there's this perception that companies have data assets, but general random data is actually not that valuable. The thing that's really valuable is a gold standard, meticulously labeled eval set. I mean, this is exactly why the whole ChatGPT wrapper meme is wrong.

Actually, it's the model that is changing very quickly. There are clearly five or more AI labs, all of which are right there at the frontier. So now there's a lot of alternatives to which model, but then the thing that

you know, nobody has, that is actually hard to get, is the eval set. And I'd argue the prompting, which sort of mirrors, basically, all of the opportunity for the people watching right now: it's basically agency and taste. Prompting is just knowing what to tell someone, or the agent, to do. And evals are taste: is it good? Is it beautiful? Is it useful?
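As a minimal sketch of what a "gold standard eval set" can look like in practice: the dataset shape, the grading rule, and the toy keyword agent below are all illustrative assumptions, not any particular company's setup.

```python
# A tiny hand-labeled eval set: each case pairs an input with the
# answer a human judged to be correct.
EVAL_SET = [
    {"input": "Customer asks for a refund on order #123", "expected": "refund"},
    {"input": "Customer wants to change the shipping address", "expected": "update_address"},
]

def run_eval(agent, eval_set):
    """Return the fraction of cases where the agent matches the gold label."""
    passed = sum(agent(case["input"]) == case["expected"] for case in eval_set)
    return passed / len(eval_set)

# Stand-in "agent" for demonstration; in reality this would call an LLM.
def keyword_agent(text):
    return "refund" if "refund" in text else "update_address"

print(run_eval(keyword_agent, EVAL_SET))  # prints 1.0
```

The point is that the labeled pairs, not the scoring code, carry the value: once the eval set exists, swapping in a better model or prompt is easy to measure.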

I had one very interesting tidbit from a founder who said that his designer has actually stopped using Figma mock-ups, and the workflow is interesting. The designer is designing entirely with Claude, going from text to JavaScript. It was counterintuitive to me, because you assume design is this very visual thing, but apparently their designer has enough taste to turn that into text prompts and,

via prompt engineering, essentially, get to actual lines of code that are as tasteful and as good as any Figma

mock-up would have been. So it's like the pattern, as always, is whoever can iterate the fastest wins. And AI is an incredible tool for rapid iteration. I guess people are, you know, sort of worried that the jobs are going to go away. But earlier we were talking about, there's this great Milton Friedman quote where he's visiting a developing country and he sees this large group of workers digging a canal using shovels. And he asks his government official host, you know,

Why are you not using machinery? What's going on? And the guy says, it's a jobs program, actually. And what does Friedman say? He says, if it's a jobs program, you should give them spoons, not shovels. I think that that's actually the most useful mental model for sort of the fear about job loss, at least for now. Certainly, you know, the potential of AI is enormous.

because it is this incredible tool where we're moving not from spoons to shovels or shovels to bulldozers, but to the point where the AI can do so much work that we're actually just able to create dramatically more wealth. And I think that's really the dream we have, peering 10 years into the future: the potential for an unprecedented level of scientific discovery, certainly.

The AI is incredibly good at reading thousands of papers, digesting textbooks, very good at chemistry. So I think we're going to see incredible levels of productivity. The story is fascinating to me because the alternative is what? Replacing people's shovels with spoons. I think it's absurd on its face. That actually is

a little bit like torture. Like, I feel like, you know, growing up, my dad would force me to work in the gardens. That alone was barbarous to me. But, like, if you made me do it with a spoon, what would we call that? That'd be torture, actually. I think the question that I posed to Sam that everyone...

seemed most interested in is essentially: are any of these startups actually going to exist in 10 years? That was certainly relevant to the audience. Everyone felt that was relevant, because if we do achieve AGI, what does that mean? How quickly can that actually displace all of the work that we're doing here? And honestly, no one's quite sure, which makes this a very exciting time in technology. But

you know, it's very clear that we're able to achieve a lot more. And I think like throughout history, every time we've found ways to create

more wealth more rapidly, that's actually worked out really well. Historically, 97% of people were farmers or something like that. And now it's, I think maybe 3% or even less. And so we seem to be very good at inventing new work for ourselves and new ways to find purpose and meaning.

Well, what was the answer to your question? I think... Luxury real estate. You know, what are the things that people will value in the future? And if we get to the point where we really do have just a real abundance of the kinds of things that machines are good at creating. And so this is actually an idea I've been talking about for years.

10 or 15 years now, of almost thinking about the world in terms of machine money and human money. Really, what we want to do is take the products of technology and create massive deflation. We want to drive the prices down to zero so that we're all able to afford them.

Certainly, medical care is something I think a lot about where it's really hard for most people to get really great medical care today. And I think that that's something where in 10 years, we're going to be able to make it so that the majority of humans on Earth have probably better medical care than we here at the table have today, which is, I think, going to be a huge achievement.

But at the same time, that's kind of on the machine money side of things. But then you think about the human money, what are the things that we get that we really value from humans? Like if you go to see live music, we seem to have a preference to see a band live instead of just sit in front of a bunch of speakers.

you know, or maybe robots playing music. Human money, I think, might be something that comes closer to just like an hour of your time, right? And that we actually have almost like a dual economy. That's super interesting. Embedded in that is actually maybe a better version of UBI. I mean, a bunch of the studies around UBI are sort of showing that there are sort of nice benefits here and there, but fundamentally it's not

creating a greater sense of well-being in the way that people hoped maybe five or ten years ago. Yeah, it's definitely had mixed results. And I think a lot of that really comes down to, you know, people still need guidance in life. And especially, a lot of the people who are targeted with UBI are not necessarily people who are in a great social position to begin with. And again, I think that there's a lot of potential for AI to actually

kind of act as like a life coach. And again, like, you know, if you're fortunate enough to grow up with great parents and a great culture or something, you have a lot of advantages that a lot of other people maybe didn't have. And so again, I think like the great promise of AI is kind of taking like the best of what we have available and then just making it universally accessible because we're able to drive the cost so low. I mean, honestly, this past December, I spent a couple of weeks in Vietnam and you're in a developing country. And then you realize like,

There is so much that needs to be developed. There's roads, there's infrastructure. The whole country seems like it's under construction. I imagine that's what China probably looked like in maybe the mid-80s or mid-90s. But there's also this crazy optimism as it's building. If you have a robot, it could build your house. It could clean your house. It could take care of all of these things for you.

And that would radically change your day-to-day, your standard of living. And how much more direct can you be if you, rather than sort of just giving people

more human money, let's give them a better way to live, so everyone can get there. But then I think there's a special thing here around the human money, where the really remarkable things, nobody's guaranteed to have: beachfront property in California or something like that. And that's where human money might go. Everyone has the basics, and actually something that's probably

five or 10 times better than what even the most wealthy people have today. Yeah. Yeah. The way I like to think about it is I kind of think with AI, there's sort of

two forks on the road. There's the bad direction and there's the good direction. And I think the bad direction is one where it's used to just like constrain and control and essentially like imprison us. And the good path, which I think we're moving towards, is looking to say, how do we maximize human agency and freedom and our just potential to be kind of the best versions of ourselves? And we even have that today with some of the creative tools where

I don't have a lot of artistic ability, but with AI image generation or something, I can convey funny concepts or whatever. And we see that again with the design tools. Someone who can't code can now all of a sudden create basic apps and things like that. And we're able to...

realize our visions in a way that we were never able to before. A conversation we were having earlier is how we are actually now on the good timeline for how AI is shaping up. Yeah, absolutely. Because 10 years ago, there was a very different view of how AI could turn out. Do you want to talk about that? Yeah, exactly. So I like to think about things on a 10-year time scale, in part because that's roughly how our startups work.

We seed fund them, they come through YC, and then 10 years later, they IPO. And so I've been asking a lot of people about the year 2035, what do you want to see in 2035?

But also thinking backwards to 2015. And so if we go back to 2015, you know, 10 years ago was where we were having discussions inside of YC about artificial intelligence because we believed that we had sort of crossed a threshold basically in the early teens, somewhere around 2012 is where we started to really believe that actually

we'd broken through. I think everything kind of prior to 2012 was fake, in my opinion. But it was really deep learning that started to really deliver on AI. But when we were looking at this 10 years ago in 2015,

One of the big questions was, you know, is it all reinforcement learning, and what is the thing that we're reinforcing? Because at the time they were playing video games and trying to make the score go up. And this is, I think, also where the paperclip maximizer concept and fear came from: what if you gave it the wrong objective function? And so we had a lot of fear that basically,

that based on our own evolution, our intelligence arose as a survival mechanism, that we became intelligent and other animals became intelligent as a way to survive and perpetuate themselves. And we thought that if AI did the same thing, it would by its very nature want to wipe us out in order to maximize its own odds of survival. And what's happened in the last 10 years is we actually found the right objective function, which is simply to predict the next token.
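As a toy illustration of "predict the next token" as an objective: the bigram counter below is nothing like a real model, but it is the same shape of task, learning from text which token tends to come next.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    # Count, for each token, which tokens followed it in the training text.
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    # Predict the most frequently observed continuation.
    return counts[token].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat".split())
print(predict_next(model, "cat"))  # prints sat
```

Real models replace the counting with a neural network trained on vast corpora, but the objective, predicting what comes next, is the same.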

And that, actually, is intelligence in its most raw form: simply predicting what comes next. So really, at a root level, our reinforcement function is simply predicting what comes next, and that is the fundamental core of intelligence. And the great thing about that is we've been able to create this intelligence that doesn't have a drive to survive. It doesn't

mind that we spin up an intelligence and it does some work and then it disappears because it's just based on that ability to predict patterns. I would argue like the most important part of this is actually the agency piece. Venkatesh Rao has this crazy thing he talks about, and this is sort of a function of maybe the Uber and DoorDash era of things, where in society there's this like API line.

So either you are above the line, meaning you create Uber, or you drive for Uber. And obviously, that's a distillation of the idea. And then in this AI world, basically, if you're below the API line in the old model, you don't have agency; you sort of have to play this never-ending game, with the human doing the paperclip maximizing. And so there's this

other sort of world that I'm hoping we live in where it's humans not just writing the prompt and then the machine runs software and then this vast machinery and that's it and you can never change the prompt. Like that would be tyranny probably. It's conceivable that in the future we might have, and I don't know if this is the right thing, but you know, this sounds like something the EU would do for instance. It would probably mandate there to be a human in the loop on, you know, maybe the CEO of a company has to...

And that might be the case, right? It might be a form of like, we cannot use shovels here. We must use tiny little spoons just for this one part. Right. Yeah, I think kind of the fundamental error they keep making over and over again is taking a very static view of the world and then essentially trying to just regulate the current structure. And that closes off our ability to evolve and see into the future. And it's, again, very difficult and oftentimes implausible. So again, going back to 2015,

I mean, the conclusion of our thinking was actually that we needed to create our own AI lab because at the time, all of the best AI work was being done at Google. And they had, you know, Google had all the money, all the data, all the users, all the researchers.

And it kind of seemed possible that they were going to have essentially a monopoly, and it was all going to be locked up inside of that system. And so we had this sort of loony moonshot idea; at the time, we called it YC Research,

but it eventually got renamed OpenAI. And so OpenAI, we were going to take on Google with a small nonprofit, which sort of doesn't quite pass the laugh test, right? Like how is this little nonprofit going to be the one that actually develops AGI when the other companies have dramatically more resources? And then here we are 10 years later,

It actually happened. And at the time it just seemed incredibly implausible. No one would have believed it. Yet here we are on, I think, basically the best timeline. We actually delivered it. We have

an open and competitive market with, I would say, at least six foundation models that are competing, including an open source one from Meta. And I think that's our best shot for preserving freedom: choice and competition. And talking about Google,

AI actually is dinging their traffic too. Do you want to talk a bit about that? About the stats that were... I mean, I don't think it's out there in the annual reports yet. And certainly we did some research prior to this episode and couldn't really find anything that conclusive, but maybe purely anecdotally, because we're in this pool of people who are very, very early adopters, very...

very much software engineers, our behavior interacting with the internet has changed already. It's not a surprise to me. Some people are starting to report that in their referral traffic, Google referrals are down maybe 15% in the last year. And that certainly mirrors my own behavior. I still use Google, but I'm

increasingly not clicking on any links in Google, because there's the snippet at the top, or the first thing I think of is using ChatGPT with web, or using Perplexity directly. Yeah, exactly. I mean, if you want to understand the future, I think you always have to look at where the early adopters are. And so you say, you know, again, now if we go back

25 years, right? If we go back to the year 2000 or 1999, the early adopters were the people using Google. So at the time, people were like, "Well, Google is just kind of this fringe thing that maybe techie people use or something."

At this point in history, those same people, or the same kinds of people, who were the early adopters of Google are now switching their behavior so that your default action when you're looking for information is ChatGPT or Perplexity or one of these things. And even just observing my own behavior, I'll use Google mostly for navigation, like if I'm just looking for a specific website and I know it's going to give the same thing. But it's starting to have that weird kind of

legacy-website, I'm-using-eBay-or-something vibe to it. An even earlier sign was the drop in traffic for Stack Overflow, which actually started back in 2022, even before ChatGPT, primarily because of GitHub Copilot. And they're down 60% this year. Yeah. Yeah, the pool of people here have quite a good track record of predicting trends, right? If you think of... and by the pool, I mean technical startup founders at YC. Like

I remember 2007, Apple was back on the rise, but you could tell because just everybody who was in a YC batch was using a Mac. You could see the rise of AWS and the shift from rack servers to everything being in the cloud because all of the founders in the batch just started using AWS. Same thing now. Yeah.

I've spoken to a bunch of founders, and for personal productivity, they just have ChatGPT open all day. I've had founders say they're constantly screenshotting their desktop and sending it to ChatGPT if they need to debug something or figure out how to navigate a government website. One example is, "I need to set up some registration. Here's a screenshot. Just tell me exactly where I need to click in order to do this quickly." The other thing that we saw

last year in the summer batch was how much of the batch is using Cursor, which is also one of the companies that's been growing very quickly.

Anecdotally, they hit 50 million in revenue. I think we may have mentioned this in another episode, but I can't think of another tool that's gotten adoption so quickly within a YC batch as Cursor. It went from single-digit percent of one batch to up to 80% of the next. Some people mentioned it felt a little like a technical conference. And a lot of people were trading notes on...

how to hire the best engineers. And a few people said, you know what, if someone comes in and I ask them if they use Cursor or any codegen tools and they say no,

Right now, I can't hire them, because they're not going to be able to be as productive as the rest of my team. I think that's an extension of something Stripe started a decade ago, actually. In general, for engineering and technical interviews, most of the Valley copied Google, I would say: whiteboards, CS problems, which probably made sense for Google and what Google was looking for. But I think Stripe was the first, around 2011, to start doing it differently:

We don't really need you to whiteboard CS problems. We need you to develop web apps really fast. So just give someone a laptop. The idea was you basically sit in a room and you build a to-do list app, or whatever you can, as quickly as you can. And you're basically measured on your max output in those two or three hours.

And so I think if you follow that line through, it doesn't really matter what tools they use; the bar just moves higher. You've got three hours, build what you can build. And you should be able to build a lot more with Cursor than before. If you still believe that you're fundamentally looking for how clearly people can think, or solve hard architecture problems, then you're

sticking to whiteboards. What do you think this means for SaaS? Because one of the crazier things we've been seeing is that Klarna claims they're not even buying new SaaS tools anymore. They're using codegen, and not even hiring new engineers anymore; with their existing engineers, they're just going to replace all the SaaS tools they use to run their fintech.

I definitely heard stories like that. One of the unconference talks was actually specifically about that. This is a company I think we mentioned before, a company called Jerry, that is now halfway to $100 million a year in revenue. But a few years ago, they were still burning $5 or $10 million a year. They had crazy customer support problems.

Basically, GPT-4 dropped, they implemented it, and it totally changed the way they hire. The prompting itself is actually in the hands of their head of customer support. So they have a PM, they have the head of customer support; the engineers built the tool and don't have to touch it. It's mainly a prompt management and workflow tool. And it literally cut their customer support team and its budget in half. And it turned a company that was not able to grow

and burning $10 million a year to a profitable company that is cash flowing, that is also compounding its growth at north of 50% a year, which is like a dream scenario. Yeah, this is a great example, actually, I think, of the way in which AI

is creating wealth, right? Because there's a whole category of businesses or products that would not have been economically viable, or even possible to create before, that are now possible. And so we've actually just expanded the universe of possible businesses. Yeah. It's never been a better time to be a founder, that's for sure. There's definitely been a vibe shift in the attitude towards just building companies.

Take hiring, for example. Certainly 10 years ago, the general sense was that if your company was growing fast and revenue was ramping up, then you would go out and raise a round. A metric you'd hear a lot was, how many people are you at? How many people did you hire this year? How many are you going to hire next year? A bit of a vanity metric.

It just seems to me now that the companies reaching these numbers we're talking about, like a million in ARR and trying to get to 10 or 15 or 20, are doing it with fewer people and expect to keep doing it with fewer people. Which is the new thing. This is why so many of them really haven't even raised a Series A; there's less need for...

for hiring a lot of people to do a lot of the operations. Or maybe, going to your analogy, Gary, the previous generation of startups had this concept of below the API or above the API, so you had a bunch of people who had to build and operate the API. If you had to build a business like Uber or Lyft or DoorDash, marketplaces, you had to do that

hyperscale of hiring lots of people. The funny thing about that era was this concept, probably appropriate for that era, called blitzscaling. There was an entire book about it. And the idea was basically born out of this declining interest rate world where, at the same time,

if you put more money into something, you had these network effects. So if you played that out, yeah, you want to blitzscale. You want to hire as many people as possible and grow faster than everyone else. And then, because of the winner-take-all dynamic, the world's capital markets were just going to funnel you tens of billions of dollars, hundreds of billions of dollars even, to

subsidize growth to be the winner. And that was the game. And from what we can tell from all the people, we had more than 300 founders right here sharing their stories, and I don't think I heard blitzscaling or "I'm trying to hire as many people as possible" at all. Nobody is bragging about, hey, I'm going to be a unicorn. People are literally not bragging about that.

Yeah, it's all about leverage, right? Now the real thing is how much you can do with a little bit of resources, because we have these magical tools that give us superhuman leverage. Part of it is that there's going to be a longer tail of businesses that are possible only now because of AI, and this longer tail is going to be fatter too. It's not just companies doing 20 or 30 million in revenue, but up to hundreds of millions of revenue. And it goes back to the episode when we talked about vertical SaaS.

There's just more willingness to pay for this new category that people are still trying to figure out how to price. That's why there's so much willingness to pay: people want it. It doesn't go just in the software budget for a company; there's budget from the chief AI officer or something, I don't know if that's a title that has come out yet. Aaron Levie made this point too. One thing I'm certainly noticing is that the companies hitting these big revenue numbers and signing these big contracts are actually doing sort of usage-based pricing.

And the detail is, it's not necessarily paying per use, but the pricing is tied to how much you use the product, which is closer to how you would think about selling services than software per se. Obvious ROI, right? The problem a lot of times with selling a product is that the customer doesn't really know if they're getting the ROI, and that makes for a long and painful sales cycle. But if you're able to drop in something that pays for itself in the same month, that's an easy sale, right? Yeah.

I think the way they've priced it is more like services; it's really akin to how intelligence is getting priced. So on the spectrum, with the big-picture worry of whether AI is going to make us all obsolete on one end, and the existential, philosophical conversations on the other, some of the stuff I thought was interesting sits in the middle: it's just hard to predict the timeline of the tools themselves. There were some interesting talks about RAG, for example. I think Sam maybe seeded this with his talk: if you have infinite context, huge context windows, do you even need RAG or retrieval tools at all?

That's the kind of thing where, as a startup or a builder right now, people are more concerned about: am I using the right tools? And is this going to still make sense in three to six months? I think that's actually a direct consequence of the labs' position: if you're an AI lab,

you're on the frontier, and the way you know your thing is working is that your model is bigger, you're farther along on the scaling law. So when I meet people from AI labs, they almost all talk about bigger, better models; they're model makers. And then obviously, we also spend a lot of time with very scrappy founders who have very little capital.

There were just as many talks on the other side. I went to one that was very much about systems-level programming. I think it was Tavus. Tavus is building these real-time AI avatars with video and audio that are very realistic.

Part of the trick is they got it to very low latency: 600 milliseconds, which is really fast. So fast, in fact, that some customers said, "Oh, no, no, no, it's too fast." It's a bit uncanny when it's too fast; it's like now it's being rude, it's just interrupting me every time. A lot of what they build is an SDK for other companies, so a lot of the products are getting built with this Zoom-style video interface where, instead of another human,

it is using them. I love their talk because it's a good illustration that, yes, the labs are going to continue to do their thing, maybe on a faster timeline than we even imagine: nine months, 18 months, maybe there's even a breakthrough every three months. That's where people get in their heads, like, why should I do any of this? OpenAI models are just going to be infinitely smart and I should just lie down on a bed. Yeah.

What I would say is I'm actually heartened by all the stories I heard. Will the models change? Will the technology change? Will Tavus change its stack? Yes, they've already seemingly rewritten their stack multiple times to take advantage of what's been going on, and their product has only gotten better in the marketplace as time goes on. And then what will it look like? Will there be a moat at the model layer? Maybe not. The same AI labs today are talking about

how there's going to be a trillion-token context. It's like, man, how much is that going to cost? Ultimately, engineering and systems matter; those are actually the most valuable things right now. And along the way, you're going to have these golden evals. I hate to bring in consulting speak, but what are the moats? The moats in the end are a brand, and data that no one else has.

Sometimes it literally is caring about customers that the giant company will never care about, right? Actually, I think the other moat is going to be that, ultimately, startups move quickly. One of the remarkable things I observed is that a lot of the founders have actually rebuilt a lot of their tech stack

to be with the latest. They were very willing to say, oh, this particular approach to RAG doesn't work, or this vector database isn't working, throw it away; pgvector became the better thing, let's use that. Throw it away and use the best thing. So what was fun to see is that I think the best startups are going to be the ones that can build the fastest, stay at the bleeding edge, and be willing to reevaluate assumptions about what's the best approach. And I heard a lot of how things got built: they redo it,

or they do it again with the latest and best. Which is also another reason they're securing enterprise contracts, these big contracts, faster than ever. Big companies have never been great at continuing to build great software, but now, if you need to constantly rip and replace the tool you're using every three months to stay at the bleeding edge, it's going to take a big company three months just to get the meeting scheduled to discuss whether they should reevaluate the tools. We're going to plan it in the next...

Yeah, we can get to that in, like, 2029 for sure. Yeah, and these companies getting to the six or 12 million, for example, I know they've actually rewritten a lot of their tech stack many times. And the architecture, every time I actually talk to them: oh yeah, we threw away that thing we told you about, it's this new way of doing it. It's like, okay. And that's every month, every other month.

From talking to founders this weekend, what was your sense of the overall vibe? I think it's pretty exciting. I mean, I don't know that there's ever been a better time. Again, just looking back historically, really to the foundation of YC: if we jump back not 10 years to when we were starting OpenAI, but 20 years to the Summer Founders Program, the thesis behind why Paul

and team started YC was the realization that it was getting easier to build startups. You didn't need to raise a mountain of capital and hire a giant team; a couple of smart kids could build a web app. And that trend has only accelerated now with AI, where you can build an entire $12 million business with just a handful of employees. So it again goes back to how technological leverage

enables people who have ambition and insight to do incredible things. Well, that's all we have time for today, but we'll catch you next time on The Light Cone.