Startups can now generate tens of millions of dollars in revenue within 24 months, often with minimal initial investment. This is due to the rapid transformation of AI pilots and proof of concepts into real revenue, enabled by advancements in AI reliability and infrastructure.
Initially, there was a consensus that the ChatGPT store would dominate the AI app ecosystem, crushing other startups. However, the store turned out to be insignificant, and many successful AI applications, like Perplexity and Glean, emerged independently of OpenAI.
The open-source movement, including models like LLaMA, has democratized AI development. It has allowed startups to build on multiple models, reducing dependency on a single foundation model and enabling more innovation in AI applications.
Model routers have become crucial for startups, allowing them to use the best model for specific tasks, such as speed or complexity. This flexibility has become a key entry point for building new AI-powered applications.
Startups are growing at an unprecedented rate, with some achieving 10% weekly growth during their YC batches. This has led to faster revenue milestones, such as reaching $1 million ARR in record time.
Vertical AI allows startups to create highly specialized applications tailored to specific industries, such as legal tech or customer support. This approach has proven to be highly effective, as different verticals require unique workflows and solutions.
AI coding tools like Cursor and Replit have made programming more accessible and efficient, allowing non-technical users to prototype applications. This has led to a significant increase in productivity and a change in how startups approach hiring and scaling.
AR/VR hardware is constrained by physics, requiring significant advancements in optics and compute power to achieve a lightweight form factor. The lack of compelling applications has also hindered widespread adoption.
Regulatory concerns around AI, such as the Biden EO, have eased, allowing startups to innovate without the fear of overly restrictive laws. This has been a significant boost for the AI startup ecosystem.
Amazon's internal AI applications, such as large-scale code migrations, could be released to the public, similar to AWS. This could create new infrastructure opportunities for startups, enabling them to scale more efficiently.
The wildest thing right now is you can start a company that can make tens of millions of dollars literally in 24 months, and you can do it for potentially $2 million to $5 million. A year ago, I remember many of the startups in the batch would get sort of enterprise proof of concepts or pilots in particular. And there was a lot of cynicism around whether any of those pilots would translate into real revenue. Fast forward a year, I think we have all
first-hand experience that these pilots have turned into like real revenue. It's still early days, honestly. Like, you know, we sort of breathe a sigh of relief right now in 2024, but it's anyone's game, honestly. Like these things are moving so quickly.
Welcome back to another episode of The Light Cone. I'm Gary, this is Jared, Harj and Diana, and collectively we funded companies worth hundreds of billions of dollars right at the beginning. So 2024, what a year. How are you feeling about this, Harj? Pretty great. I think this is the year that everything broke in favor of startups. What I've been thinking about a lot recently is when ChatGPT launched two years ago now,
The immediate consensus view was all of the value would go to OpenAI. And very specifically, do you all remember when they announced the GPT or the ChatGPT store? Yeah. I remember the consensus was everything that was built on top of ChatGPT was a GPT wrapper, and the app store was just going to be released and crush every single person trying to build an AI application and everything.
OpenAI would be a ginormous company, but there'd be no opportunity for startups. It sounds kind of ridiculous to say that now because- Who even remembers the ChatGPT store? Exactly. The ChatGPT store itself was a nothing burger. But more importantly, what are the big AI applications today? I'd say outside of ChatGPT itself, the breakout consumer application is Perplexity. The breakout enterprise application is probably Glean, maybe. Yeah.
In legal tech, you have Casetext, you have Harvey; prosumer, you have PhotoRoom. The point being, there are many, many applications that have been built not by OpenAI. It's been a great time to build startups. Yeah. The wildest thing right now is you can start a company that can make tens of millions of dollars literally in 24 months from zero.
And you can do it for potentially, you know, $2 million, $5 million. That's sort of the story of one of these companies, Opus Clip, which never had to raise a real Series A. And that's something that we sort of see across the YC community as well. Yeah, I think that's a particularly important point that you can do it as a startup without raising tons of capital because pipeline...
Post the GPT store launch, I then remember Anthropic and Claude emerged. And the consensus view for a while was all of the value is going to go to one of these foundation model companies. And that the only way you can compete in AI is to raise huge amounts of money, either because you've got venture capital or you're Amazon or Facebook or Google with tons of cash already. But that if you weren't one of the big foundation models, there would be no value.
And the applications built on top of these things would...
either be built by the foundation model companies themselves or just not be that valuable. Again, something that turned out to be completely not true, right? And in particular, what drove that is open source, like the weird series of events with the weights being leaked and Meta just rolling with it. Via torrent. Yeah. It was going to force Meta's hand to launch Llama, which was funny. And people thought, oh, it was just this cool open source model, but it was
18 months behind OpenAI, and people started doing a lot of derivative work out of it, like Vicuna and all these other animals related to llamas that came out. And Ollama, one of the companies at YC as well, enables people to do local, Docker-like development, with models running on device. It was pretty cool.
But people didn't think that they were going to be able to catch up. And the thing that changed from 2023 to 2024
is that during the summer, it was a turning point. It was the first time that the top foundation model in all the rankings and benchmarks was Llama. And that was a shock to the community. Yeah, so it turns out choice matters. And choice means that it's not as much about the model. I think the model still matters quite a lot. But once you have choice in model, it means you can't have this sort of monopoly pricing. You have that model, your competitor also has that model, but all the other things seem to end up mattering a lot more: product, your ability to sell, your ability to actually adjust to user feedback, your ability to get to zero churn. All of those suddenly become far more important than capturing a light cone of all future value through the model. Right.
A very specific way I felt this is I remember a year ago working with startups in the batch that were essentially building model routers, just an API to call a specific model. And I remember a lot of the motivation for that at the time was reducing cost. It was like, oh, you don't want to just burn up all of your
ChatGPT calls; you want to spread them out across various different models. And the argument against that was just, oh, the cost of all this stuff is going down to zero anyway. There's no value to be had in being a model router, and no one wants to build their applications with a model router; they're all just going to call whatever's the best model. Fast forward a year, that's totally not true. From what I can tell, the model router was actually a really great entry point into just building sort of a new
stack for building LLM-powered apps. And most of the applications we're seeing, I think they just don't want to be beholden to a specific model. Does that map with what you've seen? Yeah, actually, one of the things we've seen now in the fall batch that just presented at Demo Day, one of the trends that shifted from summer '24 to fall '24, was precisely what you're saying: companies started to use multiple models for their applications, like the best one for
speed at some point, because sometimes you need to parse a lot of the input very quickly, and it's fine if it's a bit more lossy, and then you need the bigger model to handle the more complex task. So a lot of companies in fall '24 have this multiple-model architecture to use the best one for each task, which is similar to the concept of the model router, but the idea evolved: instead of simple routing, it became more of an orchestration.
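The orchestration pattern described here, a fast, cheaper model for high-volume parsing where some lossiness is fine, and a stronger model for the complex reasoning steps, can be sketched roughly like this. The model names and the complexity heuristic are illustrative assumptions, not anything the companies mentioned actually use:

```python
def estimate_complexity(task: str) -> float:
    """Crude heuristic: flag tasks that mention multi-step reasoning keywords.
    A real system might use task metadata, token counts, or a learned classifier."""
    keywords = ("analyze", "plan", "reason", "multi-step")
    score = min(len(task) / 500, 1.0)
    if any(k in task.lower() for k in keywords):
        score = max(score, 0.8)
    return score


def route(task: str) -> str:
    """Pick a model tier: a fast, cheaper model for bulk parsing where some
    lossiness is acceptable, and a stronger model for the complex task."""
    if estimate_complexity(task) < 0.5:
        return "fast-small-model"
    return "strong-large-model"


def run_pipeline(documents: list[str]) -> dict:
    """Orchestration rather than plain routing: the pipeline fixes which tier
    handles each *stage*, instead of routing one request to one model."""
    parse_tier = route("quickly extract fields from each document")
    answer_tier = route("analyze all extracted records and plan a multi-step answer")
    return {"parse_stage": parse_tier, "answer_stage": answer_tier, "num_docs": len(documents)}
```

The difference from a pure router is in `run_pipeline`: every document goes through the fast tier first, and only the final synthesis step pays for the expensive model.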
I think a concrete example we gave a couple episodes ago was Camphor, a company you work with. They use the fastest model for parsing PDFs, and for the more complex ones they use o1, and that's how it's done. And other companies are doing fraud detection. They have this concept of a junior risk analyst, where they just use fast and easy GPT-4, maybe the mini version, and then they use the bigger one, like o1. Or the other example is, I think Cursor talks about it in their episode with Lex Fridman. They also have this complex architecture with multiple models, and this is why it works well. They do one very specific for predicting
what you're going to type next, but one for understanding the whole code base. So, very different tasks. That's definitely happening now. Yeah, the other thing that popped up for the fall batch: there's a company I'm working with called Variant, and what they're trying to do is take basically state-of-the-art open source LLM models that can do code gen and then teach them aesthetics.
So, starting with icon generation. They built this huge sort of post-training workflow, the idea being that as the open source models get smarter and better at codegen broadly, they can just
take the next version of that, and then take their post-training architecture and dataset, and then basically teach a given model aesthetics. What a certain thing is supposed to look like, and not in a diffusion sort of way, but actually at the SVG level. We think SVG will actually translate into all kinds of aesthetics.
It's an interesting approach, and one of the newer ones, in that post-training is a coherent way to sidestep the whole idea that all of the value is accruing to the model, especially because of open source, to your point. The other thing I've
been having flashbacks to is, a year ago, I remember many of the startups in the batch would get sort of enterprise proof of concepts or pilots in particular. And there was a lot of cynicism around whether any of those pilots would translate into real revenue. Lots of parallels to crypto, and how anytime there's some new interesting technology... well, blockchain more specifically than crypto. But anytime there's a new technology, enterprises always want to run pilots and POCs, because it's someone's job to check off, yeah, we did the hot new technology thing. The chief innovation officer must have his due. We've spoken about this in one of our episodes, I think. And fast forward a year, I think we have all
first-hand experience that these pilots have turned into real revenue. And if anything, the startups in the YC batch now are going to sell into real enterprises faster than they have before, and are ramping up revenue and reaching milestones like a million dollars ARR faster than I've certainly ever seen. Yeah, the fall batch just did this again, actually, although the first time I think we noticed it was actually the summer batch of this year.
And one of the funnier things we realized was, do you remember when Paul Graham would tell us how fast you needed to grow during the YC batch? 10% a week. 10% a week. And the wild thing is, in aggregate, across both the summer and fall batches, that's what those batches did. Wow. Which I don't think ever happened. Which is 3x over the course of YC. Yeah, 3x over the course of YC, which I don't think has ever actually happened on average. On average. Before, it was only the best companies that did that, the top
quartile or something, right? So the companies are better. The general thing that is true is that the time it's taking to reach $100 million in annual revenue is trending down.
Yeah, and not only that, we had dinner with Ben Horowitz recently, and he was saying that when they started Andreessen Horowitz, the common understanding was that in any given year, there'd only be 15 companies that would even make it to $100 million a year in revenue. And he said they ran the numbers for the last 20 years, and every decade,
the number of companies that could actually make it to $100 million went up by 10x. So what was 15 per year maybe 20 years ago, I mean, we're talking about 1,500 companies a year that have a real shot at actually making that number. And when you combine that with what we're seeing in the summer and fall batches, it's not that surprising.
And Jared had a really good argument on our last episode about how vertical AI is going to enable these 1,500-plus companies to bloom. Yeah, and that's why it's growing so fast: the value prop of these products to companies is so incredibly strong that
they're just flying off the shelves, because companies are smart and they can do an ROI calculation. And when the ROI is fantastic, all these truisms that people believe about enterprise sales cycles and how hard it is to get big enterprise deals go out the window, because companies are smart and they'll make rational decisions. You know, Harj, there's another way that this broke in favor of startups that I was thinking about.
It's hard to even remember now, but a year ago, one of the things that people said a lot was that these LLMs
are not reliable enough to deploy in the enterprise. They hallucinate. Yeah, that was why a lot of people said these pilots and POCs wouldn't translate into real contracts: it's too risky a technology for people to actually deploy. And not only is it translating into real revenue, but it's translating into real deployments that are being used at large scale, doing thousands
of tickets a day. And I think it's because we've learned how to make the agents reliable, via the kinds of techniques that Jake talked about when he was here, and all this infrastructure has grown up around the models that's enabled people to make them reliable. That's actually a big trend this year: this concept of thinking of AI as more agentic.
That is a term that bubbled up a lot this year. It was not really part of the conversation last year. Last year was more about a lot of things that were very chat-like conversations.
I mean, that was kind of the riff on it, but now it's remixed into a bunch of agents for XYZ. And Gary just put out a great explainer video about computer use from Claude. The capability of the models keeps pushing in the direction of being able to do complex multi-step things, actually take over your computer, call other applications, and perform complex tasks that just didn't seem possible a year ago. Yeah.
What about regulation? Seems like we sort of dodged a bullet there with SB 1047. And it looks like some of the Biden EO is not that likely to survive the Trump White House. TBD what that means in the longer term. But certainly one of the things that we were very worried about was that a certain amount of math beyond a certain level would suddenly become illegal or require registration at your local office. Yeah.
It's certainly been a weird time to be in tech, because I've never experienced software and technology intersecting with politics so much. And in particular, I'm not used to genuinely caring about national politics affecting startups in a YC batch, or just companies that are less than a year old. But for a moment it really was worrying. It wasn't clear whether the startups would actually be able to build innovative AI applications versus
suffering from regulatory capture by OpenAI and a few big players. We're obviously very glad it broke in the favor of startups. Seems like we're still in the early game, right? I mean, it's very easy to see that
The platforms themselves really will or could possibly resemble the Win32 monopoly. Windows has access to the APIs. They, in fact, know all the stats about what's working on their platforms. And guess what? They can build it into their platform. We sort of breathe a sigh of relief right now in 2024. But
You know, it's anyone's game, honestly. These things are moving so quickly. I wouldn't totally breathe your last sigh of relief yet. We've got to keep working on this. Okay, so it's clearly been a great year for startups. What else has been happening? Who else has it been a great year for, do we think? There's certainly been some big funding rounds, right? Like OpenAI, unsurprisingly, has raised huge amounts of capital. Scale. Yeah, even within YC, we've seen Scale AI really break out this year. $6 billion for OpenAI, $1 billion for Scale, $1 billion for SSI, the
new Ilya Sutskever startup. Scale, I think, is just worth talking about because it's such a classic startup story. I mean, you were there in the early days, right? You interviewed them for YC. Tell us what the idea was that they interviewed with, and how they ended up landing on what is probably one of the best
startup ideas of the last 10 years. The fun thing about the Scale AI story is that it is sort of the epitome of the classic YC startup story. And there's other kinds of startups that get started, like SSI, for example; that's not a typical YC startup story, where
some very well-established people raise a billion dollars with a PowerPoint pitch. But Scale AI is the classic story of how young programmers can just gradually build a $10 billion company over time by being smarter and harder working than anybody else. And so yeah, when Alex interviewed at YC, he wasn't working on anything related to AI. It was a completely different idea. And the idea for Scale AI kind of got pulled out of him by the market. And it actually took several pivots, because the original idea at YC didn't have anything to do with AI. And then for a long time, he was basically doing data labeling for the self-driving car companies. They applied, as I remember, with a healthcare-related idea. Yeah, it was a website for booking doctor's appointments.
Okay, cool. And then they pivoted during the batch. Do you remember how they came up with the data labeling idea? Because this must have been, was this 2016? Yeah. The way they came up with the data labeling idea was that Alex had worked at Quora and Quora had to do some data labeling for like moderation and stuff. And so at the time, the big data labeling service was Amazon Mechanical Turk.
And they were deemed unbeatable because they were run by Amazon, and Amazon could throw infinite money at it. It was already at scale; it was quite large scale already. But Alex had a unique insight, which is he'd actually used Mechanical Turk at Quora, and he knew that it kind of sucked to actually use. And so he had this sort of unique insight on the world, and he just tried to build a better Mechanical Turk, basically the version he would have wanted when he was at Quora.
And as I remember it, like really, their early traction came almost entirely from one customer, Cruise, right?
there was just an unprecedented demand for labeled data for training sets that just hadn't existed before. And so they were able to ride that wave. And then as that wave was like cresting,
LLMs got big and all of these companies needed to do RLHF at very large scale, and Scale was just perfectly positioned to move into that business as well. Yeah. I think the Scale story is just so interesting because pre-LLM it was clearly a multi-billion dollar business anyway. And it caught the LLM wave, which has now propelled it into probably a $100-billion-plus company. And I'm seeing that at the ground level too, where many companies that maybe finished the batch, or even pre-batch, didn't have an idea, and then pivoted into an AI idea that's taking off. I'm just seeing much more success in founders who waited it out and found an idea that they just couldn't find before. I have a company from a year ago. They pivoted the whole batch, and it actually took them six months after the batch until they realized it: one of the founders' parents ran a dental office, so he just decided to go hang out at the office to see if there was anything he could automate.
And they just ended up building an AI back office for dental offices. And now their week-over-week growth is fantastic; it's doing really, really well. And I'm seeing lots of cases like that spring up. Definitely seeing that as well. I think there's something about the advantage of having all these very hardcore young technical founders that are willing to just bet the farm and go all in on everything.
They see just a little bit of a glimmer of, oh, this is where the future is going to be, let me just try it, and then it actually ends up working. Like the one that started with the dentist. I have a lot of teams that pivoted as well into different spaces where they found those glimmers: computer use came out, and I have a couple of companies that are working on it and betting and going in that direction, and it's working well. I mean, it's still early. This is just the fall batch. But that's cool too. Okay, so what are some of the specific trends and waves that startups have been riding coming out of the batches? Voice AI is something we've talked about. It's clearly
maybe the most promising vertical for AI right now in terms of just raw traction. Do you think voice is a winner-take-all, or will it be something that has sort of a hundred different verticals that are very tailored to those specific verticals?
That's literally one of the questions I get from some of our voice AI startups themselves. They're like, should I be going horizontal, or should I just continue to grow within my vertical? It feels to me like voice is just like AI: it touches everything, and there are so many different applications for it that there are probably infinite applications to build where voice is the interesting element. I mean, things just spring off the top of your head, like language learning applications. I'm sure there's not going to be just one really cool voice-AI-powered language learning application; it's probably going to be multiple of them. Remote work and teleconferencing, probably a whole other area where there's interesting things to do with voice AI. And even within customer support, we highlighted a number of companies we talked about last time. We've got many power help, Kappa.ai. Yeah. It turns out that customer support is not really one vertical. There are many different flavors of customer support, and they're very different on the inside once you get into the details.
Because I think there's very specific types of workflows you need to do per industry. And that's to the point of why vertical AI agents are going to really flourish. I mean, same thing for voice. It's just very different workflows. If you're building the
I don't know, the voice agent to do customer support for an airline, it's very different than doing it for a bank, very different than doing it for a B2B SaaS company, et cetera. Yeah, I guess that question of whether there's going to be pure horizontal integration is sort of like asking, will there only be one website? Yeah.
It'd be like saying... there's just going to be both. There'll be horizontal infrastructure companies that do really well, and vertical applications. To say otherwise would be like saying, oh, Stripe powers payments on the internet, and it's also just going to have all the most valuable applications that accept payments on the internet. It's just not how it works. There's enough value in just being the horizontal infrastructure layer. So I'm sure there'll be great voice AI companies that just make it really easy for you to build your own voice AI application, while there'll also be hundreds of really valuable vertical ones.
What are the other trends that we've seen besides voice? We were talking about robotics earlier. We are certainly working with more founders building robots this year than I think any year ever. What's driving that?
I have an ex-Apple team called Weave Robotics, and they're going to try to ship a real robot in 2025. It costs about $65,000 to $70,000, but that's actually what it costs to have the actuators and the safety needed to actually have it work in your home. I think it's actually driven by this idea that
the LLM itself can be sort of the consciousness of the robot. Like, am I doing this thing that my owner needs me to do? How do I actually interact with them and the other people in the household? But it's funny, because then the vision-language-action model that might actually do a certain thing, like fold laundry, is almost tool use inside of the broader LLM consciousness. So I feel like that's one of the things that
I'm excited to see, you know, will it really work? And I think we're going to find out this year. I guess the way I think about it, robotics is basically half AI and half hardware, and the AI half of the equation is starting to work. Well, the hardware is still hard. The hardware is still very expensive. Yeah, there's some evidence that being able to actually do laundry, for instance, might be one of the first things that gets shipped. I think the dream case for startups is going to be that you can build just the AI or the software piece of it and run it on commodity hardware and do really great things.
The opposite case would be if you actually need to be good at the hardware and the software, and they're coupled together and you need to produce both; then you would expect Tesla to be the obvious winner in the space. And it remains to be seen. I'm pretty optimistic. We have multiple companies, I feel, that are trying to be creative about how to run the models on commodity hardware for specific use cases. It still feels early. It feels like robotics hasn't quite hit its ChatGPT moment yet. Maybe the moment is self-driving cars working in San Francisco. I don't think it's talked about enough. People who don't live in San Francisco often don't realize the extent to which these are fully deployed in San Francisco, and regular people are riding them every single day. Yep. I saw Tony from DoorDash recently, and he said he exclusively uses Waymo everywhere.
I live in Palo Alto and I have no option for it, but I'd love to. It'd be amazing. I mean, the wild thing is there are only a few thousand of these deployed right now in the entire world. And how lucky is it? They're all in San Francisco. Yeah. What about big flops for 2024? I seem to remember that we started one of our Lightcone episodes all wearing Apple Vision Pros and Quest. And we have not talked about AR since. Diana, what happened? What?
It hasn't happened. There's this moment for a lot of the hardware where it needs to be a lot more lightweight. We need to get to this form factor, but there are actual constraints from physics to fit all that hardware in such a small form factor. In order to have enough compute and the optics to fit, it's just super challenging. And I think there's still more actual engineering and physics that needs to be discovered. I think the algorithms are there, but it's just lots of really hard compute and optics problems. It's a tough chicken-and-egg problem, because there's not enough hardware in people's hands for it to be worth it for app developers to build apps. And so there's not enough apps for people to want to buy the hardware. And I feel like for the people who did buy it, the killer application so far seems to be using it as
a really large monitor. And it does work very well for that. For watching movies. You've actually retained this as a user, Gary, right? Yeah, it's great for watching movies. Maybe the one device that I've been playing with that actually feels good is the Meta Ray-Bans.
It doesn't have any actual displays, but I really like it for the audio and voice. And one workflow I've been trying out is using the Meta Ray-Bans and connecting them to any of the voice modes for either
ChatGPT or Claude, and kind of having a conversation with it about a topic. Oh, I haven't tried that. That's an interesting idea. Yeah, that's a great idea. That is a fun thing that I've been doing, just chatting with myself. Maybe I look a little bit like a crazy person while I'm walking, but it's been fun to learn about different topics. Should we talk about AI coding?
2024 was the year that AI coding really broke out. We had the majority of YC founders now using Cursor or other AI IDEs; they just exploded over the summer. Devin proved that you could fully automate large programming tasks. Yeah, all of that was this year. That's pretty wild. Replit's agents continue to improve. I hear more anecdotal stories of
people building Replit apps on their way home from work. Yeah. Being really impressed. Replit took this technology and popularized it among non-technical people for the first time. That's really crazy. And an even less technical version is Anthropic's Artifacts, where you can actually prototype very simple apps and chat with Claude to build really simple front pages.
And then you could prototype stuff as a PM and show it to your engineering team. And it's like a full-fledged working version. Yeah, it's wild because it just means that one person can do so much more. And do you think it's going to change the nature of how startups are actually hiring? Are you seeing this yet? Yeah.
Some of the founders I've met who recently raised their seed rounds coming out of YC, they're not really approaching it how maybe the classic advice would teach them. In the past, you might say, let me try to find...
Let me try to hire more people. For certain tasks, normally I have to find the person who did it at my competitor, the one who did all of customer success; I need to find the person who's under the person who runs that function, and I've got to hire that person and promote them, and they're going to come with all this knowledge and people networks. Some people are saying sort of the opposite, which is: I'm going to get my software engineers to write
processes that use LLMs up front. And, you know, I probably will end up needing to hire that person, but maybe after the Series B or C, and not right now. Yeah, I think I've seen that as well with companies after the batch, where they're looking for
engineers that have more upside and are fully native with an AI coding stack. One of the clever interview tricks I've seen is people doing pair programming and watching the candidate use the tools. You can really tell if someone has actually tinkered with them. It's a new kind of engineer that is not only good at coding, but also at prompting and at telling when the AI output is not correct. I think that part, reading, prompting,
and evaluating the output of all these AI coding agents is actually a lot more critical. Yeah, there's been an interesting controversy this past year, because AI coding agents basically broke the standard programming interviews that companies have been running for years. Actually, Harj, I'm curious what you think about this, since you ran a programming interview company. I guess the interesting debate is whether you should penalize or prevent candidates from using Cursor or one of these tools to ace your programming interview, or whether you should just lean into it, adapt, and test how productive they are with them. I generally think these things tend to go in that second direction: you'll just be measured on your absolute output, and the bar will go up. Stripe, for example, was early on this about a decade or so ago. They recognized that so much of what
they needed their programmers to do was build web applications and web software, not solve hard CS problems. And so the industry shifted away from the Google-style interview of whiteboarding lots of computer science problems toward giving someone a laptop and having them build, say, a to-do app in four hours. I think we'll see the same thing happen here: the industry will adjust, you'll be using these tools, and you'll be expected to do a lot more in a two-hour interview than you are today. To your point, Gary, about startups and maybe how many people they need to hire or how they scale: it seems too early to see dramatic effects on that yet. But one thing I'm interested in is this:
I watched an interview with Jeff Bezos recently, and he said that, one, he's back at Amazon working on AI, and two, Amazon itself apparently has some surprisingly large number, 100, maybe it was 1,000, of internal LLM-powered applications, presumably to just run Amazon. The last time Amazon took something it ran as internal infrastructure and released it to the world was AWS, which completely changed how startups are built. So I'm curious to see whether
they have interesting applications for running Amazon internally that they'll just release, and suddenly there will be new stacks to build and scale your companies on. Then we'll see the thing we've talked about in recent episodes: the 10-person, even the one-person unicorn. One of the applications they talk about is a giant migration off an old version of a programming language. Upgrading language or database versions is a lot of work, and they used LLMs for it. It meant changing hundreds of thousands of lines of code, an engineering project that would have taken six months or more, and it was done in weeks. I mean, Amazon is just such a perfect use case for LLM-powered agents doing back-office processes. They must have an absolute goldmine of opportunities. And they just launched their big foundation model, which is starting to place near the top of some of the benchmarks, so I think they're trying to be another contender in this race. That's interesting, because from the bottom up, certainly from some of the people who still work at Amazon, maybe right out of college, many of them do not have access to LLMs or are actually barred from using them in their day-to-day. So maybe that's one of the downsides of organizations when they get big enough:
The future is already here, but it is not evenly distributed, even within the same organization. But that bodes well for both open source and self-hosting LLMs. It's on my to-do list to build my own stack of Mac minis and run Llama on my own little cluster on my desk. I bought all the hardware to build my own machine, but then we had a baby and it hasn't happened yet. But it will at some point.
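The migration story a moment ago, LLMs rewriting hundreds of thousands of lines in weeks, boils down to a propose-and-verify loop: ask a model for an upgraded version of each file, and keep the rewrite only if an automated check (a compile step, a test suite) still passes. Here's a minimal sketch of that pattern; all the names are invented for illustration, and the model call is abstracted behind a callable rather than any particular API:

```python
# Sketch of an LLM-assisted bulk code migration: propose a rewrite per file,
# accept it only if a verification gate (build/tests) still passes.
from typing import Callable, Dict


def migrate_files(
    sources: Dict[str, str],                 # path -> original source text
    propose_upgrade: Callable[[str], str],   # LLM call: old source -> candidate
    passes_checks: Callable[[str], bool],    # verification gate for one file
) -> Dict[str, str]:
    migrated = {}
    for path, source in sources.items():
        candidate = propose_upgrade(source)
        # Accept the model's rewrite only when verification passes;
        # otherwise keep the original so a human can look at it later.
        migrated[path] = candidate if passes_checks(candidate) else source
    return migrated
```

The verification gate is what makes this safe to run at that scale: the model proposes, the build and tests dispose.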
I've been pretty excited that, you know, YC has been operating back in person in San Francisco for some time, but now we've got a real live demo day all the way back. No more Zoom demo days, no more Zoom alumni demo day. We did alumni demo day right here in this office, right downstairs. That was awesome. And then we took over the Masonic Center and 1,200 investors came, all in one room. It was actually really great for the founders, I thought, because there were about a third as many founders as in the summer batch, and more than 2x, maybe 3x, the number of investors who came to our investor reception party. So it was a ratio of roughly 10 investors for one company. Yeah.
So I think all of them had a really good time. I'd almost forgotten how great the energy of an in-person demo day is. It's just not something you can replicate over Zoom. The YC demo days also always acted as the de facto investor reunion in Silicon Valley, because it's the one event that all the investors would reliably show up at. So they were really excited that we brought it back; when we weren't doing it, there was no equivalent event. It's sort of the homecoming for Silicon Valley. Yeah. So now it's four times a year, and it's the one time that all the top early-stage investors in the world are going to come back to San Francisco for, hopefully, a week's festivities
culminating in our demo day. So it's a real celebration. It feels like in-person, in general, is back. That's certainly another theme of 2024. For the late-stage startups we've been meeting with and speaking to this year, one of the highest-priority items has been figuring out how to get everyone back in person, back into the office. I think the era of "it's going to be remote forever" is definitely gone. Good riddance. Yeah, exactly. In-person is back, and San Francisco is back.
A lot of thanks to you, Gary. The recent elections seem to have gone well, and there's a lot of optimism, I feel, around San Francisco. Yeah, we have a new mayor, and we're hoping he does the right things. We have a very thin moderate majority on the Board of Supervisors, but we did get rid of some of the worst people who created a doom loop in San Francisco. So I'm optimistic. We didn't get everything we wanted, but it's tracking in the right direction. As in startups, so in politics: you always way overestimate what you'll get done in one year, but you always way underestimate what's going to happen in ten. I think it's going to take 10, even 20 years.
But just as startups went from 15 companies a year that could possibly make it to $100 million a year to 1,500 in any given year, knock on wood, it's a lot of work.
I think San Francisco needs to be the beacon for all the smartest people in the world. And that's actually probably the thing that I'm most hopeful for is that we can actually just keep building. So from all of us to all of you watching, happy holidays and we'll see you in the new year.