
#226 Garry Tan: Billion-Dollar Misfits — Inside Y Combinator’s Startup Formula

2025/4/29

The Knowledge Project with Shane Parrish

People

Garry Tan, Shane Parrish

Topics
Garry Tan: I think the secret of Y Combinator's success is that we attract people who genuinely want to create things that change the world. We don't just provide funding; more importantly, we share expertise and experience for free, helping founders overcome the many challenges of building a company. What we value is not a founder's résumé but what they have built and what they are capable of creating. Our 10-minute interview is designed to quickly assess a founder's understanding of the market and their ability to communicate, looking for companies that solve real problems, have durable competitive advantages, and can scale. We keep learning from both successes and failures and continually improve our filtering. We focus on early-stage investing: we invest earlier and with smaller checks, yet with a high success rate. Many of the founders we fund are very young and inexperienced, and their success takes time to accumulate. The YC environment helps founders find a sense of belonging and furthers their success. We push founders to focus on product and market rather than chasing status. The key to startup success is earnestly solving a problem, not pursuing fame and fortune; many great innovations come from sincere problem-solving rather than deliberately chasing commercial success. Teams that stay in the San Francisco Bay Area double their odds of becoming a unicorn. To foster innovation we need to fix housing: lower rents and increase supply. We assess a project's risks, but if a risky project has huge potential we will still consider funding it. Our investment decisions are relatively independent, with internal review to sanity-check the thinking. We weigh a project's social impact and reject projects that could harm society, even if they would be profitable. The ideal founder has many skills, and sales ability is one of the most important; most people's upbringing does not prepare them for sales work, and improving those skills is critical for founders. YC's goal is to help founders change their worldview and become more earnest, more focused people. Earnestness is a key factor in success, reflected in a founder's actions and decisions; earnest founders focus on solving real problems rather than chasing status. Shane Parrish: I think many highly paid software engineers today aren't creating value; they merely maintain stale software with obvious bugs, which reflects how resources are being wasted. In the AI era, the competitive landscape between startups and big tech companies is shifting. Cloud computing has lowered the barrier to entry, letting startups operate at much lower cost. Technical CEOs are becoming more common in the AI era. Founders need to be actively involved in every part of the company to avoid diffuse authority and inefficiency. Overstaffing and wasted resources at large tech companies have slowed technological progress. The current regulatory environment in AI is relatively fair, and competition among multiple labs is driving the market forward. AI regulation should avoid stifling innovation and ensure consumers have more choices. Current AI systems still lack autonomous will and agency, so people need not be overly worried about AI's potential risks. The education system should cultivate students' ability to learn on their own rather than suppress their creativity. AI regulation must account for global effects; one country's rules cannot solve everything. AI may intensify filter-bubble effects, but it may also bring more choice. Regulation should encourage open systems and user choice and avoid technology monopolies. Many people hope AI will gain autonomous will and agency, but I am cautious about that. AI development may affect the balance of power between nations. The current frontier of AI research is test-time compute and reasoning models. Demand for AI will keep growing, which will drive AI-related industries. AI is replacing some traditional jobs, and people will need to adapt to the new employment landscape. The definition of AGI is still contested, but AI has reached near-human performance in many domains. The next few years are a golden period for AI development. Effective prompt engineering breaks complex tasks into smaller steps. AI can make people more productive, but it also requires adapting to new ways of working. In the AI era, continuous innovation and a great user experience are the keys to staying competitive.

Transcript


the world is full of problems. Like, why are people sort of retired in place, pulling down, you know, insane, by average American standards, absolutely insane salaries to build software that, you know, doesn't change, doesn't get better. You know, sometimes I sit there and I run into a bug, whether it's a Google product or an Apple product or, you know, Facebook or whatever. I'm like, this is an obvious bug.

And I know that there are teams out there, there are people getting paid millions of dollars a year to make some of the worst software. And it will never get fixed because people don't care. No one's paying attention. That's just one symptom out of a great many that is, you know, the result of basically treating people like, you know, hoarded resources. The world is full of problems. Let's go solve those things. ♪

Welcome to The Knowledge Project. I'm your host, Shane Parrish. In a world where knowledge is power, this podcast is your toolkit for mastering the best of what other people have already figured out.

If you want to take your learning to the next level, consider joining our membership program at fs.blog. As a member, you'll get my personal reflections at the end of every episode, early access to episodes, no ads, including this one, exclusive content, hand-edited transcripts, and so much more. Check out the link in the show notes for more. Today, we're pulling back the curtain on one of the most powerful forces in the tech and venture capital world, Y Combinator.

With less than a 1% acceptance rate and a track record that includes 60% of the last decade's unicorn startups, YC has shaped the startup world as we know it.

Garry Tan, president of Y Combinator, joins us to break down what separates transformative founders from the rest and why so many ambitious entrepreneurs still get it wrong. We'll explore the traits that matter the most, the numbers behind billion-dollar companies, and why earnestness often beats raw ambition.

But there's a seismic shift happening in venture capital, and AI is at the center of it. We'll dig into how artificial intelligence is reshaping startups from idea generation to regulation, and what it means for the next wave of innovation. If you're curious about Silicon Valley's secrets, the present and the future of AI, or how true innovation gets funded, this conversation is for you. It's time to listen and learn.

I want to start with what makes Y Combinator so successful. I guess I can't talk about YC without talking about Paul Graham and Jessica Livingston. I mean, it started because they're remarkable people. And, you know, Paul, when he started his company, I don't think he ever had the idea that

he would ever become someone who created a thing like YC. He was just trying to help people and sort of follow his own interests, I think. He just said, I know how to

make products and make software and make them in a way that people can use them. And then after he actually sold that company, Viaweb was one of the first, you know, today we have Shopify; Viaweb was sort of like the very first version of it. He actually basically created the first web browser-based program.

So he was one of the first people to hook up a web request to an actual program in Unix. Today we call it CGI-bin, all these different things. But he was so early on the web that it was a new idea to make software for the web that didn't require some desktop thing that you had to use to configure the website.

And so I think he's just always been an autodidact, a really great engineer, and then just a polymath. So I think that that's what really made YC. I mean, he wrote essays, he sort of attracted all the people in the world who wanted to do the thing that he wanted to do.

And so I think Paul Graham and his essays became a Schelling point for people who believed this new thing could really happen in the world. And, you know, that started very early. I mean, I think it started literally with the web itself. And, you know, that's why in 2005 he was able to get

hundreds to thousands of really amazing applications from people who wanted to do what he did. And then the magic is it's only a 10-week program. I think he had...

only a dozen people in that very first program in 2005. And then out of that very first program, Sam Altman went through it. And Sam, I guess it's interesting. I mean, if you have a draw that is very profound, it will draw out of the world the people that it speaks to. And so you end up needing in society these sort of Schelling points

for certain ideas. And then the idea that someone could sit down in front of a computer and create a piece of software that a billion people could use turned out to be very contrarian and very right.

And so today I think of YC as really, it's actually software, events, and media. And I think you've had Naval Ravikant on before. And I think I remember distinctly Naval talking about like, those are the few forms of extreme leverage you have in the world. And so I think Y Combinator is this crazy thing. It's like,

When people realized they could start a startup, they went on Google and they searched and they found Paul's essays. And then through his essays, they found Y Combinator. And then YC started funding people like Steve Huffman, who ended up creating Reddit in that very first batch and selling that to Condé Nast. And Dropbox, then Airbnb, then...

Today, Coinbase, DoorDash, there are just so many companies that are incredible. Airbnb is this insane marketplace that houses way more people on any given night than the biggest hotel chains in the world. And it's on the one hand unimaginable, on the other hand, that's the kind of thing that you can do. You can just do things, which is wild. And so...

I think that that's why it works. We attract people who want to create those things and then we give them money. And then more importantly, I think the know-how is we give it away for free.

Go deeper on that. Yeah. Earlier just now we were chatting about this podcast setup, but we spend a lot of time writing essays and putting out content on our YouTube channels and just trying to teach people

how do you actually do this stuff? There's like a lot of mechanical knowledge about how do you incorporate or how do you raise money for the first time? And all of that is out there for free. And, you know, on the other hand, I think...

I think of doing YC, being in the program. It's a 10-week program. We make everyone come to San Francisco now. At the end of it, it culminates in people raising, you know, sort of the median raise is about a million to a million and a half bucks for, you know, sometimes teams that are two or three people, just an idea starting at, you know,

at the beginning of that. And that's the demo day? Yeah. And yeah, we have, you know, I think we have about a billion dollars a year in...

funding that comes into YC companies. And that's because the acceptance rate to get into YC is only 1%. So let me get this straight. I think I read somewhere 40,000 applications a year. Yeah, I think it's close to 70,000, 80,000 at this point. How do you filter those? Well, we ourselves use software, but we also have 13...

general partners who actually read applications, and we watch the one-minute video you post.

And the most important thing to me is that I want us to try the products, right? Sure, we can use the resume and people's careers and where they went to school. We're not going to throw that out. It's a factor in anything. But the most important thing to me is not necessarily the biography. It's actually...

what have you built? What can you build? Go deeper on the software thing. I don't think I've heard that before. Obviously, you have to use software, but what does the software do? How does it filter? Yeah, I mean, ultimately, the best thing that we can do is actually brute force read. And on average, I think a group partner will read something like 1,000 to 1,500 applications for that cycle that they're working.

So the best thing we can do is, it is basically humans trying to make decisions, you know, which is maybe a little antithetical to the broader thing right now, where it's, you know, let's just use AI for everything. But I think that the human element is still very important.

Most mornings, I start my day with a smoothie. It's a secret recipe the kids and I call the Tom Brady. I actually shared the full recipe in episode 191 with Dr. Rhonda Patrick. One thing that hasn't changed since then, protein is a must. These days, I build my foundation around what Momentous calls the Momentous 3, protein, creatine, and omega-3s.

I take them daily because they support everything, focus, energy, recovery, and long-term health. Most people don't get enough of any of these things through diet alone. What makes Momentous different is their quality. Their whey protein isolate is grass-fed, their creatine uses CreaPure, the purest form available, and their omega-3s are sourced for maximum bioavailability. So your body actually uses what you take.

No fillers, no artificial ingredients, just what your body needs, backed by science. Head to livemomentous.com and use code KNOWLEDGEPROJECT for 35% off your first subscription. That's code KNOWLEDGEPROJECT at livemomentous.com for 35% off your first subscription. I think you're on mute. Workday starting to sound the same? I think you're on mute. Find something that sounds better for your career on LinkedIn.

With LinkedIn Job Collections, you can browse curated collections by relevant industries and benefits, like FlexPTO or hybrid workplaces, so you can find the right job for you. Get started at linkedin.com slash jobs. Finding where you fit. LinkedIn knows how.

And then at the end, you sort of like, I guess the last filter is like this 10-minute interview. So what do you ask in 10 minutes to determine if somebody is going to be part of Y Combinator? I guess the surprising thing that has worked over and over again, ultimately, is in those 10 minutes, either you learn a lot about both the founders and the market or you don't.

So we're looking for incredibly crisp communication. So I want to know, you know, what is it?

And often the first thing I ask is not just what is it, but why are you working on it? Like, I want to sort of understand where did this come from? Did you just read about it on the internet? Or a much better answer is, well, I spent a year working on this and I've got all the way to the edge of what people know about this thing. And what's cool about the biographical is that then it

invites more questions, right? In the best interviews, in 10 minutes, you learn about an entire market. You learn about a set of people that, you know, normally you might not ever hear of. It's like you're traveling. It's like you're traveling the idea maze with...

the people you're talking to. This is all over Zoom. And at the end of those 10 minutes, sometimes the 10 minutes becomes 15. You want to talk to people longer because that's what a great interview feels like to me. It feels like I'm a cat and I see a little yarn and I'm just pulling on the yarn. I'm just pulling on the thread because it's like there's something here. This person understands something about the world that actually makes sense to me.

And I think what we're looking for is actual signal that there's a real problem to be solved. There are people on that end who are willing to pay. And then you're working backwards. What a great startup ultimately is, is something real that people are willing to pay for, that probably has durable moats.

It means that that company could actually become much bigger than... You don't want to start a restaurant, for instance, because there's infinite competition for restaurants. But you do want to start something like Airbnb that has network effects or... That can really scale. Exactly. Or in AI today, one of the more important things is, are people willing to pay for...

And today, because people are not selling software, they're increasingly actually selling intelligence. Like it or not, these are things that you could not buy before. Probably the most vulnerable things in the world today are things that you could farm out to an overseas call center. That's sort of like the low-hanging fruit today. Yeah.

Basically, how do you find things that people want and how do you actually provide it for them? And the remarkable thing is that that's why it only has to be 10 minutes.

One of the things I feel like I learned from Paul Graham interviewing alongside him so many years was that sometimes I'd go through and this person would come in, they had an incredible resume, they had a PhD or they studied under this famous person or they worked at Google or Facebook or all these really famous places. They had an impressive resume.

Or they had the credentials of someone who I felt like should be able to do it. But then they had a mess of an interview. Like we didn't get any signal from it. We didn't understand. Or like it just seemed garbled. Or at the end of it, sometimes they're asking like, oh, 10 minutes is too short. We need more time.

And one of the things I feel like I learned from Paul was that if in 10 minutes you cannot actually understand what's going on, it means the person on the other end doesn't actually understand what's going on and there isn't anything to understand, which is surprising. That's a really good point. I bet you that holds true. Do you look at people that you've been successful with that don't work out and then people that you filtered out that do become maybe successful and try to learn from that? Oh, definitely. All the time. I mean...

I think that's the trickiest thing. I think the system itself will always produce both false positives and false negatives because it is only 10 minutes. But you have the highest batting average. Y Combinator, my understanding is...

It's like 5% of the companies become billion-dollar companies. Yeah, about 2.5% end up becoming decacorns sooner or later. But that would be the highest batting average of any VC firm, maybe with Sequoia being the exception. What's interesting to me is most of the people that I know in that space are doing hundreds of hours of work per company. And you guys can't do that because you have 80,000 people applying every...

And you're still the most or at least top tier in terms of success. Yeah, I mean, what's great is I don't want to compete with Sequoia or Benchmark or Andreessen Horowitz or, you know, they're our friends, honestly. Done right, like we're much earlier than everyone else because we want to actually give them half a million dollars when they have...

just an idea or maybe they don't even know their co-founder yet. That's what makes it more incredible. It's because the batting average should be way lower based on where you're at in the stack in terms of funding. Yeah. You know what it is though? I spent seven years actually away from YC before coming back a couple years ago. So I ended up, I think, in the top 10 of the Forbes Midas list as my final year before coming back to YC.

And why haven't other people... We ask this all the time. Why haven't other people come for us? I think there are lots of people who are doing...

various things that might work. And I guess so far, people sort of lose interest or float off and go do higher status things. Working with founders when they're just right at the beginning and just an idea is actually relatively low status work because it's very high status to work with a company that is...

you know, worth 50 or 100 billion dollars now. But guess what? Like that's 10 years from now or sometimes 15 or 20 years from now. You know, it all starts out very low status and all the way in the weeds. Like you're answering sort of relatively simple questions and you're giving relatively small amounts of money.

Well, you were giving $20K at the start, right? Now you give 500? Yeah, half a million dollars today, yeah. Has that changed the ratio of success? I think some of it is... Well, we find out in 10 years. If anything, I think that the unicorn rate has gone up over time. You know, 10, 15 years ago, I think it was close to maybe 3.5% to 4%. And now we're around 5.5%. Some batches from...

Maybe 2017, 2018 are pushing 8% to 10%. Oh, wow. Some of those companies in that area, in that vintage, about 50% of companies end up raising what looks like a Series A. And then the wild thing about it is it actually takes a long time for people to get there. So I think the YC has actually flipped a lot of the...

I guess myths of venture. One of the myths of venture maybe 10, 15 years ago was that within nine months of funding a company, you will know whether or not that company was good or bad. And going back to that stat, about half of companies that go through YC will end up raising a Series A. That's much higher than any other pre-seed or seed sort of situation that I know of.

But about a quarter of those who raise the Series A, they do it in year five or later. And that's a function of like we're funding 20-year-olds,

22-year-olds, 19-year-olds, 24-year-olds. I mean, we're funding people who are so young that sometimes they've never shipped software before. Sometimes they're fresh off of an internship. It takes three to five years to mature, to learn how to iterate on software, how to deliver really high quality software, how to manage people, how to manage people effectively, give feedback.

And so the wild thing is, I mean, sometimes it takes five years for those things to come together. In my head, and correct me if I'm wrong here, there's a bit of like misfit, geek, people have told me this won't work or won't be successful. And then when I get to Y Combinator, I'm around a whole bunch of other people who are exactly like me. Oh, yeah. For the first time in my life. And they're super ambitious. To what extent do you think that that environment just creates...

better success or better outcomes. Oh, that was definitely true for me. I mean, without that, I feel like what my... I mean, I had a good, a really great community at the end of the day. Like it was, you know, my fellow Stanford grads. But I guess the weird thing to say is that like...

being around people who are really earnestly trying to build helps you 10x more. The default startup scenario out there is not about signal, it's about the noise. Like you're playing for these other things, like how much money can I raise and which high-status investor can I get.

Some people sort of float off and they become scenesters. They're like, oh, let me try to get a lot of followers on Twitter. That's the most important thing. And then what we really try to do at YC during the batch and then afterwards and in our office hours working with companies is when we spot that kind of stuff, it's like, oh, no, no, maybe don't do that. Let's go back to product market activities.

actually building and then iterating on that, getting customers, you know, long-term retention. All of those things are the fundamentals, and everything else is like the trappings of success. And what's funny is, in other communities,

all of those things will always feel more present to hand, and they're easier. Like you can just get it. Like you're, you know, on stage keynoting or, you know, even doing the podcast game. I feel guilty, you know, like...

It's kind of funny. We see that in people and then often that will kill their startup. They take their eye off the ball. Angel investing, if you're a startup founder and suddenly people have heard of you and people try to add you as a scout. People kill their startups all the time by that, just by taking their eye off the ball. Go deeper on that a little bit in terms of focus and...

And how people sort of lose their way unintentionally. And then do they catch it before it starts to go off the rails? Or does it sort of just crash, and then there's no coming back from it? Yeah.

I mean, it crashes and then sometimes you have to go and do your next startup or I don't know, sometimes people just go off and become VCs after that and that's okay too. Is that the difference between somebody who wants to run a company and start a company versus somebody who wants to be seen as running a company and starting a company? I think that that's probably the biggest danger to people who want to be founders. I mean...

I think I've seen Peter Thiel talk about this. He doesn't really want people who want to start startups. From my perspective, it's certainly much better to find people who have a problem in the world that they feel like they can solve and they can use technology to solve. And that's sort of a more earnest way to look at it. And if you look at

the histories of some of the things that are the biggest in the world, they actually start like that. There are lots of interviews with Steve Jobs and Steve Wozniak saying, "I never meant to start a company or ever wanted to make money. All I wanted to do was make a computer for me and my friends." And so many, many more people kept coming to me saying, "Can you build me a computer?"

And they just, you know, like a cat, we're pulling on this thread. It's like the company was a reluctant side effect of this. In history, it seems like a lot of innovation comes from great concentrations of people together, whether it's a city or the Industrial Revolution; all these things tend to be localized and then spread over the world, if I understand it correctly. Why?

Silicon Valley? Why San Francisco? And why haven't other countries been able to replicate that success inside? Well, at YC, what we hope is that people actually come to San Francisco and, you know,

We do strongly advocate that they stay, but it's no requirement. And then what we hope is that if they do leave, they end up bringing the networks and know-how and culture and frankly vibes, and they bring it back to all the other startup hubs in the world. And I think that that's some of the stuff that has actually come about. I mean, yeah.

Monzo was started by my now-partner Tom Blomfield. He's a partner at YC now, but he started multiple startups, multiple unicorns actually, and they're some of the biggest companies in London, for instance. So what we hope is that San Francisco becomes sort of really Athens or Rome in antiquity. Send us your best and the brightest. Ideally, you stay here. One thing we spotted is that

the teams that come to San Francisco and then stay in San Francisco or the Bay Area

they actually double their chance of becoming a unicorn. Oh, wow. So if it's one thing that you could do, it's be around people and be in the place where making something brand new is in the water. So if hypothetically, you created a new country tomorrow and you wanted to spur on innovation, what sort of policy, you got to compete with San Francisco. What sort of policies would you think about? Like, how would you think about setting that up

to attract capital, to attract the right mindset of people, to attract and retain these people. I think what I want for San Francisco, for instance, is I think the rent should be lower. And so rather than subsidizing demand, we actually need to increase supply fairly radically, actually. And that just hasn't happened. I think I was looking at it for the entire last calendar year,

I think maybe Scott Wiener had just posted this on X that literally there were no new housing starts in all of San Francisco proper for the last year. So how are we supposed to actually bring down the rents and make this place actually livable? If San Francisco is the microcosm where...

you know, people build the future. And it is sort of the siren song for, you know, 150 IQ people who are very, very ambitious and have our, you know, techno-optimistic ideology.

And it's also where they are most likely to succeed. Society and certainly America is not serving society the right way if we're getting in the way of these smart people trying to solve these problems, trying to build the future. But just continuing on the Y Combinator theme for a second, are there ideas that you've said no to, but you think they're going to be successful? They just scare you and you're like, no, that's too scary.

I mean, if it's scary but might or probably will be good, I think we want to fund them. And certainly there are things that would be bad for society but are likely to make money and...

The way it has always been is our partners are, everyone's independent. We have a process that is very much predicated on: if you're a general partner at YC, you pretty much can fund what you want. We run it by each other to make sure, sort of double-check the thinking.

I think we're pretty aligned there. Like there are lots of examples of, you know, maybe five or six years ago, there was a rash of telehealth companies that were focused on, for instance, ADHD meds. And I distinctly remember one of our partners, Gustaf Alströmer.

He met that team and he said, you know what? We're not going to fund these guys. It's going to make money, but I don't want to live in a world where it is that easy to get people on these drugs. They're ultimately amphetamines and these are controlled substances and this is the wrong vibe. We did not like the vibe that we got from the founders of that company. So, you know,

I hope that YC continues that way and I think it will. Ultimately, we want people who are at least trying to be benevolent. How would you think about, just spitballing the idea, if I were to come to you and be like, I'm starting a cyber weapons company? I guess some of it is like, are you only going to sell to Five Eyes? Because I really liked what MIT put out recently saying,

They were very clear. They said, you know, MIT is an institution and that institution is an American institution. And so being very clear about that, I thought was totally the right move for MIT. And, you know, I think that YC needs to be a similar, you know, an institution of similar character. I like that. What do you wish founders knew about sales coming in? Oh, how hard it is. And I mean,

Like it or not, the ideal founder is someone who has lived like 20 lifetimes and has the skills of 20 people.

And the thing is, you know, you can't get that. And so probably the first mini-conference we have when we welcome the batch in is the sales mini-conference. And essentially it is: don't run away from the no. Spencer Skates of Amplitude has this great analogy that he told, you know,

some companies when he came by to speak recently that I've been thinking a lot about, which is sales is about, you know, having 100 boxes in front of you and maybe five or six of those boxes has a gold nugget in them. And if you haven't done sales before, you think...

I really, I'm going to gingerly, in a very gingerly way, open that first box and hope, hope, hope that, you know, I have a gold nugget. And then, you know, I don't, I almost don't want to know that there isn't a gold nugget in there. Like, I'm so afraid of rejection. It's sort of remarkable how often high school and family and, you know,

And the 10,000 hours of human training people get from their childhoods comes up in Paul Graham's essays. I always think about that because I think that most people's backgrounds just don't prepare them for sales. It's a very unnatural thing to do sales. But then the sooner that you acquire those skills, the more free you become.

And what Spencer says about those 100 boxes is, instead of being incredibly afraid of, you know, getting an F (you know, nothing's going to happen to you), just flip open all 100 boxes immediately. And then you should aggressively try to get to a no. You'd rather get a no so you can spend less time on that lead and you can get on to the next one. I mean, I think that that's like a very interesting example of

mindset shift that you can read about, but it takes a village: you sort of need to be around lots and lots of people for whom that is true, that has been true. And I think that

Maybe that's actually one of the reasons why YC startups are much more successful. Other people give as much money or, as you said, venture capital VC firms tend to give a lot more money. There are clones of YC right now that give twice as much money, for instance, but I don't think that they're going to see this level of success because...

They're not going to have as earnest people who become as formidable around you. Like it's actually a process. It's so interesting to me because as you're saying that there's something that strikes me about the simplicity of what you're doing. And then also like Berkshire Hathaway, you know, everybody's tried to replicate Berkshire Hathaway, but they can't. Yeah.

Because they can't maintain the simplicity, they can't maintain the focus, they can't do the secret sauce, which obviously has a lot to do with Charlie Munger and Warren Buffett.

And with you guys, it has a lot to do with the founders that you attract and you can bring together. But you have billions of dollars effectively trying to replicate it. Nobody's able to do that. I think that that's really interesting. And it's not like you're doing something that's super complicated. It doesn't sound like it unless I'm missing something. It's a very simple sort of process to bring the people together. And obviously there's filtering and you guys are really good at doing that. I mean, what my hope is, I feel like

When Paul and Jessica created YC, I went through the program myself in 2008, and I came out transformed. And then that's very explicitly what I want to happen for people who go through the batch today. It isn't just like show up to a bunch of dinners and network with some people who happen to be... It's much deeper than that. I want people to come in maybe with like...

you know, the default worldview. And then I want them to come out with a very radically different worldview. I want someone who is much more earnest, someone who is not necessarily trying to sort of, like, hack the hack. And I think this mirrors what you were saying from, you know, what, um,

you know, rest in peace, Charlie Munger talked about and what Warren Buffett talks about: all of these things are, in the short term, popularity contests, but in the end all that matters is the weighing machine. So you can raise your Series A, you can throw amazing parties, and

TechCrunch can write about you, all these Twitter anons can fete you as the next greatest thing, and you could get hundreds of thousands of followers on X or whatever. But at the end of the day, you look down: did you create something of great value? Like, did you, with your hands, assemble people and capital and create something that, when all is said and done,

solve some real problem, put people together, is there real enterprise value? And that's the weighing machine. And the way that YC makes money, the way that the founders make money, it's all aligned at that point. Yeah, there's a way to hack the hack. And I don't really know what the end game is on the other stuff. It's just very short term.

Whereas, you know, on a 5, 10, 15 year basis, like if you are nose to the grindstone, earnestly working on the thing, you know, you will succeed. Like I think that that's what Paul Graham's essay about being a cockroach actually is. And that's why...

25% of the people who reach some form of product market fit at YC do it in year five or later. It's like they don't quit year one, they don't quit year two. They are learning and growing. I have one other really crazy stat that I'm thinking about all the time right now. There's a VC, actually — his name is Ali Tamaseb, he works at Data Collective. He wrote a book called Super Founders. And I get this email from him out of the blue. He says,

Did you know that about 40% of the unicorns from the last 10 years in the world were started by multi-time serial founders? I was like, okay, that's a cool stat. Like, makes sense. Like, multi-time founders are, you know, they know a lot more. They have networks. They have access to capital. Like, that's not a surprising stat. You know, if anything, it's a little surprising that it's only 40%. Like, you would have guessed maybe that was 80%.

The thing he said after that really shocked me. He said, did you know that of those 40%, 60% of those people, the people who created unicorns the last 10 years, are YC alumni. Oh, wow. So I'm like, that's crazy. I'm really glad that YC exists now because even if YC today is basically a thing that is for first-timers.

We do have second timers apply, we do accept them, but we primarily think of the half a million dollars. It really is for people who are starting out. And it's kind of hilarious. Like I have no product right now for people who are, for my YC alums. And maybe that's okay. That's our gift to the rest of Sand Hill Road because they're the ones who are gonna be the fund returners for all of the rest of Sand Hill Road.

Would you say, in terms of personal characteristics, it sounded like determination was definitely one of the most important, outside of the company or venture. What are the other personal skills or behaviors or characteristics that people have that you would think correlate

to not only the successful first time, but second, third, fourth? Yeah. I mean, the number one thing that I want that comes to mind for me is, I mean, maybe it's even surprising because that's not a word that you might associate with Silicon Valley founders. I think of the word earnest.

So what does earnest mean? Like, incredibly sincere. I think basically what you see is what you get — you're not trying to be something else. It's like authentic, but, you know, even humble in that respect, right?

Like, I'm trying to do this thing. The opposite — I mean, and it's surprising because, you know, I don't know if people associate that with Silicon Valley startups, but I see that in the founders that are the most successful and most durable. I see it in Brian Armstrong at Coinbase, which is fascinating because that's definitely not the trait that you would apply to most crypto founders. And, you know, I would use...

Sam Bankman-Fried is sort of the opposite of that. Like, you know, Brian Armstrong is an incredibly earnest founder who

literally read the Satoshi Nakamoto white paper and said, this is going to be the future. And let me work backwards from that future. When you talk to him, the reason why he wanted these things comes directly out of his own experience. I mean, at Airbnb, they were dealing with the financial systems of

myriad countries, and just sending money internationally from one country to another was totally fraught and totally not something that was accessible to normal people. Remittances are this crazy scam. It's insane how many fees people have to pay just to, like,

like send money home or do cross-border commerce, right? So this is something that was incredibly earnest of Brian Armstrong to do. He said, here's the thing that is broken in the world that he saw personally. I think he spent time in Buenos Aires in Argentina and he saw hyperinflation and he said, this is a technology that solves real problems that I have seen hurt people and I know that this technology can solve it.

And then after that, he's just, like, nose to the grindstone, working backwards from that thing that he wants to create in the world. And, you know, it's no surprise to me. I mean, there were many years in there that I think our whole community were looking at someone like Sam Bankman-Fried and just wondering, like,

what's going on over there? He speed-ran the sort of money-power-fame game to an extreme degree — so much so that he stole customer funds to do it. And that was the answer. That's anti-earnest. He was, by definition, a crook. He's in jail now. And my hope is that

people who look — you know, if you just look at Brian Armstrong versus SBF, I'm hoping that young people listening to this right now take that to heart. It's like, those are the things that actually win. You know, and going back to Buffett — I went to their, you know, sort of conclave in Omaha. Oh, you went to the Woodstock for Capitalists? Yeah, yeah. I mean, amazing. And

I think those guys are by definition extremely earnest. I don't think it's an affectation. I think it's like legit and serious. Like those guys did everything. What is it? It's their thing, right? It's work on high class problems with high class people. I mean, that's very, very simple. You just do it the right way, right? Yeah.

And so that's what I want. I think that if YC is the Schelling point for earnest, friendly, ambitious nerds — to steal something from... I have a friend on Twitter who goes by Visa, Visakan. And he has a whole book on it. I think it's called Friendly Ambitious Nerd, if you look it up. I mean...

I think that that's what YC by definition should be attracting. And, you know, Brian Armstrong is like the best, one of the best founders I've ever met and gotten the chance to work with and fund. And I think the world desperately needs more people like that where, you know, in the background, just like consistent, doing the right thing, trying to attract the right people, like, you know, chop wood, carry water, that's it.

He also took a big stand, before it became popular, that the workplace is a place for performance — you don't bring all of your politics and all that stuff in. But he did that at a time when it was courageous. Like, it was really — he was one of the first people out of the gate. And he took so much flak for that. Yeah.

And I remember- He's vindicated now. I know, but I remember reading like his thing and I was like, oh, this is great. But like, why are we even pointing this out? You know, like, and then he got, like, I read the stuff online. I was like, this is crazy. That's the media environment, right? I thought it was interesting anyway that he came out and did that. And I think where it relates to the earnestness is-

Only somebody who's really comfortable with themselves and like trying to do good in the world could really come out and take that stand at that point in time. Yeah, that's true leadership. Yeah. What's the biggest unexpected change you've seen in building companies in the AI world?

I think the biggest thing that is increasingly true, and we're seeing a lot of examples of it in the last year, is blitzscaling for AI might not be a thing. What's blitzscaling? So I think Reid Hoffman wrote a whole book about it. It was definitely true in the time of Uber. So, you know,

That was sort of a moment when interest rates were descending. And then these sort of increasingly international marketplaces, these sort of offline to online marketplaces like Uber in cars or delivery, or you could say Instacart, DoorDash, you could throw in Lyft,

there was sort of this whole wave of, you know, sort of the top startups were marketplace startups, but also in software to this idea that, you know, scale could be used as a bludgeon, that, you know, the network effects grow, you know, sort of exponentially, and then,

because you could have access to more and more capital, whoever raised more money would have won. And I feel like that was extremely true in that era, sort of the 2010s.

And then in the 2020s, especially by, you know, we're in the mid 2020s now, I think that we are seeing incredible revenue growth with way fewer people. And that's very remarkable. We have companies basically, you know, going from zero to $6 million in revenue in six months. We have companies going from zero to $12 million a year in revenue in 12 months, right? And with...

under a dozen people — like, usually five or six people. And so that's brand new. Like, this is the result of large language models and intelligence on tap. And so that's a big change. Like, you know, I think we are seeing companies that in the next year or two will get to $50 to $100 million a year in revenue, really with under, you know, maybe 10 people, maybe 15 people tops.

And so that was relatively rare. And my prediction would be this becomes quite common. And my hope is that's actually a really good thing. Like this is sort of the silver lining to, you know,

what has been really a decade of big tech, right? Like it's more and more centralized power. You know, what might happen here is that, you know, and what we're actively trying to do at YC is we hope that there, you know, are thousands of companies that each can make hundreds of millions to billions of dollars and give consumers an incredible amount of choice. And we hope that that will be very different than sort of this,

The opposite, I think, was increasingly true. Like we have fewer and fewer choices in operating systems, in, you know, web browsers and, you know, across the board, like just more and more concentration of power in tech. Two thoughts here. One, like,

How much do you think that cloud computing plays into that? Because now I don't have to buy $6 billion in infrastructure to be that five-person company. I can rent it based on demand. So that's enabled me not to compete on a capital basis. Yeah, that was true. That was even why Y Combinator in 2005 could exist. I remember working...

at a startup in 1999, 2000, or at like internet consulting firms. And these were like million dollar projects because you had to actually pay $100,000 or hundreds of thousands of dollars to Oracle. You had to pay hundreds of thousands of dollars to your colo to like rack real servers. So the cost of even starting a company was just huge. Yeah. I mean, I remember Jeff Bezos actually launched...

AWS at a YC startup school at Stanford campus in 2008, right when I was starting my first company. So I think, you know, cloud really opened it up and that, you know, that's part of the reason why startups could be successful. You know, you didn't need to raise five, $10 million just to rack your server, right?

And, you know, that's the other big shift. Like, I think in the past, it was very, very common to have, you know, Stanford MBAs or Harvard MBAs be the CEO. And then you would have to go get your hacker in a cage. You had to, you know, get your CTO. And, you know, there was sort of that split. And then now what we're seeing is, you know what, like,

the CEO of the majority of YC companies, they are technical. Is this the first revolution, like technological revolution, where the incumbents have a huge advantage? You know, I think they have an advantage, but it's not clear to me that they are conscious and aware and like at the wheel enough to take real advantage of it because they have too many people. Yeah.

Right. And then it's all, I mean, I think this is what founder mode is actually about. So last year we had a conference with Brian Chesky. We invited our top YC alums there. We brought Paul and Jessica back from England. And we had this one talk that wasn't even on the agenda, but I managed to text Brian.

Brian Chesky of Airbnb, and I got him to come and speak very openly and honestly in front of a crowd of about 200 of our absolute top alumni founders. And he spoke very eloquently and in a raw way about how your company ends up not quite being your own unless you are very explicit. Like, you know, I...

this is actually my company. I am actually going to have a hand and a role to play in all the different parts of this company. I'm not going to follow

basically the classic advice for management, which is: hire the best people you possibly can and then give them as much rope as you possibly can, and then somehow that's going to result in good outcomes. And then I think in practice — and this is sort of the reaction that is turning out to create a lot of value across our community, certainly — I think the memes are out there and it's actually changing the way people are running businesses.

It's sort of a shade of what you were saying earlier with Brian Armstrong. You can sit back and allow your executives to sort of run amok. And if the founder and the CEO does not exercise agency, then it's actually a political game.

And then you have sort of fiefdoms that are fighting it out with one another. And the leader is not there. Then you enter the situation where neither the leader nor the executives have power or control or agency. And then you have everyone's disempowered. Everyone is making the wrong choice.

You know, retention is down. You're wasting money. You have lots and lots of people who are sort of working either against each other or not working at all. And that's, you know, I think a pretty crazy dysfunction that took hold across arguably every Silicon Valley company, period. And it still holds at quite a few of those companies, actually. Though I think people are aware now

that that's not the way to run your company. Are the bigger companies sort of like shaping up? The way that I think about this analogy is sort of like if...

I'm the young skinny kid and I'm competing against the fat bloated company. I want to run upstairs. It's going to suck for me, but it's going to suck way more for them. Right. I think this is maybe a function of, you know, blitz scaling and using capital as a bludgeon like gone wrong. You know, you can look at, you know, almost any of these companies and

They probably hired way too many people, and at some point they were viewing smart people as, you know, maybe a hoarded resource — like if you were playing some sort of adversarial, you know, StarCraft and you didn't want... You know, the ironic thing is they themselves were not using the resources properly either, right? They just didn't want somebody else to have them. Exactly.

I guess it felt like a little bit of a prisoner's dilemma because I think the result is that, you know, tech progress itself decelerated. You have like the smartest people of a generation basically retired in place, working at places that, you know, the world is actually full of problems. Like why are people sort of retired in place?

pulling down insane, by average American standards, absolutely insane salaries to build software that doesn't change, doesn't get better. I mean, sometimes I sit there and I run into a bug, whether it's a Google product or an Apple product or Facebook or whatever. I'm like, this is an obvious bug.

And I know that there are teams out there, there are people getting paid millions of dollars a year to make some of the worst software. And it will never get fixed because there's no way, like, you know, people don't care. No one's paying attention. Yeah, that's just one symptom out of a great many that is, you know, the result of...

I don't know, basically treating people like, you know, hoarded resources instead of like, you know, the world is full of problems. Let's go solve those things. When it comes to AI, the raw inputs, I guess if you think about it that way, are sort of the LLM. Then you have power, you sort of have compute, you have data. Where do you think incumbents have an advantage and where do you think startups can...

successfully compete. Yeah, I mean, we had a little bit of a scare, I think, last year with AI regulation that was potentially premature. So, you know, there was sort of a moment maybe a year or two ago, and you sort of see it in the shades of it did make it into, say, Biden's EO, these sort of

past a certain number of mathematical operations, like, that's banned — or not banned, but we require all of this extra regulation. You have to report to the state; like, you'd better get a license.

That felt like the early versions of potentially regulatory capture, where they wanted to restrict open source, they wanted to restrict the number of different players. Sitting here a year after a lot of those attempts,

I feel pretty good because it feels like there are five, maybe six labs, all of whom are competing in a fair market trying to deliver models that, you know, honestly, any startup, anyone, you know, any of us could just...

you know, pick and choose. And, you know, there's no, um, monopoly danger. There's no, uh, you know, crazy pricing power that one person, one entity, uh, wields over the whole market. And so I think that that's actually really, really good. Um, I think it's a much fairer playing field today. And then I think it's interesting because it's an interesting moment. I think that, um,

Basically, there's a new Google-style oligopoly that's emerging around who provides the AI models. But because it probably won't be a monopoly, that's probably the best thing for the consumer and for actually every citizen of the world. Because you're going to have choice.

Let's go deeper on the regulation and then come back to sort of competition. How would you regulate AI or how do you think it should be regulated or do you think it should be regulated? It's a great question. I guess there are a bunch of different models that I could see happening. You know, I think

What's emerging for me is that the two things that I think the first wave of people who are really worried about AI safety, not to be flippant, but my concern is that they basically watch Terminator 2. And I'm like, I like that movie too. But right now, there's sort of that moment in the movie where they say suddenly the AI becomes...

self-aware and it becomes, you know, it takes agency, right? And I think the funny thing, at least as of today, you know, these systems are, it's just matrix math and there is no agency yet. Like there's basically, they're equivalent to incredibly smart toasters, right?

And some people are actually kind of disappointed in that. And personally, I'm very relieved. And I hope it stays that way. Because...

that means that there's still going to be a clear role for humans in the coming decades. And I think it takes the form of two very important things. One is agency. I mean, people often ask like, what should we be teaching our kids? And the ironic thing is we send them to a school system that is not designed for agency. It is literally designed to take agency away from our children.

And maybe that's a bad thing, right? Like we should be trying to find ways to give our children as much agency as possible. That's why I'm actually personally pretty pro screens and pro Minecraft and Roblox and, you know, giving children like this sort of playground where they can exercise their own agency. Have you tried Synthesis Tutor? Oh, yeah. Yeah, I'm a small personal investor in them. And

I think that we're just scratching the surface on how education will actually change, but that's a great example.

Synthesis is designed around trying to help people have, help children actively be in these games that increase instead of decrease agency. And it's crazy. So it teaches the kids math. And my understanding just from reading a little bit is El Salvador just replaced the K through five math with Synthesis Tutor and the results are astounding. Incredible. Yeah, it's way better. I mean, the kids get involved and they're obviously invested in it. Yeah.

The regulation question is really interesting, too, because it begs the question of it's a worldwide industry. And so regulating something in one country, be it the United States or another country, doesn't change what people can do in other countries. And yet you're competing on this global level. Yeah, I think the biggest question around it is, of course, I mean, the existential fear is like, where are all the jobs going to go?

And then my hope is that it's actually two things. One is like I think that robotics will play a big key role here where I think that if we can actually provide robots to people that do real work for people, you know,

that will actually change people's sort of standards of living in fairly real ways. So I think universal basic robot is relatively important. You know, I think some of the studies coming back about UBI — universal basic income, where you just give money to people — it's just not really resulting in

a different outcome. I think they've never read a psychology textbook. I mean, just going away from the economics of it, people need to feel like they're part of something larger than themselves. Yeah. And if they don't feel like they're part of something larger — like they're contributing to something, they're part of a team, they're bigger than what they are as a person — then it leads to all these problems. Yeah, exactly. And then, you know, I think that

we really need to actually give everyone on the planet some real reason why this stuff is actually good for them. I think if there is only sort of a realignment without a material increase in people's day-to-day livelihoods and their quality of life, maybe we're doing something wrong, actually. And

left to its own devices. Like, it's possible. So I don't know what the specific things are, but I think that that's what it would look like. If regulation were to come into play or there was some sort of realignment in reaction to the nature of work changing, that would be the outcome that, you know...

the majority of people, if not all people, like see the benefit in some sort of direct way. And if we don't do that, then there will be unrest. I think that that's one of the criteria. I don't have the answer, but I think that's sort of one of the things I'd be on the lookout for.


At what point do you think the models start replacing the humans in terms of developing the models? So, like, at what point are the models doing the work of the humans at OpenAI right now?

And they're actually better than the humans at improving the model. Yeah, we're not there yet. So there's some evidence that synthetic data is working. And so some people believe that synthetic data is where the models are like sort of self-bootstrapping. So just to explain to people, synthetic data is when the model creates data that it trains itself on? That's right. And so...

I guess the other really big shift is actually test-time compute. Like, literally, o1 pro is this thing that you can pay $200 a month for. And it actually just spends more time at the sort of query level. It might come back, you know, five minutes, 10 minutes later. But it will be much more correct than sort of the, you know, predict-next-token version that you might get out of, you know, standard ChatGPT. Yeah.

From what I can tell, that's where a lot of the wilder things might come out.

You know, Level 4 AGI, as defined by OpenAI, is Innovators. So we have, you know, lots of startups, both YC and not YC, that are trying to test that out right now. They're trying to apply the latest reasoning models from OpenAI that are about to come out, you know, like o3 and o3-mini. And they're trying to apply them to

actual, you know, scientific and engineering use cases. So, you know, there's a cancer vaccine biotech company called Helix that did YC a great many years ago. But what they've figured out is they can actually hook up some of these models to actual wet lab tests. And, you know,

That's something that I'd be keeping track of over the next couple of years. If only by applying dollars to energy that then goes into these models, will there be real breakthroughs in biological sciences, like being able to do new processes or come to a deeper understanding of

Whether it's cancer or cancer treatment or anything in biotech, the first experiments of that sort, that's happening in the next year. Even in computer-aided design and manufacturing, there's a YC company called Camphor that is trying to apply

They actually were one of the winners of the recent YC o1 hackathon we hosted with OpenAI. And their winning entry was literally hooking up o1 to airfoil design — being able to increase the sort of lift ratio just by applying, you know, "spend more time thinking about this" to the problem,

and it's able to create a better and better airfoil given a certain number of constraints.

So, you know, obviously these are like relatively early in toy examples, but I think it's a real sort of optimistic point around how do we increase the standard of living and push out like sort of the light cone of all human knowledge, right? Like that is like a fundamental good for AI. You know, between that and...

the inroads it might make in education. These are like some real white pill things that I think are going to happen over the next 10 years. And these are the ways that AI becomes not sort of Terminator 2, but instead like

sort of the age of intelligence, as Sam pointed out in a recent essay. I think that if we can create abundance, if we can increase the amount of knowledge and know-how and science and technology in the world that solves real problems, and I don't think it's going to happen on its own. Each of these examples are...

frankly, a YC startup, like right there on the edge, trying to take these models and then apply them to domains that, you know, it's kind of like,

Google probably could have done what Airbnb did, but it didn't because Google's Google, right? And so in the same way, I think that whether it's OpenAI or Anthropic or Meta's lab or DeepSeek or some other lab that wins, I think that we're gonna have a bunch of different labs and they're gonna serve a certain role like pushing forward human knowledge that way. And then my white pill version of what the world I wanna live in is one where, you know,

or really any kid with agency can get access to a world-class education, can get all the way to the edge of, you know, what humans know about and are able to do or are able to, like, sort of affect. And then, you know, sort of empowered by these agents, empowered by ChatGPT or Perplexity or, you know, whatever agent — you know, it's going to look like Her from the movie, right? Like, we're going to have these

you know, basically super intelligent entities that we talk to, I'm hoping that they don't have that much agency, you know? I'm hoping that actually they are just like sort of these inert entities that are your helpers. And if that's true, like that's actually a great scenario to be in. You know, that's the future I want to be in. Like I don't want to be, I don't think anyone wants to be sort of,

you know, to borrow a term from Venkatesh Rao — like, I don't think any of us want to be below the API line of, you know, these AIs, right? And I think that really passes through agency.

The minute a robot can do laundry, I'll be the first customer. Yeah, there are YC companies and many startups out there that are actively trying to build that right now. My intuition is that immediate progress could come from just ingesting all of the academic papers that have been done on a certain topic and either disproving ones that people think are still correct — and thus cutting off

research on top of something that's not likely to lead to anything — or making connections, because nobody can read all these papers and make the connections and maybe make the next leap, right? Like not the quantum leap, but the next logical step. Who's doing that? I mean, that's inevitable. And then someone listening here might want to do it, in which case they should apply to YC. And maybe we should do a joint request for startups for this next YC batch. I like it. I want equity there. All right. But yeah,

It's also interesting because then you think about that and you're like, if I'm a government and I'm funding research, that research should all be public because I want people to be able to take it, ingest it, and make connections that we haven't made yet. And it seems like a lot of that research these days is under lock and key. So you get this data advantage in the LLMs where some LLMs buy access or steal access or whatever, have access to it, and then some don't.

How do you think about that from a data-access, LLM-quality point of view? It's a good question. I mean, yeah, it's a bit of a gray area these days. I mean, I'm not all the way in — I don't actually run an AI lab. You run the meta AI lab. Yeah, that's right. Not the Meta AI lab — not Meta the company, but meta as in all of them. Yeah, that's a good question. I guess...

The funniest thing — my main response to all of that, around, like, provenance of the data itself, is at some point it feels like it actually is fair use, though. I mean, that's still working its way through the case law. Yeah. Well, here's another interesting twist on this, then. So the airfoil — they designed this new airfoil. Is that patentable? I mean...

At least in terms of like generated images, my understanding is generated images are not copyrightable. But if AI generates not only the science behind it, like we're at a point where, you know, maybe in the next couple of years, AI is doing more science than we've done. Like, is that going to be copyrightable or patentable or sort of like withheld? Or is that public access, public knowledge? Well, my intuition would say people are just going to take the outputs of

you know, these AI systems and, as far as I know, you can submit a patent and there's not a checkbox yet that says, did you use AI as a part of it? Here's another startup idea for anybody listening that we both want in on. Why wouldn't somebody just read all the patent filings in the U.S. and be like, make the next logical step for me and patent that? Like, attempt to just patent it. A one-person company could literally ingest the U.S. patent database and be like, okay.

Here's the innovation in this. What's the next quantum leap or even the next step that's patentable? Okay, automatically file and...

You're funded. Amen. I got two ideas there. I love those. I don't know. I think these are all totally open and fair game. And then I guess maybe going back to regulation, that's one of the stranger things that is happening right now. One of the pieces of discourse out there during the AI safety debates, like in the last year, for instance, is about bioterror.

And the wild thing is basically possessing instruments of creating bioweapons is already illegal. So do you really need special laws for scenarios that are already covered by laws that exist?

I mean, that's just like my sort of rhetorical question back when people are really, really worried about bioterror. You know, I think there's this funny example where AI safety think tanks were in Congress and they were sort of, you know, going to ChatGPT and typing in sort of a doomsday example. And it spits out this, you know, kind of like an instruction manual on like, well, you need to do this. You'd have to acquire this. You know, here's this thing you would do in the lab, right?

And, you know, of course, like those steps are illegal. And then I think a cooler head prevailed in that, you know,

The rebuttal was someone next went to Google, entered the same thing, and got exactly the same response. So, yes, I've seen Terminator 2 as well. Am I worried about it? My p(doom) is 1%. I'm not totally unworried, right? It would be a mistake to...

dismiss all worries. It would also potentially be worse to prematurely optimize

and basically make a bunch of worthless laws that slow down the rate of progress and prevent things like better cancer vaccines or better airfoils or, you know, frankly, like, you know, nuclear fusion or clean energy or better solar panels or engineering and manufacturing methods that are better than what we have today. I mean,

there's so many things that technology could do. Like, why are we going to stand in the way of it until we have a very clear sense that that is actually what we need to do? What does scare you about AI? I mean, it's brand new, right? So the risk is always there. You know, it's so funny, though. I mean, I'm not unafraid. On the other hand, like...

you know, this principle of you can just do things still applies to computers, right? Like, if the system becomes so onerous, like...

maybe you would go and like, let's shut down the power systems. Let's shut down the data centers themselves. Like, why wouldn't people try to do that? Right. And they might do that. And, you know, I think that people try to do that every day now. Right. Before AI. Right. If it became that bad, like, you know, I'm sure there would be some sort of human solution to try to fix this. But, you know,

Just because I read about the Butlerian Jihad in the Dune series doesn't mean that I need to live like that's what's going to happen. So you don't believe there's going to be one winner that dominates, like OpenAI or Anthropic or... It might still happen, right? You know, I think that there are lots of reasons why it won't happen right now, but, you know, who's to say? Everything is moving so quickly. Like, I think that, you know, these questions are the right questions to ask. I just don't have the answers to them. Like...

I know, but you're the person to ask. It's like asking, I guess, will Windows or Mac win? We're just literally living through that time where very, very smart people are fighting over the marbles right now. Totally. And then to me, though, working backwards, the best scenario is actually one where we have lots of marble vendors and you get choice and nobody has...

sort of too much control or cornering of all the resources. What's your read on Facebook?

almost doing a public good here and spending, you know, I think it's over 50 billion at this point, and just releasing everything open source. Yeah, I think that, you know, what Zuck and Ahmad and the team over there are doing is frankly God's work. I think it's great that they're doing what they're doing and I hope they continue. What would you guess is the strategy behind that? It's kind of funny, because my critique on Meta would be, you know, they very much

put it in everyone's faces, right? Like you can't use Facebook or Instagram, or even WhatsApp, without seeing, hey, Meta has AI now. But the funniest thing is, I'm very surprised that they don't think about sort of the basic first-principles

product part of it. Like I went to the Facebook Blue app recently and I was going to Vietnam, and I just wanted to say, okay, Meta AI, you're so smart, tell me my friends in Vietnam. And it didn't know anything about me. I'm like, this is some basic RAG stuff. Like, I get it, you're already spending billions of dollars on training these things. How about, you know, spending a little bit of money on the most basic type of, you know,

retrieval augmented generation for me and my... They're just sort of sprinkling it in and it's a little bit of a checkbox. So I'm a little bit mystified, right? If they were very unified about it, I would really get it, right? Clearly, the way that we're going to interface with computers is totally going to change. What Anthropic is doing with computer use is...

You know, I think that, you know, what I've heard is basically every major lab is probably going to need to release something like that, whether it's an API the way Anthropic has or literally built into this, you know,

the runtime that you run on your computer. There's going to be a layer of intelligence. You can sort of see the shape of the very, very dumb version of it from Apple and Apple Intelligence. It's sort of sprinkling intelligence into notifications and things like that. But I think it's virtually guaranteed that the way we interface with computers will totally change in the next few years, given the rate of improvement in the models.

As of today, all the smartest things that you might want to do are still actually things that you have to go to the cloud for, and then that opens a whole can of worms. But there's some evidence that in the frontier research of the best AI labs,

It's pretty clear that there's sort of parent models and child models. And so there's distillation happening from the frontier, very largest models with the most data and the most intelligence down into smarter and smarter tiny models. There's a claim this morning that a 1.5 billion parameter model

I think got 84% on the AIME math test. Oh, wow. Which is like, 1.5 billion parameters is so small that it could fit on anyone's phone. Yeah. So, yeah.

And that was like DeepSeek R1 just got released this morning. So it hasn't been verified yet, but I think it's super interesting. Like we are literally day to day, week to week, learning more that these intelligent models are going to be on our desktops, in our phones. And, you know, we're right at that moment.
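The parent-to-child distillation Garry describes is, in its classic form, a small student model trained to match a large teacher's temperature-softened output distribution rather than just its top answer. Here's a minimal sketch of that loss; the logits, classes, and temperature are illustrative, not any lab's actual recipe.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T gives softer targets."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions.
    Minimizing this pushes the small model toward the big model's
    full output distribution, not just its argmax."""
    p = softmax(teacher_logits, T)    # soft targets from the parent model
    q = softmax(student_logits, T)
    return float(np.sum(p * np.log(p / q)))

teacher = [4.0, 1.0, 0.2]             # parent model, confident in class 0
aligned = [3.8, 1.1, 0.1]             # student close to the teacher
diverged = [0.1, 3.9, 1.0]            # student favoring the wrong class
print(distillation_loss(teacher, aligned))   # small
print(distillation_loss(teacher, diverged))  # much larger
```

In a real training loop this term is averaged over batches of examples and combined with an ordinary hard-label loss; the sketch only shows the shape of the objective.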

So is the model better? Is the LLM better? What makes that model so successful with so few parameters? Oh, I don't know. I haven't tried it yet. But some of it is you can be...

very specific about what parts of the domain you keep. Okay. And then, you know, I guess math might be one of those things that just doesn't require, you know, 1.5 trillion parameters. It takes 1.5 billion to do an 84% job of it, which is pretty wild. Yeah.
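As a rough sanity check on "fits on anyone's phone": a model's raw weight footprint is just parameter count times bytes per parameter, so a 1.5B-parameter model is small at any common precision. This is a back-of-envelope sketch that ignores activation memory, KV cache, and runtime overhead.

```python
# Weight-only footprint: parameters × bytes per parameter.
def model_size_gb(n_params: float, bytes_per_param: float) -> float:
    return n_params * bytes_per_param / 1e9

n = 1.5e9  # the 1.5B-parameter model discussed above
for precision, nbytes in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{precision}: {model_size_gb(n, nbytes):.2f} GB")
# fp16: 3.00 GB, int8: 1.50 GB, int4: 0.75 GB
```

Even at full fp16 that is comfortably within a modern phone's memory, which is why on-device inference for models this size is plausible.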

I mean, that's another weird thing of AI regulation. You know, I think Biden, for instance, his last EO was sort of this export ban. And DeepSeek is a Chinese company releasing these models open source. And I believe that they only have access to last generation NVIDIA chips.

And so, you know, some of it is like, why are we doing these measures that may not actually even matter? It's interesting, right? Because you think of constraint being one of the key contributors to innovation. By limiting them, you also maybe enable them to be better because now they have to work around these constraints or presumably have to work around them. I doubt they're actually sort of working around them. That sounds right. I mean, I think the awkward thing about...

AI regulation is there's something like $4 billion of money sloshing around think tanks and AI safety organizations. And, you know, someone was telling me recently, if you looked on LinkedIn at some of the people in this sort of giant NGO morass of think tanks (sorry if people are part of that and getting mad at me right now, hearing this), but, you know, there's

a lot of people who went from, you know, bioterror safety experts to, one entry right above that, in the last even six or nine months, they've become AI bioterror safety experts. And I'm not saying that's a bad thing, but it's just...

you know, very telling, right? Like anytime you have billions of dollars going into, you know, a thing, maybe prematurely, you know, people have to justify what they're doing day to day. And I get it. So many rent seekers. I want to foster an environment of more competition. Yeah.

within sort of like general safety constraints. But I don't think we're pushing up against those safety constraints to the point where it would be concerning. But we also operate in a worldwide environment where other people might not think the same way about safety that we do. And then it's almost irrelevant what we think in a world where other people aren't thinking that way and it can be used against us.

I think we're going into a very interesting moment right now with, you know, the AI czar is Sriram Krishnan, who, you know, used to be a general partner at Andreessen Horowitz. And I think that that's a very, very good thing. Like we want people who have the networks into people who have built things, who have built things themselves, you know, as close to that as possible. And, you know, I think that...

It is actually a real concern that the space is moving so quickly that if it takes legislation two years to make it through, that might be too slow. And so it's sort of even more important that the people who are close to the president and the people who are in the executive branch, at least in the United States, they should be able to respond quickly, whether it's through an EO or other means.

I don't know what it's like in the States, but in Canada, I was looking at the Senate the other day and I was just trying to, like, is there anybody under like 60 in the Senate kind of thing? Does anybody understand technology? Or did they all grow up in the world where, you know, Google became a thing after they were already adults? Yeah.

And it strikes me that there's a difference, you know, the pace of technology improvement versus the pace of law, but also, or regulation, but also the people that are enacting those laws don't tend to, they have a different pace as well, right? Like our kids are in a different world. Like my kids don't know what a world without AI looks like.

Neither do yours. Yeah. But we do, you know, because we're similar age. And then, you know, our parents have this other thing where it's like, well, we used to have landline phones and like all of these other things. And it strikes me that those people should maybe not be regulating, you know, AI. That sounds right. I mean, I think it's more profound now than ever before. I mean, the other thing that's really wild to think about is, you know,

What comes to mind is that meme on the internet where there's the guy at this dance. Everyone else is dancing and they're in the corner and it's like, they don't know. If you go almost anywhere in the world,

You know, people maybe have heard of ChatGPT. They definitely haven't heard of Anthropic or Claude. Yeah. You know, it just hasn't touched their lives yet. And then meanwhile, like the first thing they do is they look at their smartphone and they're using Google and, you know, they're addicted to TikTok and things like that. Do you think we get to a point where, and this is very like Ender's Game, if I remember correctly, in the movie where, you know,

You pull up an article on a major news site, I pull up an article on a major news site. And at the base, it's sort of like the same article, but now it's catered to you and catered to me based on our political leanings or what we've clicked on or what we watched before. Well, my hope is that there's such a flowering of choice that...

you know, it's going to be your choice, actually. I mean, the difficulty is like, well, then you have a filter bubble, but that exists today with social media today. Okay, so here's a white pill that I don't know if it's going to happen, but I hope it happens.

You know, one of the reasons why it's so opaque today is literally that X, or Twitter before it was called X, had, you know, thousands of people working at that place. And, yeah.

You needed thousands of people, maybe, right? Or I guess the tricky thing is, Elon came in and quickly axed like 80 or 90% of the people, and it turns out you didn't need 80 or 90% of the people. So that's like another form of founder mode taking hold. But like it or not, I can't go into...

uh, Twitter today and tool around with my For You. Like, my For You is written for me, right? It's on some server someplace, and there's a whole infrastructure thing. Yeah. But I don't control it. But it's conceivable, um, you know, today with codegen, you know, today engineers are basically, you know, writing code about five or ten x faster than they would before. Um,

And that sort of capability is only getting faster and better. It's sort of conceivable that you should be able to just write your own algorithm and maybe you'll be able to run it on your own and you'll want choice. And so the kind of regulation that I would hope for is actually open systems, right? I would want to actually write my own version of that. I don't want...

The best version of that is actually, like, I maybe want to see my For You algo very plainly. And then I want to be able to see if I can convert that into the one that I want, or I can choose from 20 different ones. Two ideas here. As you're mentioning that one, like, your list could...

be your default. Like, I want this list to be... But the other one is, like, maybe there's just 20 parameters and you get to control those parameters. And, you know, you could consider political as one parameter, from left to right. Right. You could be, like, happy-sad. Like, you could sort of filter in that way. I know, that'd be super interesting. So, I mean, if regulation is coming, like, give me open systems and open choice. And that's...

you know, sort of the path towards liberty and, you know, sort of human flourishing. And then the opposite is clearly what's been happening, right? Like Apple, you know, closing off the iMessage protocol so that, you know, it's literally a moat. Like, oh no, like that person has an Android. So they're going to turn our really cool blue chat into a green chat. We don't talk to those people. Yeah, right. I know, right? I mean, that's just a pure example of...

Apple, even today, still... they're opening it up a little bit more with RCS, but those are actually in reaction to the work of Jonathan Kanter and the DOJ. So there are efforts out there that are very much worth our attention around reining in big tech and reining in the ways in which these sort of subtle product decisions

only make money for big tech and they reduce choice and, you know, ultimately reduce liberty. It'd be super interesting to be able to have an advantage if you're big tech and you're a company and you come up with this, but have that advantage erode automatically over time in the sense that you might have a 12-month lead, but what you're really trying to do is foster continuous innovation. Like if you're a government and you're trying to regulate, it's like,

I don't want to give you a golden ticket. I want you to have to earn it and you can't be complacent. So you have to earn it every day. And so, yeah, maybe you have like a two-year window on this blue bubbles and then you have to open it up. But now you got to come up with the next thing. You got to push forward instead of just coasting. Like Apple really hasn't come up with a ton lately. Yeah. And then I think the reason why it's so broken is actually that government ultimately is, you know,

very manipulatable by money. And that's sort of the world we live in. Do you think that'll be different under Trump? I don't tend to get into politics here, but so many people in the administration are already incredibly wealthy. Oh, yeah. That's the hope. I mean, we're friends with a great many people who are in the administration. We're very hopeful and we're wishing them... We're hoping that really great things come back. And

You know, in full transparency, like I think I was too naive and didn't understand how anything worked in 2016. That's not what I was saying in 2016. I was fully, you know, an NPC in the system. But, you know, also that being said, I'm a San Francisco Democrat, so I really have very, very little experience.

I have very little special knowledge about how the new administration is going to run, except that I really am rooting for them. I'm hoping that they are able to be successful and to make America truly great. I am 100%, even though I didn't vote for Trump, I am 110% down for making America truly awesome. What do you believe about AI that few people would agree with you on?

It might be that point that I just gave you. I think that a lot of people are hoping that the AI becomes self-aware or has agency. And from here, the kind of world we live in will be very different if somehow, literally, AI entities are given... Maybe the line is actually, will we have an AI CEO? Yeah.

Like, will we have a company that just like literally gives in to, you know, whatever the central entity says, like, that's what we're going to do. Every problem, you know, it's sort of the exact extreme opposite of founder mode. It's like AI mode. Like, will we live in a world in the future where, you know, corporations decide like, you know what, a human is messy and kind of dumb and doesn't have a trillion token context window. Yeah.

and won't be able to do what we wanted to do. So we would trust in AI and an LLM-based consciousness more than a human being. I'd be worried about that. I was thinking about this last night watching the football game, actually. And I was like, why are humans still calling plays? Yes, for coaching, but calling plays in the game

And AI, I feel like at this point, with like o1 Pro or something, it'd be ahead of where we are as humans. I'm wondering if teams should try that. That'd be super interesting. Oh, that's going to be the next level of Moneyball then. We'll just try it in preseason, right? Or even try it in a regular season game.

I don't know, but it strikes me that they would know who's on the field, who's moving slower than normal. Like, all these... a million more variables than we can even comprehend or compute, and historical data. You know, the last 16 weeks this team has played, you know, when you run to the right after they just subbed or something. Like, they can see these correlations that we would never pick up on. Not causation, but correlation. It'd be super fascinating. Yeah, I mean...

What's funny about it is I think in those sort of scenarios, you might just see a crazy speed up because of human effects. I mean, when you look at organizations and how they make decisions, so many of them, you know, there's sort of like a Straussian reading of them.

There's sort of like at the surface level, you're like, I want to do X. But like right below that is actually something that is not about X. You know, for a corporation, it has to be like we have a fiduciary duty to our shareholders and we need to maximize profit, for instance. And then right below that, you know, corporations or, you know, entities of any set of people, like they do all sorts of things not for reason X on the top. It's actually like, oh, actually...

The people who are really in power don't like that person or they rub them the wrong way. Or human. Yeah, exactly. These are extremely influenceable systems. Your idea might be best, but I'm going to disagree because it's your idea, not my idea. Right. And then I think that's why, in general, we really hate politics inside companies because...

it sort of works against the collective. Do you think we'd ever see a city with, like, an AI mayor first, before we even see an AI CEO?

You know, I guess like now that we're sitting here thinking about it, it's like sort of conceivable. But, you know, in sort of all of these cases, I would much rather there be a real human being. Kind of like a plane, right? Like we want a physical pilot, even though the plane is probably better off by itself. Yeah, that's right. And that might be what ends up happening. Like even if 90% of the time you're using the autopilot, like you always need a human in the loop. Yeah.

And, you know, I'd be curious if that turns out to be one of the things that society learns.

One of the crazier ideas I've been talking to people about that I feel like would be a fun sci-fi book would be just speculation playing out on how this interacts with nation states. China obviously is run by a central committee and arguably Xi Jinping. Seemingly, if you had ASI, you would only want the central committee to have it.

And so that might turn into like a very specific form of, you know, China might end up having one ASI that is totally centrally controlled and then everything else about it, you know, sort of comes out of that. And then you might end up with, I mean, controversially, like I think,

Often they're trying to be benevolent, right? Like if you spend time in China, it's incredibly clean. I'm sure there's all sorts of crazy stuff that happens that is quite unjust. I have no idea. It's not really even my place to argue one way or another what it's like to be in China. But that's an interesting idea. It's like...

That society probably, unless there's other changes there, you can sort of count on a single artificial super intelligence setting how everything works over there. I mean, probably internal to the Politburo itself, they're going to have to have all these discussions about what do we do with this ASI and who gets to, where does the agency, the ultimate agency of that nation come from?

Going back to something you said earlier, I think the ultimate combination, at least for right now, is human and machine intelligence working in concert, where machine intelligence might be the default and then the human opts out. And that's exercising judgment. It's like, no, we're not. And when you look at chess, that tends to be the case where the best players are using computers, but they know when, oh, there's something the computer can't see here or...

there's an opportunity that it just doesn't recognize. And I think it was Tyler Cowen who said that. He had a word for it, mixing the technologies. Fascinating. Yeah. And then, yeah, the question is like, well, how does America approach it? Like potentially it's much more laissez-faire. And then in that case, like my argument would be like the most American version of it is that like,

you and I have our own ASI and each citizen should be issued an ASI and be taught how to get the most out of it. And maybe it needs to be embodied with a robot. We should all be Superman in that sense. And that would be like the most empowering version of a society of free people created equal, right? And then there might be other versions and you're, I mean,

I'd be curious, like, you know, what's the European version of it? Maybe that version has, you know, all the check marks and like, oh, is, you know, every decision has to be, you know, was this AI assisted or not? And like, let's check the provenance on like, you know, how that AI was like trained. And I mean, I don't know, there are all these different, there's like,

a billion different ways all of these different governments are going to sort of approach this technology. What are the smartest people at the leading edge of AI talking about right now? I mean, you know, the hard part is, like, I spend most of my time not with those people. I spend most of my time with people who are commercializing it. So...

So the very, very smartest people are clearly the people who are in the AI labs, actually actively creating these models. But from the people who I know who are in those rooms, I mean, it sounds like test-time compute is really it. The reasoning models are sort of the thing that will really come to bear this year. Like, we're sort of understanding that right now.

For now, it sounds like pre-training might have hit some sort of scaling limit, the nature of which I don't understand yet. There's a lot of debate about it. Will there be new GPT-4o-style models that have more data or more compute? And seemingly, there are just rumors of

training runs gone awry, that basically the scaling laws may have petered out, but I don't know. So we have sort of, like, the LLM and the reasoning model, and they are different, correct? The way OpenAI talks about o1, they're sort of connected, but like different steps. Okay. And so we have progress there. Then we have progress with the data. And then we have progress with inference. Yep.

Well, we just don't have enough GPUs, really. I think what's funny is I'm still pretty bullish on NVIDIA, in that they more or less have the monopoly on the best price performance. So you think this is going to continue? Well, the demand for intelligence is going to... Trillions of dollars of investments in AI. Basically, I think you can live in two different worlds. One world says...

All of this is hype. We've seen AI hype before, like it's not going to pan out. And then I think the world that we're spending a lot of time in, like the world really wants intelligence

And then the scary version of this is like, yes, some of it actually is labor displacement, right? Like in the past, what tech would do is we'd be selling you hardware. We'd be selling you a computer on every desk. Like everyone needs a smartphone. You know, we're selling you Microsoft Office. We're selling you packaged software. We're selling you Oracle, SQL Server. Like, you know, we're selling...

SaaS apps like Salesforce, it's $10,000 per seat per year, that kind of thing. Or we're selling, classically Palantir was selling million dollar or $10 million ACV, very specific vertical apps. And so all of those things are selling software or hardware and that's like selling technology.

And so increasingly what we're starting to see is like, especially the bleeding edge is probably customer support and all of the things that you would use

for a call center. Like those are sort of the things that are already so well defined and specified. And there's a whole training process for people in, you know, usually overseas to do these jobs. And AI now is just coming in and like it's, you know, the,

speech to text and text to speech, those things are indistinguishable from human beings now. And you can train these things. The evals are good. The prompting is good. You know, going back to what we were saying earlier, like what we're seeing is like, you know, like it or not, it is actually replacing labor.

Has anybody created an AI call center from scratch and is now getting customers? Yes. I mean, I funded a company in this very current batch that is called Leaping AI. They are working with

Some of the biggest wine merchants in Germany, which is fascinating. So, I mean, that's another fascinating thing. Like, these things speak all human languages. They certainly speak all the top languages very, very well and are indistinguishable. And, you know, I think 80%

of the ordering volume for some of their customers is entirely no human in the loop. I would love to see government call centers go to this. Yeah, exactly. It would scale so much better. I was on hold for like three hours the other day for like a 15-minute question that I needed answered. And it's like, this could be... A, it could be done so much quicker by somebody who's not a human,

and probably more securely and reliably and more consistently, regardless of who's on the other end or how they're talking. How would you define AGI? I guess the funniest thing is Microsoft, I think, is defining it as when it gets its $100 billion back. But I am sort of skeptical of that, because I think...

basically only Elon Musk then would, you know, qualify as a human general intelligence, I think. Um,

The thing is, in a lot of domains, it feels like it's here, actually. I mean, can it have a conversation with someone and give incredibly good wine pairing recommendations, and have a perfectly fine, indistinguishable-from-a-real-human, or even better-than-human

sort of interaction, and also, like, take orders for very expensive wine and have that just work? Yes, like, that's happening right now. Yeah. So I think in a lot of domains... and this is sort of the year where maybe there's like five or 10% of things that are, you know, sort of hitting the Turing test and, you know, really satisfying that. But, you know, I think maybe this is a year where it goes from like 10 to 30%, and the year after that it doubles again. And...

The next few years are actually the golden age of building AI. Totally. I think I'm super optimistic, at least for the next five years, about the things we'll discover, the progress we'll make, the impact we'll have on humanity, and a lot of the things that plague us. I want to get into how you use AI a little bit. What do you know about prompting that most people miss?

I mean, I'm mainly a user. You know, I spend a lot of time with people who spend a lot of time in prompts. Probably the person I would most point people to is Jake Heller. So he's the founder of Casetext. He was one of the first people to get access to GPT-4. And we think of him at YC as the first man on the moon, in that he was the first to successfully commercialize GPT-4 in the legal space.

So what he said was that they had access to GPT-3.5, and it basically hallucinated too much to be used for actual legal work. Lawyers would see one wrong thing and say, oh, I can't trust this. GPT-4, he found, actually...

you know, with good evals, they could program the system in a way that it would actually work. And what he says he figured out was: if GPT-4 started hallucinating for them, they realized that they were doing too much work in one prompt.

They needed to take that thing that they asked GPT-4 to do and then break it up into smaller steps. And they found that they could get deterministic output from...

from GPT-4, like a human, if they broke it down into steps. Oh, interesting. And what he needed to do, I mean, it's sort of equivalent to Taylor's time-and-motion studies in factories. It feels like that's what he did for what a lawyer does.
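The "break it into smaller steps" idea is now usually called prompt chaining. A minimal sketch of it, in Python: everything here, the function names, the steps, and the `call_model` stub, is illustrative, not Casetext's actual pipeline, and a real system would swap the stub for an LLM API client.

```python
def call_model(prompt: str) -> str:
    """Placeholder for an LLM call; swap in a real API client here."""
    # Deterministic stub so the sketch runs without network access.
    return f"[model output for: {prompt[:40]}...]"

def extract_facts(passage: str) -> str:
    # Step 1: one small, focused task per prompt.
    return call_model(f"List every dated event in this passage:\n{passage}")

def order_events(facts: str) -> str:
    # Step 2: a separate prompt that only sorts, nothing else.
    return call_model(f"Sort these events chronologically:\n{facts}")

def summarize(ordered: str) -> str:
    # Step 3: final synthesis, again as its own narrow prompt.
    return call_model(f"Write a one-paragraph chronology from:\n{ordered}")

def build_chronology(passage: str) -> str:
    # Chaining: the output of each narrow step feeds the next one.
    return summarize(order_events(extract_facts(passage)))
```

The point of the decomposition is exactly what Heller describes: when a single prompt "breaks," you split that step into two smaller ones rather than making the prompt longer.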

Let's say you have to put together a chronology of what happened in a case. And he's a real-life lawyer, which is sort of unusually perfect for figuring out this prompting step. He realized that

he needed to look at what a real lawyer would do and literally replicate that, Taylor time-and-motion style, in the process and prompts and workflow. So for instance, doing this type of

summarization, he would have to go through and read all the materials. And this is why, apparently, lawyers have their many, many different colored little flags and highlighters and things like that. They get very good at doing a read-through, paragraph by paragraph, sentence by sentence, pulling out the things that are relevant, and then sort of

synthesizing it. And so early versions of Casetext, and a lot of it today, I think, is still just doing that. It's like: what is the specific thing that a human does? Break it down into the very specific steps that a real human would do. And then, basically, if it breaks, you're just asking that step to do too many things. So break it down into even smaller steps.

And somehow that worked. And basically, this is the blueprint that I think a lot of YC companies and AI vertical SaaS startups are following across the whole industry right now. They literally model out what a human would do in knowledge work, break it down into steps, and then have evaluations for each of those prompts.

And then as the models get better, because you have what we call the golden evals, basically you just run the golden evals against the newest model.

GPT-4o comes out, Claude 3.5 comes out, DeepSeek comes out. You have evals, which is basically a test set of prompt, context window, data, and output. And you can actually... What's funny is it's even fuzzy that way. You can even use LLMs in the evals themselves to score them and figure out...

you know, does it make sense? Can you give us an example of an eval? Like, make it tangible for people. Oh yeah, it's really straightforward. It's just a test case, right? So given this prompt and this data, evaluate the output to see if... and it usually maps directly to something that is

true or false, yes or no, something that is pretty clear. Let's say there's a deposition and someone makes a certain statement, right? You might have a prompt that is like: is what this person said

in conflict with any of the other witnesses? I don't know, I'm totally making this example up. This is the kind of thing that you can do at a very granular level. You might have thousands of these. And then that's how Jake Heller figured out he could create something that would basically do the work of hundreds of lawyers and paralegals. And it would take a day or an afternoon instead of
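A "golden eval" as described here is just a fixed test set of (prompt, data, expected answer) that you re-run every time a new model ships. Here is a minimal sketch; the cases, the `EvalCase` shape, and the keyword-matching stub model are all made up for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str    # the instruction, e.g. "Does this statement conflict...?"
    data: str      # the context-window payload (deposition excerpt, etc.)
    expected: str  # a clear-cut answer: "yes" or "no"

def run_evals(model: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Return the pass rate of `model` over the golden eval set."""
    passed = 0
    for case in cases:
        answer = model(f"{case.prompt}\n\n{case.data}").strip().lower()
        passed += (answer == case.expected)
    return passed / len(cases)

GOLDEN_EVALS = [
    EvalCase("Does the witness admit to being present? Answer yes or no.",
             "Q: Were you there? A: I was.", "yes"),
    EvalCase("Does the witness admit to being present? Answer yes or no.",
             "Q: Were you there? A: No, I was abroad.", "no"),
]

def stub_model(prompt: str) -> str:
    # Stand-in for a real LLM: crude keyword check so the sketch runs offline.
    return "no" if "No," in prompt else "yes"
```

When a new model comes out, you call `run_evals(new_model, GOLDEN_EVALS)` and compare pass rates. For fuzzier outputs, the exact-match check could be replaced by a judge function, which is the "use LLMs in the evals themselves to score them" idea mentioned above.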

three months of discovery. That's fascinating. How do you use AI with your kids? Oh, I love making stories with them. So what I find is O1 Pro is actually extra good now. So yeah, actually there's an interesting thing that's happening right now. And I saw it up close and personal this morning, looking at some blog posts about DeepSeek R1, which is DeepSeek's

reasoning model. I was reading Simon Willison's blog post about how he got DeepSeek R1 running. It's one of the first open-source versions of sort of the reasoning. And so what we just described, with how Jake Heller broke it down into chains of thought to make Casetext work, it turns out that that maps to basically how the reasoning stuff works.

And so the difference between what Jake did with GPT-4 when it first came out and what O1 and O1 Pro maybe is doing, and what DeepSeek R1 is doing (clearly, because it's open source, you can see it), is that those steps, breaking it down into steps, and the sort of metacognition of

whether or not it makes sense at all of those micro steps: that's, in theory, what this reasoning actually is. That's actually happening in the background for O1 and O3. And if you use ChatGPT, you'll see the steps, but it's like a summary of it. Right. And so I just only saw it this morning. I mean, this is such new stuff. Like I was...

hoping that someone would do an open-source reasoning model just so we could see it. And that's what it was. I think Simon's blog post this morning showed: here's a prompt. And then he could actually see, I think he said, pages and pages of the model talking to itself. Literally: does this make sense? Can I break it down into steps? Yeah.
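The "pages and pages of the model talking to itself" can be caricatured as a draft-critique-revise loop, where spending more rounds is spending more test-time compute. This is a toy sketch only: the function names and the keyword-based critic are invented, and real reasoning models do not work this literally.

```python
def draft(question: str) -> str:
    # First-pass answer, before any self-checking.
    return f"draft answer to: {question}"

def critique(answer: str) -> str:
    # Stand-in for the model asking itself "does this make sense?
    # Can I break it down into steps?"; a real system would use an LLM here.
    return "ok" if "revised" in answer else "break it down further"

def revise(answer: str) -> str:
    # Rework the answer in response to the critique.
    return f"revised ({answer})"

def reason(question: str, max_rounds: int = 4) -> str:
    answer = draft(question)
    for _ in range(max_rounds):  # more rounds = more test-time compute
        if critique(answer) == "ok":
            break
        answer = revise(answer)
    return answer
```

The structural point is that what Heller did by hand, chaining and checking micro-steps, is what the reasoning models fold into this inner loop before emitting a final answer.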

So what we just described is a totally manual action that a really good prompt-engineer CEO like Jake Heller did, and he sold his company Casetext for almost half a billion dollars to Thomson Reuters.

That is actually very similar to what the model is capable of doing on its own in a reasoning model. And that's what it's doing when it's doing test-time compute. It's actually just spending more time

you know, thinking before it spits out the final answer. So how do you create a competitive advantage in a world like that where perhaps that company had an advantage for a year or two and now all of a sudden it's like built into the model for free? Yeah, I mean, I think, you know, ultimately the model itself is not the moat. Like I think that the evals themselves are the moat.

I don't have the answer yet. Basically, for now...

maybe it's a toss-up. If you're a very, very good prompt engineer, you will have far better golden evals, and the outcomes will be much better than what O3 or DeepSeek R1 can do, because it's specific to your data and it's much more in the details. I think that remains to be seen. The classic thing that Sam Altman has told YC companies, and told most startups, period, is:

you should count on the models getting better. So if that's true, then that might be a durable moat for this year, but it might not be past... I mean, O3, we haven't even seen yet. The results seem fairly magical. So it's possible that...

advantage goes away even as soon as this year. But all the other advantages still apply. One thing that a lot of our founders who are getting the $5 to $10 million a year in revenue with five people in a single year are saying is, yes, there's prompting, there's evals, there's a lot of magic that is sort of mind blowing. But what doesn't go away is

building a good user experience, building something that a human being who does that for a job sees that, knows that's for me, understands how to start, knows what to click on, how to get the data in. And so, you know, one of the funnier quips is that, you know, the second best

software in the world for everything is using ChatGPT because you can basically copy and paste almost any workflow or any data and it's like the general purpose thing that you can just drop data into it.

And it's the second best because the first best will be a really great UI made by a really good product designer who's a great engineer, who's a prompt engineer, who actually creates software that doesn't require copy-paste. It's just like link this, link that. Okay, now this thing is now working.

And so I think that those are the most, like the moats are not different actually at the end of the day. You still have to build good software. You still have to be able to sell. You have to retain customers. You have to...

But you just don't need a thousand people for it anymore. You might only need six people. Okay, I want to play a game. You have 100% of your net worth. You have to invest it in three companies. Oh, God. Okay. In the first company you have to invest half, then 30%, then 20%. So altogether 100%. Which companies out of the big tech companies...

How would you allocate that between, here's my biggest bet, my second biggest bet, my third, from today going forward? Okay, I guess, you know, is it cheating to say I'd put even more money into the YC funds that I already run? But that's a cop-out. That's a cop-out. That goes without saying. I think that it's very unusual just because, you know, we end up being... this is the commercialization arm of every AI lab, is what I realize, right?

But short of that, I mean, maybe NVIDIA, Microsoft, Meta. In that order? Probably. Why? I mean, NVIDIA just, you know, has an out-and-out lead. For now, they're just so far ahead of everyone else. I mean, it can't last forever. But I think that the demand for building the infrastructure for intelligence in society is going to be absolutely massive. Yeah.

And maybe on the order of the Manhattan Project, and we just haven't really thought about it enough, right? Like, it's entirely conceivable. If, say, Level 4, Innovators, turns out to work, it's sort of the meta project, because then it's like

the Manhattan Project of instantiating more Manhattan Projects, actually. Like, you know, you could imagine, with more test-time compute, you could do the work of, you know,

10,000 200-IQ Einsteins working on bringing us basically unlimited clean energy. Yeah. Like that alone will... I mean, if anything, that's probably the bigger problem right now. Like we know that the models will continue to get better. We know that, yeah,

the demand for intelligence will be unending. And then even going back to the robotics question, it's like if we end up making universal basic robotics, the limit will still actually be sort of the climate crisis and the available energy available to human beings, right? And maybe solar can do it.

But maybe there are lots of other sort of solves. But, you know, I think energy and access to energy is sort of the defining question at that point. Everything else you could solve. If it's in the realm of science and engineering, in theory, with

robots and more and more intelligence, we could sort of figure these things out. But not if we run out of energy. Okay, why Microsoft and why Meta next? I mean, I think Microsoft has just really, really deep access to OpenAI. And I think OpenAI is probably... You said public companies, right? Yeah, yeah. So I think...

there's a non-zero pretty large percentage of like the market cap of Microsoft that I think is pretty predicated on Sam Altman and the team at OpenAI continuing to be successful. Totally. And then why Meta? I mean, I think Meta is sort of the dark horse because like they are amassing talent and then they have crazy distribution. And I think, you know, I just would never count Zuck out. I think that he, you know, it's,

It sounds crazy, but it's super smart that he is on that. He's always thinking about what is

the next version of computing. So much so that he probably put more money than he should have into AR, and that was maybe premature. He might still end up being right there. But AI, for a fraction of what he's put into AR, is likely to push forward all of humanity and accelerate technological progress in a really profound way.

I want to switch subjects a little bit. A few years ago, you met with MrBeast. Oh, yeah. And talked about YouTube. What did you learn? Because your channel changed. Oh, yeah. He's great. I mean, he was very brusque with me. He said, you know, look, man, your titles suck and your thumbnails are even worse.

And, you know, I think that he spent so much time trying to understand the YouTube algorithm and what people want that he just loaded it completely into his brain. And... What makes a good title? I think it's clickbait. Unfortunately, you know, unfortunately, and this is the thing, like...

When you're trying to make smart content, it's actually kind of tricky because you don't want necessarily more clicks. You want more clicks from people who are smart. We title our episodes differently on YouTube usually than on the actual audio feed because...

if you want YouTube to pay attention, you have to almost be more provocative intentionally. That sounds right. Yeah. Like we could call this, you know, AI Ends the World or something. Yeah, that's right. You know, get people to watch, but that's not actually what we're talking about at all. What makes a good thumbnail? What did you learn about thumbnails? Oh, um,

Usually, like a person looking into the camera seems to help a lot. Okay. And then you want it to be relatively recognizable. Like, you know, you want some sort of style that when someone sees it, you know, I mean, basically what I was doing at the time was just taking whatever frame that was, you know, sort of,

kind of representative and throwing it in there. But when you train someone to look at YouTube back to back to back every time it shows up, you sort of want to be highly recognizable. So you want to have a distinct thumbnail, like yours with the overlay, sort of like the red. Yeah. But once I stopped posting so regularly, then it sort of didn't matter as much anymore. But if you're going to post very regularly, that's pretty important, actually. Yeah.

So yeah, unfortunately, it's clickbait. And then there is an interesting interaction. Like, you know, yes, you can optimize for better thumbnails and better titles for the click through. But if it has absolutely nothing to do with the actual body, as you mentioned, you will not get watch time. And then YouTube will be like, oh, people aren't watching this. We're not going to promote it. Because the big thing about YouTube is discovery. Yeah.

And we notice this all the time, where it's sort of like you just get this audience, but you don't get to keep the audience as a creator, which is really interesting. Well, you do if you are regular. And then the other hack is: be very shameless about asking for subs. And then the funniest thing is, subs do very little, actually. There's no guarantee that you show up in...

people's feeds if someone subs. It helps a little bit; liking helps more; watch time helps the most. And then the extreme, over-the-top hack that you probably should do here is: you should ask them to like, subscribe, and hit the bell icon.

Because if you hit the bell icon and they have notifications on, that's the only thing that is almost as good as having their email address and emailing them. You heard it here, people. Garry just told you. You got to click like, subscribe, and hit the bell icon because...

You want knowledge. You want to be smart. And this is the place to get it. Oh, I love that. Thank you. Good advertising. I want to ask just a couple of random questions before we wrap up here. What are some of the lessons that you learned from Paul Graham that you sort of apply or try to keep in mind all the time? I think the number one thing that is very hard, but is so... I mean, you can see it and read it in his essays. It's to be plain spoken and to sort of...

be hyper-aware of artifice, of, kind of, bullshit, basically. Don't let bullshit... you know, I think it creeps in here and there. I'm like, oh yeah, you know, I...

I sometimes am in danger of caring too much about the number of followers I have and things like that. Whereas actually I shouldn't be worried about that. What I should be worried about is... and I spend a lot of time with our YouTube team and our media team at YC talking about this: if we get too focused on just view count, we're liable to optimize for the wrong audience.

if we're not being authentic to ourselves or if we're just trying to follow trends or do things that get clicks, it's like that's not helpful to them either. Then we're just on this treadmill, right? Yeah, basically like

trying to be a very, very high signal-to-noise ratio. You know, the thing that I probably struggle with most, and maybe some of the listeners here might feel this: sometimes I think out loud. And really, really great ideas are not thinking out loud. They're actually figuring out a very complex concept and then trying to say it in as few words as possible. And

you know, the amount of time that Paul spends on his essays is fascinating. It's

sometimes days, sometimes weeks. He'll just iterate and iterate and send it out to people for comment. And the amount of time he spends whittling down the words and trying to combine concepts and say the most with the least number of words would shock you. And then also, that is actually thinking. Like, writing is thinking.

Like, one of the more surprising things that we do a lot of at YC is we help people spend time thinking about their two-sentence pitch. So...

You would think that that's, oh yeah, that's like something, you know, startup 101, like you're helping people with their pitch that sounds so basic. Like, yeah, I guess that makes sense. Like that's what an incubator would do. But the reason why it's very important is that it's actually almost like a mantra. It's like a special incantation. Like you believe something that nobody else believes.

And you need to be able to breathe that belief into other people. And you need to do it in as few words as possible. The joke is like, oh yeah, what's your elevator pitch? But you might run into someone who could be your CTO, who could introduce you to your lead investor, who could be your very best customer. And you will literally only have

time to get two sentences in. And even then, I mean, I guess it's kind of fractal. That's what I love about a really great interview. Someone comes in and I'm like, oh yes, I get it. I know what it is and I know why that's important. I know why I should spend more time with you. That's what a great two-sentence pitch is. And knowing what it is is very hard. That's all of Paul Graham's

sort of editing down and whittling down, in a nutshell. People do really complex things; how do you say what you do in one sentence? That's very hard, actually. And then the second sentence is: why is it important? Why is it interesting? Why should I...? And that may well change with the person that you're talking to. So yeah, to the degree that

Clear communication is clear thinking. One of the things I did when I first joined YC, I had no intention of ever becoming an investor, ever being a partner, let alone running the place. I was just a designer in residence. And what I did was I did 30-minute, 45-minute office hours with companies in the YC Winter 11 batch, sitting in back then as an interaction designer. I used OmniGraffle a lot.

And so we just sat there and designed their homepage. And it's like, this is what the call to action should say. Here's, you know, put the logo here. Here's the tagline. Here's the, you know, maybe you have a video here or, you know, right below you have a how it works. And then, you know, what's funny about it is like some people, you know, would take the designs we did in those like 30, 45 minute things and like that would be their whole startup.

And they sold those companies for hundreds of millions of dollars years later, which is just fascinating to think about. It's like clear communication, great design, you know, creating experiences for other people. All of those are sort of exercising the same skill. And so that's what a founder really is. Like, you know, a founder to me

is a little bit less what you might expect. It's like, oh, this is someone with a firm handshake who looks a certain way and bends the will of the people. You might think of an SBF. That's all artifice. Think about that guy. That guy was full of it. The guy was on meth, right? The guy was...

Everything about it was an affectation. He was a caricature of an autist. We see very autistic, incredibly smart engineers all the time. But for him, it was like that was part of the act. I remember he did a YouTube video with Nas Daily. Nuseir's great and I love Nas Daily, but I couldn't believe the video that SBF went on. It was just full of basically bullshit.

And the exact opposite of Brian Armstrong. And yeah, we're always on the lookout for that. He wasn't trying to fool you. What's that? Oh, yeah, I guess so. I mean, he was fooling the world. Because you know, right? It's hard to fool somebody who knows Brian Armstrong.

versus somebody who doesn't know and he wasn't trying to appeal to you. He was trying to appeal to other people who didn't know. It's the same as going back to Buffett, just tying a few of these conversations together, right? Like everybody repeats what Buffett says, but the people who actually invest for a living or know Warren or Charlie or have spent time with them can recognize the frauds.

because they can't go a level deeper into it. They can't actually go into the weeds, whereas those guys can go from like the one inch level to the 30,000 foot level and everything in between and they don't get frustrated if you don't understand. Whereas a lot of the fraudsters, one of the tells is they can't go, they can't traverse the levels. And then they do tend to get defensive or sort of

angry with you for not understanding what they're saying, which is really interesting. And then I just want to tie the writing back to what you said. You said, if you can't get it clear in like two sentences, you might miss an opportunity. That goes to the 10 minute interview, right? Where you're looking for, maybe it's not the perfect pitch, but you want that level of clarity with people. And it's really the work of producing that message

that helps you hone in on your own ideas and discover new ideas. Yeah. I mean, I feel like we're in like the idea fire hose. So we're just like hearing about all kinds of things that are very promising. And then, um, I think the, the most unusual thing that, you know, I'm still getting used to is, uh,

I mean, in full transparency, probably the median YC startup still fails, right? YC might be one of the most successful institutions of its sort that has ever existed, inclusive of venture capital firms, on the one hand. Yeah. On the other hand, the failure rate is absolutely insane, right? It is still a very small percentage of the

teams that actually do go on and create these companies worth $50 or $100 billion. But the remarkable thing is not that it's that low. The remarkable thing is that it happens at all.

Like, it's just unbelievable that... I think you have the coolest job in the world, or at least one of them. No, I agree. If I had to pick, like, the top 10, you'd be up there. I agree. I mean, I pinch myself on the regular. In the morning, I wake up and it's like, oh, this AI thing is happening. And then somehow I'm filling the shoes of the person who... I mean, Sam Altman probably brought forward the future by, you know,

Five years? Ten years? At least ten years. Yeah. Like, all of the things that, you know, him and Greg Brockman and all the researchers he brought on, like, were working on, that happened, that was going to happen, right? Like, I think there's a lot of

The Sam Altman haters or the OpenAI haters out there love to point out, like, oh, you know what? The transformer was made by all these teams. Some of it's like, these teams absolutely did incredible things. You can't take away from that, right? The researchers did, you know, Demis did incredible things, but

At the same time, it's like they believed a thing that nobody else believed and they brought the resources to bear. And so recently, Sam Altman came back to speak at our AI conference this past weekend. And I couldn't think of another way to start that conference than have Sam Altman and a bunch of his old colleagues

We had Bob McGrew there; we had Evan Morikawa, who was the eng manager who released ChatGPT. Bob McGrew actually worked with me at Palantir back in the day, but he's the outgoing chief research officer. Jason Kwan was there. He actually worked at YC Legal before leaving to run a lot of things at OpenAI. And so I had them all stand up.

And we had a room full of, you know, 290 founders, all of whom were working on things that happened essentially because OpenAI existed. And there was like a standing ovation. Oh, that's awesome. So, and, you know, Sam, to his credit, was like, you know, not just us. You know, these researchers did so many things as well. Yeah.

All that being said, it's like we're in the middle of the revolution. Oh, totally. I mean, it's not even the middle. I think it's like just after the first pitch of the first inning of what is about to be like...

a great, great time for humanity, for technology. I'm with you. I'm so excited to be alive right now. I'm so lucky, so blessed to be a witness to this. I think we're going to make so much progress on so many things. Go back to the haters. There's always people pulling you down, but they're never people that are in the trenches doing anything. I've rarely seen people who are working on the same problem attacking their competition like that or undermining them. No, it's...

On our end, we're just hoping to lift up the people who want to build. This is the golden age of building. Amazing. I want to just end with the same question we always ask, which is, what is success for you? I think looking back, I mean, growing up...

I always just looked up to the people who made the things that I loved. And Steve Jobs, Bill Gates, like the people who really created something from nothing. And I just think of Steve saying, we wanna put a dent in the universe.

And ultimately, that's what I want. Success to me is how do we bring forward... Paul Graham came to recruit me to come back to YC. I had actually left and started my own VC firm, got to $3 billion under management. Yeah, you guys did Coinbase. Yeah, totally. I mean, returned $650 million on that investment alone.

I was sort of right at the pinnacle of my investing career, running my own VC firm. And Paul and Jessica came to me and said, Garry, we need you to come back and run YC.

And it was really, really hard to walk away from that. Luckily, I had very great partners, Brett Gibson, my partner, my multi-time co-founder went through YC with me. He actually built a bunch of the software with me at YC before we left. He runs it now. They're off to the races and still doing great work. And I sat down with Paul right after we shook hands and he's like,

Garry, do you understand what this means? It means that if we do this right...

Kind of like what I think Sam did with OpenAI, pulling forward large language models and AI and bringing about AGI sooner, YC is sort of one of the defining institutions that is going to pull forward the future. And it's not more complicated than: how do we get in front of optimistic, smart people who have benevolent ideas and

goals for themselves and the people around them? How do we give them a small amount of money and a whole lot of know-how and a whole lot of access to networks, and a 10-week program that hopefully reprograms them to be more formidable while simultaneously being more earnest? You know,

And then the rest sort of takes care of itself. Like, you know, this thing has never existed before like this. And it deserves to grow. Like it deserves to, you know, if we could find more people and fund them and have them be successful at even, you know, the same rate, we would do that all day. I mean, and I think what are the alternatives, right? Like I think of all the people who,

They're locked away in companies, they're locked away in academia, or heck, like these days, the wild thing about intelligence is like intelligence is on tap now, right? Like all of the impediments to fully realizing what you want to do in the world

are starting to fall away. There's always going to be something that stands in the way of any given person, and I'm not saying those things are equal, but through technology and through access to technology, those things are coming down. If there's the will, if there's the agency, if there's the taste...

That's what I want for society. I want them to achieve that. In a lot of ways, we have more equality of opportunity now than we've ever had in the history of the world, but not equality of outcome. That's right. Yeah, and that's sort of the quandary, right? Like you have to choose. Do you want the outcomes to be equal or do you want a rising tide to raise all boats? I'm a huge fan in equal opportunity, but unequal outcome. I'm with you. Yeah.

Thank you for listening and learning with me. If you've enjoyed this episode, consider leaving a five-star rating or review. It's a small action on your part that helps us reach more curious minds. You can stay connected with Farnam Street on social media and explore more insights at fs.blog, where you'll find past episodes, our mental models, and thought-provoking articles.

While you're there, check out my book, Clear Thinking. Through engaging stories and actionable mental models, it helps you bridge the gap between intention and action. So your best decisions become your default decisions. Until next time.