Hi, I'm Jim O'Shaughnessy and welcome to Infinite Loops.
Sometimes we get caught up in what feel like infinite loops when trying to figure things out. Markets go up and down, research is presented and then refuted, and we find ourselves right back where we started. The goal of this podcast is to learn how we can reset our thinking on issues in a way that hopefully leaves us with a better understanding of why we think the way we think and how we might be able to change that
to avoid going in infinite loops of thought. We hope to offer our listeners a fresh perspective on a variety of issues and look at them through a multifaceted lens, including history, philosophy, art, science,
linguistics, and yes, also through quantitative analysis. And through these discussions help you not only become a better investor, but also become a more nuanced thinker. With each episode, we hope to bring you along with us as we learn together.
Thanks for joining us. Now, please enjoy this episode of Infinite Loops. Well, hello, everyone. It's Jim O'Shaughnessy with yet another Infinite Loops. I am very excited about today's guest.
Nathan Baschez, co-founder and CEO of Lex, which is an AI word processor with the editor built right in, which I think is pretty cool. Works just like a Google Doc or Word, whatever you're used to writing in. You're also the co-founder of Every. You were the first employee at Substack and you were the co-founder of Product Hunt. Man, you get around, don't you, Nathan? Yeah.
Welcome. I do. Thank you. Thank you. So, so great to be here. All right. So I don't know if you're a fan of the movie, The Wolf of Wall Street, but I'm going to pull on you what he pulled on somebody he was thinking about putting on his team. And I'm going to say, Nathan, sell me this new pen called Lex. Oh, I love that. I guess I'll start by asking a question, which is,
Do you like to write? Do you find it hard to write sometimes? And you're kind of like, I wish I wrote more often, but sometimes it feels like I'm beating my head against the wall. And so I don't make as much progress as I want to. So I kind of fit into that Walt Whitman line, I contain multitudes. So I've written four books. I write endlessly in journals, but often I really fucking hate writing. Yeah. Yeah.
But obviously there's something, I kind of think writing is thinking. And so that's why I do all the journals. That's why whenever I,
I don't know really what I think. The solution is always write it out. You will quickly know whether, oh, I guess that idea that I had I really don't like or I don't understand fully because the minute you have to put it out into the real world, writing it,
It helps clarify very quickly whether your own thinking is muddled and needs to be improved. But, you know, obviously, as an author of that many books and given all the other writing, I must like it at some level. Yeah, totally. It's kind of like the gym, right? It's like good for us.
Can be painful, but it feels like in the end we're glad we did it. Right. Yeah. So maybe one way to think about Lex, if we're extending the gym analogy. That's funny, cause the gym is about the body anyway. You know, it's like a trainer or a spotter, right? It's someone that's there with you, kind of urging you to go deeper maybe than you would have gone otherwise. And I'm personally just like, you know, whatever with AI generally and Lex specifically, um,
I'm hooked. It's like, I'm never going to write any other way again. And it's not because I think there's this huge, just giant misconception around, especially with kind of top writers about what kind of role AI could play in the writing process. And you get a lot of writers that are very, you know, skeptical or maybe even afraid a little bit. And there's a little bit of a contradiction there because like,
The skeptical part of it is like, well, this is just slop, right? And then the afraid part of it is like, well, if it's slop, what do you have to be afraid of? And so I think there's this recognition that there's something there. AI models actually are pretty helpful. They're pretty smart. But on the other hand, yeah, it's not... If you just type in, write an essay about XYZ, there's something important that's gone, right? And so Lex isn't about...
It's not a little text box where you type in, generate a paragraph about this or an essay about that or whatever, a chapter of my book about this. We don't really focus on that. We don't specialize in that. You can kind of hack the tool to do it, but it's not what we're built for, which sets us apart from a lot of other, like the kind of first thing you think of when you think of AI writing is tools that do that, right? Yep. So our thought about it is like, here's the main way I use Lex. When I'm writing, I'll highlight a sentence that I just wrote. And I'm like, did I use that idiom?
Is this the best example? Are people going to like, is that actually true? Like just fact check me a little bit, like not in the strict, it's not the same thing as like having a real fact checker that's going out and doing deep research that you can trust. Who's an expert at their job, but like for little claims that you're kind of like, eh, just give me the taste test, you know?
AI does a great job. And so it's all these little thoughts that I have that before AI would have resulted in five to 10 minutes worth of context switching and Googling and research. It just takes you out of writing, takes you out of the flow of ideas and
Now I'm in the document and I'm collaborating like it would be a Google Doc, right? In the comments of specific lines that I'm writing. And by the way, it also is great at like, hey, this transition I know is a bit clunky. Can you give me some ideas of how I can make it better? Or I feel like this word's not quite right. Like what's a better one? All those types of things
Or even higher level, more writerly things like, oh, I added this detail in the opening scene about some... I'm trying to think of a good example from a short story or whatever. I love this book, A Swim in a Pond in the Rain. So I'll go with that. Why was it important that it was raining? Like,
You're swimming in a pond, like, okay, but in the rain? Like, what's important about that? And it's like, oh, it's like this carefree thing. Like, it's a totally different experience swimming in a pond in the rain versus just swimming in a pond. But those kind of details that you can have chats with AI about, I just feel like it's such a great thinking partner. And maybe one last thing I'll say on it is, because I tend to ramble when I talk about this stuff because I just find it so fascinating. It's like a new... Yeah, me too. Ramble on. Yeah.
I feel like people think about AI, in some cases, maybe the wrong way, of how to relate to the things it's telling you. Because it takes the form factor of a chatbot, it feels kind of like it's a person. And a lot of times, if a person gives you a certain type of advice, you're like, oh, maybe you're actually not the right person to be giving me advice right now. And so you're kind of like, if this was an editor that I was interviewing, maybe I'd respectfully pass or whatever on collaborating on this project, right? Because it's hard. It's really...
To feel like someone kind of gets it and gets you and gets what you're trying to do is a very rare thing. Like the default should be, no, I'm not going to let you into my creative process, right? But with AI, you know, like we want to talk about containing multitudes. I mean, the whole idea of like whatever, Shoggoth of like the, you know, pre-trained giant models before they get sort of like, you know, reinforcement learning into like this kind of comprehensible format. Like they really contain multitudes.
multitudes. And a lot depends on the exact way that you tend to prompt it. And so as sort of an, I come from, you know, an engineering background to some extent, like I'm used to kind of tinkering with systems until they give me the results that I want.
And that's not the way, I think rightfully so, that's not the way that we think about collaborating with other people. But I think if you come to AI with this mindset of like, well, I'm going to cross my arms and see how good of a job you do, kind of from a distance, it's like, that's just the wrong way to approach it. I think the right way to approach it is: you're digging for gold. There's going to be a lot of dirt and rocks and other stuff that you have to sift through.
But if you keep poking it in different ways, you'll find certain little patterns of like, oh, this is really helpful in this context. And so I think the writers that are really adopting AI the most are the ones who see AI that way. And just because the batting average isn't 1.000 or whatever doesn't mean that it's not a worthwhile endeavor in your writing process. So anyway, those are some of the things. Yeah, and I completely agree on that.
In my opinion, AI is potentially the most useful innovation since, I don't know, the internet. Yeah. And the way I think about it is as a tool, right? As you know, we have Infinite Books,
which is a new publishing company, and we've been talking with some writers. I've been a bit surprised that some of the people that I thought were going to be like, no, that's awful, aren't saying that. But in some cases, it almost is people who are just kind of emerging, right?
And, you know, one said to me, why don't you just market it as thinking as a service? And I'm like, no, no, no, no, no. It is collaboration. Yeah. And like, it drives me crazy when, you know, when I, when I was growing up, calculators were new, right? Yeah.
And all I wanted, nerd alert, for a birthday, I think I was 10 or 11, was a calculator. And they were really expensive back in 1970 or 1971. And like I brought it into my grade school and like the teacher looked at it in horror.
Literally, it was like, where is your slide rule? And I'm like, right here. And then for a while, they banned calculators from class because that was cheating. How long did it take? Do you remember, roughly, what span of time the ban lasted? Yeah, it was probably...
four years, maybe three years, until people realized, oh man, that moral panic was completely misguided. And I think we're kind of seeing the same thing with schools banning AI without really thinking about it. It's just like,
Yeah.
You could get burned by that fire and therefore you may not do it. Hey, right after we invented fire, guess what happened? We invented fire alarms, fire departments, firemen, you know, fire exits all because fire is that powerful. It is incredibly great for us. It's responsible for our prefrontal cortex, right?
Some speculate that it was only when we started cooking our food that the prefrontal cortex emerged.
Interesting. It's also dangerous, right? And I don't even like that term for AI because the way I look at it is it has improved every aspect of my life, honestly. It's like when I'm writing about a position I have on one thing, I now always, always, always steel man the opposing view.
Because, like, you know, the way we're constructed, our human OS tends to assume, like, you know, they won't think about it. Yes, they will. And so much better for you to have an answer, or even change your mind. Like, wow, I'd never really thought about that.
How do you think... like, another thing that I hear from some writers is, you know, when I'm talking about the things we're doing at Infinite Books, I basically say, you know, AI is a mirror, not a mold. You don't have to mold your creativity to what it wants it to be. That's wrong.
It's a mirror of yourself if you use it correctly. But I have heard some people say like, well, you know, what about like really great stylists like Joan Didion or David Foster Wallace? What do you say to those people who like cling to, I have this really unique style, you know, spoiler alert, often they don't. But what would you say to the questions about David Foster Wallace, that type of writer?
I mean, I feel pretty certain that those type of writers do take some input from external sources, right? They read other people's writing. They have deep conversations with editors that they trust. They, you know, even just the process of like setting something aside and then coming back to it a day later does a lot, right? AI is just another thing to throw in that mix.
And then the end, it all has to filter through choices that that writer's making, right? And I don't love this. I think there's this kind of meme of like, oh, well, with AI, like everyone's going to be actually curators now rather than creators. And I don't think that's right. I think that people still are the creators, but creation is fundamentally about choices, right?
And choices are about: what are the options? What have you actually considered? And then what are you going to go with? And to have a tool that is available, you know, within like two seconds, with a flick of a couple of keys, that gives you more choices that you wouldn't have thought of otherwise... there's just this inherent, incredibly valuable thing about that. No matter if you're, you know, Joan Didion or David Foster Wallace, or, you know, talk about A Swim in a Pond in the Rain, right? Those, like, Russian greats or whatever.
I think it's super valuable. And the other thing I'll mention on it is, again, it's about learning little tricks of how to prompt it. So if you're asking just broad, is this good? That's kind of a useless question. It'll give you some kind of interesting things or whatever. But I mean, think about the kind of questions, right,
that a really experienced writer would ask, someone who's going deep into almost the etymology of words to choose the exact perfect word. Like, of course they're going to love to dig in with LLMs about the origin of this word, what culture it came from, and how it was actually this collision of cultures that produced this thing in the 1600s or whatever, and so that's the word. Like, yeah, they're going to fiddle with that all day to produce something
really amazing. And otherwise, either (a) they're not doing that because the cost was too high, or (b) they're doing it, but it's taking them way out of the flow of their writing, right? Cause it's like, oh, now I'm going to go spend a half day
researching, digging through all these books or whatever. There's something for sure amazing about that and you want to still have some of that, but you want to be able to choose where you invest that versus have to do it as a prerequisite of learning certain ideas that might help guide in a more interesting direction.
Yeah, as somebody who actually had to do that, right? So I'm 64. And when I was writing my first book, I had to spend endless time at the library with the microfiche going through it, going through the stacks, doing all that, like literally days and days and days. And yeah, did I find things that I wasn't looking for? Of course I did. And that was fun.
But it also ate up more of my time. And I have found with AI, one of the things that I love about it is, like, I don't use Google anymore, right? Literally, because with AI... we have an internal AI system that's multimodal that we are building, but I also use some of the commercial models, right?
And, and so the other night my wife and I were watching a TV show and a car is driving by a barn in Wisconsin and the barn's red. And I said to my wife, why are barns red? And she's like, I have no idea. And I said, neither do I. And so I asked rather than in the old days, I would have Googled it and had to wade through just so much stuff.
Now I just speak into the large language model, ask why barns are red, and, you know, you find out what you want to know. Yeah. And so I just think, though, that we also see a repetition whenever any new technology debuts. Right.
You get the moral panics, you get the clutching of the pearls. I don't know whether you're familiar with the Pessimists Archive website. Oh, love it. Yeah. Me too. Right. And it's like, this has gone all the way back to the novel. I mean, even Socrates, right, said writing was a bad idea. And so the reticence, I think, goes away. And one of the things that I have thought a lot about and would like your view on is
I think we're really low on the adoption S-curve, particularly with writing and AI. But there are the people like us who are going in whole hog, and I embrace what I call the centaur model, right? Human plus machine.
That's where the magic happens, right? Because I kind of think of all the various large language models and other AI that we use as just a massive building filled with super eager PhD candidates, right? Yeah. And do you think that there might be a gap that gets created? Yeah.
between the early adopters and then the people who finally, finally, finally, it's like, you know, the people who didn't buy a TV, you know, for 20 years and then finally kind of threw in the towel and bought one. Do you think a gap is going to happen? I think so, but I think the gap between early adopter and late adopter is probably smaller than the gap between how you adopt it, kind of. So, like...
you know, there's this whole idea of the internet. Like some people will use YouTube, for example, to like watch Khan Academy and like, you know, like learn incredible amounts of things and like produce these extraordinary achievements. And like a lot of people will watch, you know, like kind of silly stuff. And like, you can do both. Like there's nothing really wrong with silly stuff. But like, I do think there's a bit of a like,
And this kind of scares me because I, I'm, I'm not a kind of, I have an extreme aversion. Anytime anyone tries to like, sort of say like, ah, there's like a superior class of people and like, they should just be allowed to shine. I'm kind of like, what about everybody else? They're great people too. Kind of a guy. And like, I, I tend to put a little bit more on the environment than I do about like just some sort of, and even if it's your DNA or something like that's,
In the end, that's kind of the environment too, right? You know what I mean? So I'm kind of like, I care a lot about like the sort of median or average person and not just like the peak of like human potential. But I do think, yeah, I mean, it's like you think about the people spending time with
you know, AI like friends or girlfriends or boyfriends or whatever. And, you know, there's the whole idea of like, it's just kind of like sycophantic and it creates unrealistic expectations of like how to relate with human beings and all that stuff. Like I do worry a little bit about like people who kind of use it for that rather than bootstrapping their curiosity and like going and learning a lot of things and accomplishing things they wouldn't have otherwise. Like I think it's,
There's something there, and it's not new. It's similar to, whatever, books. Books are very old technology. There's lots of different types of books out there. But in the end, I do think that the impact of that is smaller than the people who are most loud about it say. But I do think it's a little bit real. But I think that gap narrows a lot, because I think that basically people will figure out the way that AI comes into their life. And the true holdouts...
let's say you're the kind of person who is very interested in learning things and building things in the world. Even if it takes you a while to get onto AI, I think eventually you'll get there probably just because it's so... It's like cell phones now. Who doesn't have a smartphone now? Most people do. But then what do you do with that smartphone? Is it Candy Crush? Again, it could be a little bit of both or sports gambling or whatever. All these things are okay in moderation, but what
What is tricky is, yeah, just how do we sort of guide people towards a kind of, I don't know what the right, it's sort of, it's almost like an emotional thing maybe, right? Like if you think about it, like what is the source of any kind of addiction is like maybe some sort of emotional dysregulation or whatever. But to me, I think about that a lot with like, how's AI going to play out? And what are the new versions of like, you know, kind of silly videos on YouTube and like sports betting or like kind of empty games and all that kind of stuff. Like, and
How do we nudge the world towards a more kind of like do things in the real world and learn things and be useful to society kind of direction?
Yeah, and that is something that I think about a lot too. And as I am super pro-AI, I think that it is going to elevate our ability to think new thoughts, to innovate things. I just think there are endless possibilities. But I'm also very worried about the people who don't embrace it
Yeah.
One of my worries is that there may be a class of people, and I don't think it's entirely dominated by generation or age, but there may be a fairly big group of people who, and this is the part I always underline, through no fault of their own, don't get it. Don't use it, for whatever reason.
And my worry there is, it's a bit like: no, I'm happy with this bicycle, and we're all whizzing by them in our self-driving cars. You know what I mean? And what can we do to ameliorate or address the problems that those people might face? Yeah.
Do you think it's because maybe it's not accessible to them, like they'd like to, but they can't for whatever reason? Or do you think it's more because they don't want to? It's like a mental barrier. I think in most cases, at least in countries like the United States and advanced economies, it's because they don't want to. I'm not counting, for example... we're really interested in Africa, and we have a relationship there
where we have people on the ground looking to fund startups, for example. And, like, many of the issues have been logistics issues, right? They don't have Starlink. They don't have the ubiquitous Wi-Fi, et cetera. But man, they have the desire. The minute it becomes available to them, right, they're going to use it.
So I guess I'm really talking about the people who have access to it already, like us. It's ubiquitous. It's everywhere. It's right on their cell phone or whatever. And they seem... It's like the idea that there's a certain class of people that I have seen who confuse ought with is. And by that, I mean like...
AI is a transformational technology, right? And then I'll hear, well, what it ought to be is, you know, fill in the blank. And so they're persistently unhappy with it.
And maybe that is, maybe that really is just like an emotional block for them, right? It's that great line from Wilson that, you know, the happy man lives in a happy world. The sad man lives in a sad world, you know? And I often say the people who say that everything is due to luck are the unluckiest people in the world. So I definitely think that there's a strong emotional component there.
For sure. Yeah, it's kind of interesting because it does seem like the AI holdouts as of today, it seems like it's a stronger, potentially more lasting culture, subculture than smartphone holdouts or name your innovation from the recent past, like internet holdouts or PC holdouts. It does feel more likely to me that there's some kind of
version of it that's like AI sort of veganism, right? Where for some people it's about they view it as a threat to human creativity. Maybe they view it as copyright theft. There's a whole bunch of different objections. But yeah, I do think there's more of that. It's more intense and it feels like it'll probably last longer than for smartphones. It's an interesting observation for sure.
Yeah, I think you're right. And I have no problem with that. Right. It's you do you. And if that's the way you want to live life that, you know, you might have some great experiences that I'll never be able to have. Right. We have flourishing communities that reject technology like the Amish, for example. Yeah.
And, you know, they do really cool things. They build barns with just people and they all gather and it's really cool. I guess I'd rather build my barn a different way, but like, I don't care that they want to build their barn that way. One of the things that I think they're missing though, because, you know, you say, is it an emotional thing, right? Yeah. I think one of the best use cases for AI is in therapy.
Yeah.
Having the trust in being willing to express that to another human does not come easily. Whereas with AI, it's a breeze. You know, the AI is not judging you. The AI is not going to go and tell your friend, hey, you know what Jim just told me? Oh my God, can you believe it? It's not going to gossip. Maybe with the other AIs. But the point being is,
There are so many of these unlocks that are really cool. Where I aggressively build prompts is around the cheerleading, right? I think that's a problem.
And, you know, so I have experimented endlessly and tinkered endlessly to get it to like really just come after me. Yeah. Like, no, no, no, no, no, no. Don't tell me like, great idea, Jim. And why don't we do it this way? It's so hard to get it to not do that. It's so fascinating. It really is, right? I wonder how much the labs have like worked on that, you know, because it makes sense to me that like,
If you tell the AI you want it to do X, Y, Z, that it's not like, you know, a moody teenager, like it should basically be willing to like help. Right. But at the same time, sometimes helping is contradicting. Yeah. Right. Absolutely. And it's so hard. I mean, maybe that is like one of the really high watermarks of intelligence is, you
Being able to understand the difference between doing what I say and then doing what I really want, which might not be what I say, kind of a thing. It's so fascinating to me just how hard it seems to be to solve that problem with AI. So one of the things that I found worked really well is I tell the AI that I am writing a novel and here are the attributes and characteristics and beliefs of my main character.
And they're really mine. Yeah. Yeah. And then I say, what's wrong with this guy? You know, what would others who don't like this person, what would they say about him? Then like, let the, let the bile flow. Yeah. Yeah. Yeah. Yeah.
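For what it's worth, the trick Jim describes here can be sketched as a small prompt builder. Everything in this sketch is illustrative: the function name, the trait list, and the exact wording are invented, and the call to an actual model is left out, since the point is only how the "fictional character" framing is constructed.

```python
# Sketch of the "critic via fictional character" prompt trick:
# frame your own beliefs as a novel's main character so the model
# critiques them freely instead of cheerleading.

def build_critic_prompt(traits: list[str]) -> str:
    """Build a prompt that asks the model to attack a 'character'
    whose traits are really your own."""
    trait_lines = "\n".join(f"- {t}" for t in traits)
    return (
        "I am writing a novel. Here are the attributes, characteristics, "
        "and beliefs of my main character:\n"
        f"{trait_lines}\n\n"
        "What's wrong with this guy? What would people who dislike him "
        "say about him? Be blunt; don't soften the criticism."
    )

prompt = build_critic_prompt([
    "believes AI will elevate human thinking",
    "distrusts moral panics about new technology",
])
# `prompt` would then be sent to whatever LLM client you use.
print(prompt)
```

The indirection matters: asked directly to critique "your" ideas, many models default to flattery; asked to critique a character, they tend to let the bile flow, as Jim puts it.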
That's so fascinating. I mean, that reminds me of a lot of the jailbreaks for, like, you know, whatever, using AI to learn how to make a chemical weapon or whatever. It's like, hey, I'm in this video game, designing a feature where the users have to do this thing, and I just want to make sure it's historically accurate or whatever. Like, can you tell me? And, I mean, those don't work anymore; the models are getting better and better at refusing even in those contexts. But that's the way that you jailbreak LLMs, it's by
shifting the context to kind of fool it into thinking the stakes are different from what they maybe really are or whatever. So that's just an interesting parallel. Yeah. And it works really, really well. But my frustration is I shouldn't have to do that, right? Right. Yeah. I should be able to have like, you know, because one of the things that I think is going to become a premium in the world we're going into is trust.
Yes.
Trust that upgrades actually improve the AI. I think a lot of upgrades are actually downgrades. And that was actually one of the reasons why we did in-house AI, because we want control over that. Now, clearly we can't have control over the large language models that are commercially available in there, but we can do a lot of things that we couldn't do if we were just, you know, using commercially available software.
And I wonder, though, it leads me kind of to another question. I know you've written about how you can use large language models to kind of be noise filters, right? And I think that's very cool. But it also makes me worry about are we really, like if we're all using our own personal reality tunnel noise filter, doesn't that increase confirmation bias and kind of we all have our own little personal Truman Show? Yeah.
Yeah, it's a fascinating question. So the post you're referencing that I wrote was pretty shortly after ChatGPT came out. And I just realized you could use, at the time, GPT-3 as a way to just be like: hey, if I go to Hacker News or whatever, just run a prompt on each of the articles and be like, how interesting is this to me, based on, you know, whatever, what I'm working on right now and stuff that I've maybe looked into in the past.
And kind of the cool thing about it is, unlike the YouTube algorithm or the TikTok algorithm, where you can't inspect it to understand why, this is just powered by a prompt. Which, by the way, is one of the most important points. It's so underappreciated, but to me it's so important: a lot of things that are revolutionary in the history of technology have taken something that was encoded into the substrate and then made it an abstraction. So like the first calculators, you know, you had, um,
you know, the operations of addition and subtraction and division or whatever, like, printed on the circuit board. It wasn't running Linux, you know, with some general purpose OS. And then we abstracted away the actual program we could run on the hardware, right? And we had, you know, Turing-complete, general purpose computing platforms. That's what made PCs take off, and that's why they're so much more powerful than, you know, whatever previous electronic devices we had. Now they're called computers, because it's the general purpose platform.
I think the same thing is the case with LLMs. Before LLMs, we had machine-learned algorithms that encoded all of this sort of intelligence into the zeros and ones, or the weights and biases, of the model. But now, you can create new intelligence with a prompt. And a prompt is, sort of, software to the LLM's weights-and-biases "hardware," quote-unquote, even though, of course, that's just software too. But that makes it so flexible and powerful. I think it's a really cool
way to think about why LLMs are so powerful is it's the first time we've had a way to create new intelligence that doesn't require training a new model. Yeah. Is maybe like one way to define it. Yeah. And, you know, I can't remember the author, but there's a great quote that goes along the lines of, we understand new things too quickly.
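The noise filter Nathan describes, "new intelligence with a prompt," could be sketched roughly like this. The function names and prompt wording are invented for illustration, and `ask_llm` is a stub standing in for a real model call; a real version would send the prompt to whatever LLM API you use.

```python
# Sketch of a prompt-powered noise filter: score each headline against
# a plain-English interest profile, keep only the high scorers.

INTEREST_PROFILE = "I'm working on AI writing tools and LLM products."

def build_filter_prompt(headline: str) -> str:
    return (
        f"My interests: {INTEREST_PROFILE}\n"
        f"Headline: {headline}\n"
        "On a scale of 0-10, how interesting is this to me? "
        "Reply with just the number."
    )

def ask_llm(prompt: str) -> str:
    # Stub: a real implementation would send `prompt` to an LLM API.
    # This toy version just keys off keywords in the headline line.
    headline = next(l for l in prompt.splitlines() if l.startswith("Headline: "))
    return "8" if ("LLM" in headline or "AI" in headline) else "2"

def filter_headlines(headlines: list[str], threshold: int = 5) -> list[str]:
    return [h for h in headlines
            if int(ask_llm(build_filter_prompt(h))) >= threshold]

kept = filter_headlines([
    "Show HN: An LLM-native word processor",
    "My sourdough starter, year three",
])
print(kept)  # keeps only the LLM headline under this stub
```

The inspectability point carries through: the "algorithm" here is the two sentences in `build_filter_prompt`, which anyone can read and edit, unlike the weights of a recommender model.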
And by doing so, we often misunderstand them and throw the baby out with the bathwater, right? I think your analogy with the calculator is apt, right? Like, that isn't the way it ended up.
That's the way it started, right? And, frankly, they're all free now. My parents probably paid $200 back in 1970, which would be like $2,000 in today's dollars, to give me that little calculator. It was very, very limited in what it could do.
But because it was new, I thought it was the coolest thing in the world. And yet that isn't the way these things develop. Like, when we first developed movies, what did we do? We filmed plays. Yep. Because that's what we knew. So a lot of what we do is backward looking, what we're used to, what we're accustomed to. And it gets used in that way first, but then over time,
the real use cases develop. Now, I personally think that these cycles are speeding up, right? Yeah. Like, every vertical we have at OSV is already an AI-first vertical. Yeah. Because that is going to be the playbook of the future. Yeah. And people who, I mean,
The one thing that frustrates me is I'll have conversations with people who are really smart, right? And I'll be like, you know, that way of doing it is probably going to go away. What do you mean it's going to go away? It's worked for us for ages, you know, blah, blah, blah. And I went, yeah, but for ages, you did not have these incredibly powerful large language models and you really need to incorporate them sooner rather than later.
And it seems to me that the most kind of mundane businesses are like, yeah, hell yeah. Like I don't have to cut and paste, cut and paste, cut and paste. I love it. But the creative arts are the ones where I see a lot of resistance and or skepticism happening.
And, you know, like I said, I steelman everything, and I have seen a lot of good examples there. One that I would come up with when I first started: I was involved with a company doing generative AI art. And, you know, many, many artists were very, very unhappy about that. Yeah. And I would ask a question like, well, how did you learn how to paint?
Right. And almost all of them said, well, you know, I took a class, or I went to the museum and I copied the masters. I went, well, that's kind of how they train these models as well. And so I definitely understand the shock of the new. But, like, I don't know. What do you think? Do you think it's just going to be time?
That sorts this all out? Or are there going to be persistent problems, as you alluded to earlier? I think time will sort a lot of it out. I really do, because I think we're in a period where it's hard to imagine how it's going to resolve itself. But throughout history there's a very clear pattern: when something becomes cheap and abundant, there's a new frontier of scarcity that people shift to. And like, okay,
has it happened at this big a scale, this quickly, whatever, in the past? In some ways it's not happening that quickly, you know. Like, it takes a really long time for institutions to metabolize this kind of new technology. I really liked an interview between
Tyler Cowen and Dwarkesh Patel. Tyler Cowen was talking about how there are just so many little practical steps of friction: some committee has to wrap their head around it to get it adopted in this one place, and if nobody in an industry has adopted it yet, nobody else feels they have to until someone does. These things take time. Eventually they do play out, but they take longer than a lot of technologists might intuitively feel. I think there are a lot of people right now in tech who kind of feel like, oh man,
am I too late for AI? Or if they don't feel that way now, they'll start to feel that way in like a year, right? It'll happen soon. And the answer is no. This is like the internet in '95, '96, maybe '97, but we're not at '99 yet, I don't think. No. And when we're there, yeah, maybe there will be a crash. But still, look at all the value that was created by the internet after that: Google, Facebook, Netflix, Amazon, all those types of big companies.
It takes time. It takes a really long time for the implications of the thing to work their way into the economy.
It was so funny to me how people were like, "Where are all the GPT wrappers?" It's like, "Give them a year or two." I really think massive businesses will be created on this, and we'll look back and say, "Oh wow." Now it's a little bit more consensus, because there are some real breakouts like Cursor or whatever. But Cursor had some unique properties about it. Programmers are a really ideal first market. The main competitor was open source, so they could just fork it. If Cursor had had to start from scratch,
it wouldn't have been adopted nearly as quickly, right? But in other industries it'll happen too, I believe. So anyway, it's kind of interesting: intuitively it feels like it should be faster to a lot of technologists, I think, and the reality of human behavior is that it's going to take longer. Yeah. And that reminds me: you were an intern for the Committee on Science and Technology
of the US House of Representatives in 2009. How long will it take, or will it ever happen? Will AI ever be able to unfuck government? My guess is, well, I would have to read a lot more to have much conviction about this, but here's my top-of-my-head guess. The classic problem with government, right, is that it's a monopoly. And usually objects at rest tend to stay at rest, and
competition is the motion-creating force. The only reason Apple's doing literally anything with AI is because they feel like they have to. And they're doing the bare minimum and kind of screwing it up. Apple is monopolist-ish, but imagine if they were on the level of monopoly of the US government.
So on the one hand, I'm not super... I'm a pretty progressive guy, but I also believe in markets, and I understand. You know what I mean? So I don't think the government has tons of pressure to dramatically revamp its basic operational processes and procedures. And in some ways, I'm happy about that. They should be sort of the last adopter for a lot of types of things. You don't want them taking risks. That's the other beautiful thing about markets:
one company can take a bold risk, and if it doesn't pan out, society's not screwed, because another company chose a different path. There's a flip side of that coin: maybe if something truly is a monopoly, it should be more conservative in the small-c sense of the word. But I do feel like, well, they use computers. At some point that was a big change. But I would have to study the history of how those things...
how new technologies permeate government, or whatever, to understand what I think. But my guess is, you know, you measure it in a decade or two rather than
a couple of years, like you do for the rest of the economy. You know what I mean? That would be my guess. Yeah. But historically, a lot of our technological and scientific breakthroughs have come from government participation. That's true. I think of DARPA and everything they funded, and obviously the classic, the Manhattan Project. Yeah. And I certainly agree with you about the idea that
maybe it should take longer, right, for the government, because, you know, it's people's lives we're dealing with. Right. Like, do we want AI auditors sealing our fates on, you know, our taxes or something? Not yet. That AI auditor agent probably isn't quite ready for prime time. Even though, if that was a decentralized thing, markets would certainly be playing with AI auditors right now. But then the other side of the case would be,
I suspect that people will be using large language models, for example, to take any bill that is promulgated, put it into the AI, find all the pork-barrel stuff they're doing for their constituents, and publicize it. And I do think that on the human level, right, people now have these incredibly powerful tools
that, you know... I mean, from what I've read, even the reps don't read the bills anymore. Oh, yeah. Right? I mean, they're so long. Exactly. It's kind of obvious. These are people who live and die by meetings and text messages. They're not going to read a thousand-page piece of legislation. You know what I mean? I kind of feel like they should, but it's kind of obvious that they don't. And it's child's play for a large language model to do. True. And so I suspect that you're going to see quite a bit more of this kind of
social auditing, so to speak. And it's going to come from both sides. That's why I steelman all my arguments on both sides, because you're a fool if you don't think that people are going to disagree with you, right? Totally. But I suspect that there will be a lot of pressure placed on government through these types of activities, and at some point they're going to be forced to either improve or deal with it, et cetera. So, you know, that brings me to another question I had for you. You've been at Substack, you've been at Every, Gimlet, now Lex. Is there any really strategic insight
that you only saw after you left, that for whatever reason you couldn't see from within the organization? You don't have to give names or anything, but I'm just interested in that, because I found that happened to me quite a bit. And by the way, it's a prompt that I use often: like, I'm leaving, and what do you think about X, Y, and Z? Will it happen or never happen? Yeah. That's interesting. You know, I haven't thought about that before. I'm going to let my mind process that for a second. Sure. Well, before I answer it, it reminds me of another thing: tricks to trigger, like, reframing your context to help unlock ideas in your mind. The obvious one is Andy Grove's "What if they fired us? What would we do?" kind of thing, which is maybe what you're referencing. Yeah. Another version of that, for financial decisions, is if you're holding something and you're thinking about how much to sell,
imagine you just had the cash: how much do I want to buy right now? Because what you're really trying to figure out is the ideal amount of this that I want to be holding. And so if you imagine you've already sold and now you have to buy in, it's just a really good way to... In the past, when I've had the opportunity to do some secondary, it's just a good question: how much of my net worth do I want to put into this right now versus...
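The reframing described here reduces to a one-line calculation. This is just an illustrative sketch with hypothetical numbers; `trade_from_ideal` is a made-up helper, not anything from Lex or OSV:

```python
# Sketch of the "imagine you already sold" reframing. Instead of asking
# "how much should I sell?", ask "if this were all cash, how much would
# I buy?" The trade is the gap between the two.

def trade_from_ideal(current_position: float, ideal_position: float) -> float:
    """Positive result = buy more; negative = sell down to the ideal."""
    return ideal_position - current_position

holding = 100_000             # value of the position you hold today
would_buy_from_cash = 40_000  # what you'd buy if you held only cash

print(trade_from_ideal(holding, would_buy_from_cash))  # -60000, i.e. sell 60k
```

The framing matters more than the arithmetic: anchoring on the ideal position sidesteps the endowment effect of anchoring on what you currently hold.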
You know, so anyway, that kind of thing helps a lot. But it's not exactly the same as what you asked, which is: if you're in the company, in the business, what kinds of things are hidden from you that maybe became obvious after you left?
To me, a lot of times what's happened is when I'm inside the company, I get excited about maybe some kind of specific idea. Like at Gimlet, there was this whole tech thing that I was really excited about that we were going to do. And I had some big ideas around it. And I don't necessarily think that those were wrong on the object level, but on the meta level of like, was Gimlet the right company to do it?
That wasn't obvious to me. And then maybe another version of that is at Substack, I was there very early. So we weren't really doing big advances or anything like that. My observation was like, listen, it's not about making the software platform 10% better. It's about attracting the kind of writers who were competing with New York publishers. And so what kind of deal is the New York publisher offering them versus our deal? If our deal is...
you can use our software, like, that's not interesting, right? Whereas a book deal is like, you get a team, you get an advance. And then they did it and it worked. But what I kind of realized when I left is, maybe they were right that it wasn't the right time for it. Like, I think I nailed the idea, but
Should we have done it right then? That would have been a bigger risk given the cash we had at the time. So I think there's some stuff about like, are we the right people to do this? Is this the right time to do this? Stuff like that's harder for me to see on the inside. And then when I take a step out, it's a little bit more like, okay, maybe on the object level that was interesting or you were right in some way, but there's a bigger picture way in which
you're actually wrong, is maybe something. Yeah. So there's a temporal aspect to it. Yeah, that you were right, but maybe too early, et cetera, right? Well, that kind of also brings me to: what idea
do you keep coming back to, even though you've tried hard to walk away from it, right? Like, I don't know whether you're a movie guy, but Brokeback Mountain, one of the great lines in that is, "I just can't quit you." Is there an idea that you keep trying to walk away from or improve or whatever, and it just keeps dragging you back? Hmm.
Well, there are some ideas that I have a lot of trouble letting go of, even though it's not like I keep doing different versions of the same thing. You know what I mean? Yeah. But one idea that has really stuck with me: my first company was called Hardbound, and the line you said earlier, about how when we get a new technology, the first thing we do is use it to sort of recreate the previous thing.
So like movies were initially just, we can film a play or an opera or a concert. That's the whole thing, right? And then it's like, oh no, actually movies are their own art form. You can do closeups. Cinematography is a whole thing. The way music works can be totally different. There's just so much about it. And then even the types of actors and the types of stories that make sense to tell turn out to be quite different.
And then you get like TV and you get, you know, now TikTok and stuff like that. Yeah. So like a big thing with me has been like, you know, articles, the basic format of an article that you read, a piece of text that you scroll down. I mean, it's basically a newspaper article or a magazine article or something. You know what I mean? It's like kind of the same. It's just,
print, but digitized. Yeah. You know? And like, what if they were more visual? What if they were more interactive? What if they were more social in some way? What if the actual piece of content itself took better advantage of the fact that you're on a screen, an internet-connected screen?
And the reason why it hasn't happened, I learned the hard way, is the economics of creation just don't support it. It's one level of cost for a person to write a thing in a text box and hit publish. And it's orders of magnitude more expensive to ask, okay, what are the visuals going to be? Are we building any interactions? Is someone writing code? All that stuff. With Hardbound, the idea was,
we would write what I'd call sort of book reviews. They're a little bit in book-summary territory, but with more added flavor and commentary from us. And the idea was we could latch on to stuff people already had demand for, because it's hard to create your own IP that people really want to read. And so we thought, okay, we can curate what types of books people are interested in and give them something that feels like an interesting short thing that gets you into the world of the book. It's not a substitute for the book, but you get a little extra something. And there are lots of illustrations and animation and all this stuff. And it was very cool. It was great. I think it was a beautiful format. If you could get the right content in front of the right person in this format, they were like, this is insane. Of course the future is going to be this.
And the future is still not this, because economically it was difficult to create them. Right. Not just difficult: costly. Right. It took time, it took money, it took various different types of skills. And so I do kind of wonder, that "I just can't quit you" aspect of it is, I mean, AI makes it so that if you want some interactive bit, you can vibe-code it pretty quickly. If you want a visual, that's pretty easy now too. Like,
yeah, what can we do with that? I still don't think it's probably going to be an amazing business. I don't know. I'm always very wary of anything that's trying to be a consumer reading app. Those are just hard, right? Even though I love some of them a lot as a personal user. Even the Instagram founders created Artifact and then, you know, shut it down. Those are just hard businesses, because it turns out, you know,
there's a limited market for reading-type experiences. Back to how different people use the internet in different ways and how it affects the trajectory of their lives. But yeah, that's one that I definitely can't quit: just thinking of, what if we more natively designed content experiences for screens rather than starting from this print paradigm?
Yeah, that's one of the things we are experimenting with at Infinite Books, for example. In the e-book version and the audiobook version, there are all sorts of things we can do to make it a more interactive experience. And because of the tools we now have available, it's getting a lot cheaper.
And one of the things, for example: with the audiobook of Two Thoughts, we have three interviews, with Morgan Housel, Rory Sutherland, and Anna Gát. And, you know, they're an hour each, and only the audiobook people get those three interviews.
But, you know, with the electronic version of books we're experimenting: maybe we should have the ability to update them. How about reader comments that people could see, all the comments or questions or whatever, kind of a crowdsourced aspect? Honestly, I don't know which one's going to win, but we are very open to trying them all and learning, right? Because I think failure is a ladder and not something to be feared; you end up asking better questions, and failure is information-rich, right? Yep. And I think it's really arrogant to say, you know, I know that this technology will be used this way. I mean, famous last words, right? Totally. And that's why I love markets.
Yep. But it also leads me to think about things like, for example, memes. I know we share the idea that memes are super-dense information packets, in my opinion. If you understand memes not in the modern sense, right, of the funny thing that you see on Twitter or social media, but think of them as these cultural transmissions that get
in your brain and you can't get them out of your brain, and the only way to do it is to share them with other people, right? Yeah. When you think about memes that way, St. Paul memed Christianity into existence. Totally. If you read about it, he was the first guy to get up there and say, hey, it doesn't matter where you were born. It doesn't matter what color you are, what sex you are, what your old gods were doing. That's all cool. You can now
worship this guy, who's the real God. Right. Yeah. And it wasn't him saying that to, you know, a dozen people in Galilee that made it spread like wildfire. It was that he was using language made of mind-worms, and the audience heard them and just could not help but share them. Right.
And so I think that people underestimate the power of memes. Having said all that, what current memes do you think are distorting the way, say, a founder thinks? Oh, so many. So I literally just published...
a thing on this a couple of days ago, and I was just kind of reflecting on it. I feel like over the past two to three years I've gotten so much more comfortable in my own skin as a founder, and I think the reason why is I shed some beliefs that weren't helping me. I shed some memes, right, that I picked up from my environment, and I just feel lighter now. I feel happier. I feel like the future is more open, with more possibilities.
And, you know, a couple of them. One: we have this idea of starting a business as, well, it should just be a project, and the world should pull the company out of you. The whole Mark Zuckerberg thing: Facebook was never intended to be a company. You shouldn't directly go after what you want to do. You should just do cool projects and allow the world to pull it out of you.
And I think that's totally wrong. I think if you look at the vast majority of businesses in human history, they were started by business people who wanted to start a business. Like, surprise, surprise. You know? And there's nothing wrong with wanting to get good at the craft of creating this economic entity that creates value in the world, and creating a team that gels and performs really highly and,
you know, makes an impact on the world. All that stuff is great. It's a fantastic thing to be excited about as a vocation, right? So, nothing wrong with that.
Another one: I kind of had a sort of Disney-princess romantic idea of a co-founder as someone who could, in a way, complete me, right? The idea of, oh, you know, Jobs and Woz, Larry and Sergey. There are all these iconic duos or whatever. And I realized those are great if you have them, if it's natural.
But if you have an idea you're excited about and there's no obvious person sitting right next to you who incepted the idea with you, it's so much better to just start going. And if you pick up a co-founder along the way because it's natural, great. If you don't, honestly, that's totally great too. And so I think I wasted a lot of time
having my first step be, well, I don't want to get too far with this idea, because what I really need to do is go co-founder dating and be open to working on different things, because what I need most is a great partner, and then everything else flows from that. And I think that's totally wrong. I think: just do the things you're excited about. Make as much progress as you can on your own, which, by the way, with AI is now a lot more progress than a couple of years ago.
And then, you know, you can accumulate resources and a team and everything else along the way to the extent that the thing works. And if it doesn't, then you start working on something different. Maybe then you have a natural co-founder for that. And so kind of,
instead of worrying about whether you have a co-founder, or thinking you need a co-founder, I think just go with the thing you're excited to do and see what the world pulls out of you as you go. And I think that's a really important one. There are a few others, but honestly the biggest one is just not having so much angst about whether I have a co-founder or not. Because, I realized, Lex is the first time I've been a solo founder.
And I'm really happy. It works really, really well. And I've enjoyed having co-founders too, but it's not like this prerequisite of success, right? And also, again, if you look at history, a lot of businesses had either a solo founder or a co-founding team where one of them was clearly
more emotionally invested or whatever, the clear leader. And so I think it's okay. The whole idea of, oh, it's just too lonely, or you can't handle the moral weight of the company on your own, I just think it's a load of crock, at least in my case. Maybe there are different personality types or whatever, but yeah. Yeah. That seems to also be a thing fairly specific to tech. Like, I've founded, what, four companies, and all of them I founded myself. But, you know, they weren't in tech. Well, one became tech.
OSAM ultimately became a tech company. But I agree completely. Now flip the question: as a connoisseur of memes, are there any memes that are under the radar right now but that you think might really catch fire soon? Ooh, good question.
Well, I mean, the obvious flip side of the one I was just talking about is the idea of solo founders. And do you know Julian Weisser? I do. Yeah. So he's got the whole solo-founder-club thing right now that I'm really excited about. And he's a good friend of mine who was the first investor in Lex. Yeah. So that's an obvious one. Solo founders, I think, are a meme whose time has come.
What else? I kind of wonder about this. I don't know if it's going to take off or not, but this is a meme, really a meta-meme, that I think is incredibly important, and I don't know why it's not more widely talked about or understood. We all learn about biological evolution in school.
Cultural evolution, to me, should be more fascinating to us as humans. What are the forces that produce our behavior? And if you think about the way that behavior works, it's like, okay, we have observations of the world. We see the prestigious people, the people who figured out something that works. We try and understand what ideas or what beliefs created that success. We just naturally adopt those things. We copy what they do.
And then if it works, we keep going with it, and we become the inspiration that transmits that belief to the next generation. If it doesn't, maybe we get disillusioned, we lose those ideas, and we create room for alternate ideas. And if you think of humans as this general-purpose unit, able to pick up lots of different behaviors flexibly, compared to other animals, whose behavior for the most part is programmed into their DNA as instinct, then,
as Joseph Henrich says, that's the secret of our success. It's this massive, fascinating thing. We spend so much time on Darwin and Mendel and all that stuff, but our behavior? What? It's crazy. I would think almost all of the social sciences, economics, anthropology, everything, should basically be refounded on this idea.
And it's just like, why aren't more people talking about that? It's always fascinating to me, because it's not some random pet theory. It's a whole branch, a big, important branch of science. There are smart people who are clued in on it, but I kind of wonder:
is it going to be a bigger thing, especially maybe, you know, with a better ability to measure, because behavior happens increasingly online? Maybe we have better ways to measure cultural transmission and evolution in different populations. Anyway, it's fascinating to me. Me too. I have long been on the soapbox about cumulative cultural evolution being a massively important thing to the world in which we live.
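The copy-the-successful loop Nathan sketches above, observe who's succeeding, adopt their behavior, transmit it onward, can be made concrete as a toy simulation. Everything here is hypothetical (the trait names and payoffs are invented for illustration), and real cultural-evolution models are far richer, but the core dynamic fits in a few lines:

```python
# Toy model of prestige-biased cultural transmission: each generation,
# everyone copies the trait of the highest-payoff agent, so a useful
# behavior spreads through the population with no genetic change.

def copy_the_successful(traits: list[str], payoff: dict[str, int]) -> list[str]:
    best = max(traits, key=lambda t: payoff[t])  # trait of the "prestigious" agent
    return [best for _ in traits]                # everyone adopts it

payoff = {"forage": 1, "cook": 3}                # hypothetical payoffs
population = ["forage", "forage", "cook", "forage"]

population = copy_the_successful(population, payoff)
print(population)  # ['cook', 'cook', 'cook', 'cook']
```

One generation of copying is enough here because everyone sees the whole population; with partial observation or imperfect copying, the spread takes longer, which is exactly the kind of dynamic better online behavioral data could let researchers measure.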
And, you know, I don't necessarily get yawns. I get people kind of nodding, saying, yeah, that does make sense. And I'm like, no, no, no, you really don't understand what I'm saying. If you can understand where culture is evolving earlier than some people, you're going to get a hell of a jumpstart. If you are one who likes to start companies, if you like to do that kind of stuff, you really need to understand how it happens.
And like The WEIRDest People in the World lays out, not only does it affect us, right? Culture is our operating system, so to speak. Right. And it keeps getting upgraded or downgraded. It goes both ways. Yeah.
You also need to understand that it affects our physical bodies, right? Like, highly literate people's brain structure is different. The material brain structure is different from those who are illiterate, right? So that is evolution happening to us, not within the Darwinian evolution cycle, but through innovations that humans created, like reading books, et cetera,
where, you know, the brain is incredibly flexible. And what did it do? It's like, hmm, I probably don't need to be as good at visual acuity as I used to be, and so that's the portion of the brain that literacy colonized. And there are so many fascinating things. I mean, this is all just stuff that I learned from reading Henrich, but our bodies are designed with a dependence on culture. Yes.
Like you talked about earlier: metabolizing food, cooking food externally. We wouldn't have lasted if we hadn't discovered fire, if we hadn't discovered some way to pre-metabolize our food. The whites of our eyes, the reason those are there is so that we can see what someone else is looking at. Because if I can observe you, master hunter, and where you're looking, then I'm much more easily able to pick up
the behaviors that cause you to succeed. Right. We're really bad at, I think, retaining water. There are a lot of other animals, like antelope, that are way more water-efficient.
The reason we don't have to be, and can perform in almost a higher gear, where we need more fuel more often but can perform better, is that our evolutionary environment assumed we'd be smart enough to find water more easily than the antelope will. So many things about our bodies are built with a dependence upon intelligence, or culture more specifically. Yeah, no, it's just fascinating to me. I just feel like if you went to
most people and said, hey, there's this whole science of the things that determine human behavior, why is that not a bigger deal? I don't know. Honestly, I think the reason why is probably that it's very hard to measure and run experiments on. It's this grand theory that works and makes sense, but I don't think there have been a lot of practical ways to say, okay, let's run an experiment, because, well, it's a culture, it's out there in the world. It's kind of like economics in that way. But I don't know,
I think there are probably ways to make some progress. Seems like it. I totally agree. I've been fascinated by it forever, and when I discovered The WEIRDest People in the World and all that literature, I was in heaven. But you're right. Because of an antiquated way of thinking,
and it is another one of my hobby horses, right? We have been primed, through almost all of human culture and all of human history, to believe that the environment we are in is ruled by scarcity and that competition is the way to go, for the most part.
That's evolving, right? Like the great quote: drop one naked human alone in a forest and you have provided an excellent meal for the animals of that forest.
Drop 100 naked humans in that forest and you have created the next apex predator. Yeah. Right? The history of humanity is the tension and coexistence of competition and cooperation. And if you look at the really massive inventions,
a lot of them came through cooperation and through learning, you know, mimetically, right? Yeah. It's like mirror neurons; again, if you want to bring in the biology, mirror neurons have a very specific purpose, right? Yeah. Much of what we learn, if you watch the way a child learns,
essentially it is through copying their siblings, their parents, et cetera. But it is also through massive experimentation, right? Like, why do you think kids put everything in their mouths? Because they're experimenting. Totally. Is this going to be good or bad? And so I am really interested in ways that we can encourage more of that.
Not less, right? And the idea of mashups: mashups are really cool, and they give birth to a lot of really new art forms, really new ideas, et cetera. One of the things that really guides me when I'm building a company is that I deeply believe in cognitive diversity. You know, not diversity by color or sex or sexual preference or any of that. I don't care about that.
What I do care about is cognitive diversity, right? Because like another great quote is, no matter how brilliant somebody is, no matter how creative, no matter how insightful, you cannot ask them to make a list of things that would never occur to them.
Right. And so that's why you've got to find that other person to whom those things do occur. But also, guess what? You can take AIs and program personalities into those AIs. And it's like a superpower.
I was going to say something about AI there, because I do feel like one of the best uses of AI is just to give you options that wouldn't have occurred to you. Yes. Whether really specific with writing or really general, right? With anything in life, AI as an option generator is so fascinating, and I think we could probably do a lot more to milk it. But it is interesting when you talk about cognitive diversity, because it's so true. Like Fred, who's our founding engineer, and I work really, really well together, because Fred is very like,
He's neat and tidy and he wants to do things the right way. And he wants to have a really high level of craft. And I'm much more like, I want to try a crazy new thing. And it's okay if it's a little bit rough and a little bit messy. We just work so well together because you need a little bit of both. If it was just me, Lex would be way too scattered and wouldn't be cohesive enough. And there'd be little...
rough edges everywhere. And if it was just Fred, maybe there'd be a little bit less of like the kind of raw energy or whatever. And it's not to say Fred doesn't have ideas. He has amazing ideas. And it's not to say I can't be tidy sometimes. I can, but it's about like relative levels of emphasis, right? And it just helps so much to have different
people with different inclinations around the table. Yeah. Totally. And it sounds like we're brothers from different mothers, because I am more like you. I'm like, no, let's just try this, let's try that. And I'm happy if it's messy and has some rough edges and everything else. Lucky for me, we've got a bunch of people who are like, Jim,
Please put down your pencil now and let's see whether we can actually make something of this. So what's next for Lex? We were talking before we started recording that you're kind of at version two. Talk a little bit about that. Well, it's kind of funny, because I actually was thinking about the cultural evolution angle on this. Like, what's one practical way that idea plays into my life? Well, here it is. With Lex, here's the way we conceptualize our current task.
It's to inflect the bottom of the S-curve of writers wrapping their heads around AI. Programmers are steep in the S, right? Writers, I think you and I both have the sense, are at the very bottom. It hasn't even really started. I agree. And so our problem is, how do we get writers to do this? And the cool thing about my experience is that, having worked at Substack, I saw a similar S-curve. When I was at Substack, we had conversations all the time about whether we had product-market fit.
And the advances helped a ton to kickstart it. But the real thing that had to happen was the cultural meme that it's acceptable and cool and kind of safe, but also risky in a good way, to pour your energy into Substack versus, you know, a New York publisher or whoever else. That was the thing they had to inflect. And now, my God, it's amazing, all the different writers that are on Substack, and the network effect of that meme, right?
There is a little bit of an actual network effect in Substack, the app and all that stuff. But I think the most important network effect is the brand meme: Substack is where a certain type of writer of a certain caliber goes. Anyone can build a Substack clone in terms of the mechanics: it's a text box, you can hit send, it delivers emails to people, it does payments, all that stuff. A lot of competitors. But nobody has the brand that Substack has. And it's a similar deal in tech with Cursor, and in design with Figma, right?
With Lex, the goal is to inflect that. And the most important thing we have to do is build an amazing product. On top of that, we have to create a cultural shift amongst writers. So maybe that's one way it shows up: if I didn't have this idea of cultural evolution in my head, I would be thinking, well...
do we really have product-market fit? Because I talk to writers and they're kind of skeptical about AI. I have a lot more confidence that we'll be able to solve that. I think I would have a false negative if I didn't understand that there's a cascade of credibility that has to happen, and it snowballs. When it starts to happen, it can happen really fast, but before then it feels kind of agonizing and slow. Another way it maybe helps is,
really spending time building relationships, finding the early beachheads, the people who are going to be champions, who have one foot in both worlds. Take a mutual friend of ours, David Perell. Yeah. He's someone I talk to a lot because he's got his feet in both worlds: he's fully bought in on AI and tech, and he's deeply connected to the tech community, but he's also an incredible writer who's devoted a lot of his career to the craft of writing. So he's a great person for me to pester with questions and hope that he...
You know, and when people like him send us feedback, taking it very, very seriously, right? And thinking about, you know, the whole idea of land and expand within a company, right? Yeah.
It also happens at the level of cultures: here's a community of people who practice a certain craft, and how they use the tool. Those early beachheads are critically important. If you just look at the numbers, it's hard to see, wow, something amazing happened this week. But if you understand the actual underlying process, you know what's really important and how to allocate your resources to focus on the things that matter.
And so I think maybe it's a way to help people understand their numbers a little better, and have a less naive mental model of startups than: it's an equation where you put a certain amount of money in for customer acquisition, you have a certain product and a certain onboarding experience, you get a certain retention and revenue out the other end, and let's just optimize that naively. Well, there's a bigger picture of all the other mechanics happening in society that cause your product
to have a certain CAC or whatever. These are all inputs to that number, even though they're harder to measure. Yeah. And you know, we had a similar thing with a portfolio company that we invested in that was for graphic artists. And one of the things that we found was the unlock there, because they were just as skeptical as writers, if not more so.
The unlock we got there was that we sent the program and the painting device to a graphic artist and said, hey, just feed your own work into the system and iterate with yourself. And he went from being deeply skeptical
to absolutely loving it. Oh, cool. And guess what? When you love something, what do you do? You tell all your friends about it. So are you seeing similar unlocks with Lex to get writers further up the S-curve? You know, that's so fascinating, because we have a thing that's coming out soon.
Now this makes me more optimistic about it. I was already optimistic about it. But yeah, training Lex on your style is something we can kind of do now, but it's a little bit buried. It's not super obvious, we don't really market it very well, and it's not in the onboarding.
It's about to become much more front and center. So that's great to hear that that was a huge unlock. This is why I love having these conversations: back to your point about cooperation, there are so many things we can learn, these specific things, like, hey, this worked here. And again, back to cultural evolution: let's try it. Yeah. So,
Yes, this is something that's in the works for us, and now I'm even more excited about it. We'll see. But I definitely think a big thing with writing is giving AI lots of examples of your previous writing, and maybe some other context and instructions. So that's coming soon. Very cool. We're going to be very excited to try that because, as you know, we are enthusiastically testing Lex with all of our writers.
One thing, for what it's worth, it might be valuable to you or not: I've been trying to train our in-house AI on my writing style. The first thing I did was put chapters from my various books in, and it was okay. It wasn't great.
Yeah. The next thing I did was put in less formal writing that I did. For example, Yahoo Finance. Isn't it amazing that Yahoo is still around? Just as an aside. But
Yahoo Finance had this thing about 10 or 15 years ago where they asked a bunch of market people to do market commentary for them. And I said, yeah, sure, I'll do that. It was much less formal writing, a bit like the blog posts I used to do for my asset management companies. So then I put that in there with the books, and it got better. Okay, got better. But here's the unlock.
I started writing letters to my children. My son, Patrick, today's his birthday, by the way. Oh, I've written to him. And, you know, it's weird having a child turn 40 years old, because you're like, damn, I'm getting fucking old here. Anyway, I thought... Not in spirit. Thank you. I thought it'd be nice to write the kids letters over a long period of time and then give them to them on their 21st birthdays.
So I started that when he was four days old, six days old, and I finished it when my youngest daughter turned 21. Over 30 years. I took the letters and put them in. Bingo. Wow. All of a sudden, it was really good at mimicking my writing style.
So just a hint. Fascinating. Did you have it generate a lot of instructions, like, here are my patterns, here's my style, or was it just the examples? Yeah, it was just the examples. And then I would ask it to write something about a topic that I'm interested in, but to do it in my style. Then, after I got it to the point where I was pretty happy
with the way it was able to mimic my writing style, I did all of the: please analyze this writing style. What sets it apart? What makes it worse than it could be? What would you suggest to make it better? And that all worked out really well afterwards. Fascinating. Okay. Well, I'm excited to try that. Yeah. And it's cool. And, you know, it wasn't something like,
The thought just occurred to me. It was like, you know what? I wonder about letters, things I wrote that were never meant for wide publication. Right. And that got me on this, because we're creating little bots of our favorite authors, for example. Not for any commercial use, just because we love doing it.
So one of the things we're trying right now is authors for whom there is a huge body of private correspondence, and we're doing that test to see how it works out. But honestly, that's just more for the fun of experimentation, and for what we can learn about how to make these models perform better and better. Well, I'm getting my first... we joked about this. Yeah.
My producers are basically like, Jim, you are such a fucking gasbag and you keep going on.
And you've got to remember, you know, your guest has a life, and it's not going to be spent just talking to you for the entire afternoon. So they've sent me the hook on my iPhone, and what I've taken to doing is turning it upside down and turning the ringer off so they can't ping me. Yeah. But I noticed it just there. Yeah.
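The style-training workflow Jim just described (feed in samples of your own writing, ask the model to write a new piece in that style, then ask it to analyze the style) can be sketched as a pair of prompt builders. This is a minimal illustration in Python; the function names and prompt wording are my own assumptions for the sketch, not anything from Lex or Jim's actual setup.

```python
def build_style_prompt(samples, topic):
    """Assemble a few-shot prompt asking a model to write about
    `topic` in the style demonstrated by `samples` (e.g. letters,
    blog posts, and other informal writing)."""
    numbered = "\n\n".join(
        f"--- Example {i + 1} ---\n{text.strip()}"
        for i, text in enumerate(samples)
    )
    return (
        "Here are samples of my writing:\n\n"
        f"{numbered}\n\n"
        f"Write a short piece about {topic} in the same style."
    )


def build_analysis_prompt(samples):
    """The follow-up step Jim mentions: once the mimicry is good,
    ask the model what characterizes the style and how to improve it."""
    joined = "\n\n".join(s.strip() for s in samples)
    return (
        "Analyze the writing style of the following samples. "
        "What sets it apart? What would you suggest to improve it?\n\n"
        + joined
    )
```

Either prompt would then be sent to whatever model you use; the point Jim makes is that the choice of samples (private, informal writing over polished book chapters) matters more than the prompt scaffolding itself.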
Another thing I always wondered: I think you agree with me that writing is thinking, right? Absolutely. Yeah. What would you say was the greatest insight that hit you when you were writing about something, where you were like, holy shit, I didn't even know I thought that? Oh man, I don't know if there's one that comes to mind. There are a lot of little ones. To me, it's more just like,
every single time I sit down to write something that I think is pretty straightforward, like, oh, this is a slam dunk, I'll just bang this out real quick, there are like 10 things. And so it's hard to remember any one of them, because there are so many. But I'm thoroughly off the mark in some ways from what I thought I would be, and it always ends up feeling so much better once I've actually laid it all out. I think the big...
naive conception that a lot of people have about writing is that it's a way to communicate our thoughts. I think writing is a way to form our thoughts. It's a way to design them. Right. And now, because my expectation is that that's the case, it's hard for me to remember one, because it doesn't surprise me anymore. It's just, well, of course I have no idea what I think until I write. I can regurgitate some stuff, but... And I think talking also helps. I for sure learn things when I talk.
But it's a little more slippery than writing. With writing, it's a lot easier to really pin it down: oh, this is the bit where there's a wobbly assumption, or this other thing occurs to me that wouldn't have occurred to me otherwise, because I see it in front of me, fixed on the page. So man, I wish I had one great one for you.
I mean, maybe one is just the Disney princess co-founder thing. That's kind of recent. I like that one a lot, actually. The idea of a Disney princess, like, I need another person to complete me on some identity or emotional level, didn't occur to me until I started to write about it. And I was kind of like, oh yeah, what's the essence of the thing I thought? It's not just that I was optimizing for probability of success or whatever. There was a deeper aspect to it.
And I realized that was it: there's this kind of fairy tale of co-founders that can be true, in the same way that there are fairy tales about princesses. Couples can be happily married; it's a huge, important part of life. But it's also not fully reality. Everyone who's in a long-term relationship knows it's not a Disney movie.
And there's a similar thing about co-founders. So connecting those dots is something that probably wouldn't have happened without writing. I don't know if it's the biggest one, but it's a recent good one. Yeah, I like it. And I think you're right. The other thing I think is great about writing, just to add to what you said, because I agree that's how you discover what you think, not how you convey what you think. It ultimately ends up being conveyance. Yeah.
But the idea is that we have a lot of thoughts that are just banging around in our minds. And I have this weird idea. I love quantum physics, right? And so it's like our ideas that are still in our minds exist in a probability state. If you don't observe them through the act of writing, you don't collapse that probability wave, and they just go away.
They fly away. If you want to really see them, examine them, question them, and look at them from both sides, you've got to observe them and put them out into the real world. And the way you do that, in my opinion, is by writing them out.
So I think things like Lex are all to the good, just to help everybody. Even if you're not a writer, I would recommend playing with Lex, because it's a great way to deal with things you're struggling with, right? Absolutely. Because, as you made the point about marriage, the story is often idealized and the world is messy.
And, you know, stories are often... your metaphor of the Disney business is perfect, right? Like, this is really the way the world works. Yeah, there's some truth to it, but it's not the whole truth. Exactly, exactly. Well, this has been absolutely delightful for me. I am very enthusiastic about what you're doing with Lex. We will give you the good, bad, and ugly feedback from our team of writers. I'm very fortunate
to have a pretty incredible team, really incredible team of writers and editors. So you should see when we put them in shout-offs. It's very interesting. I'm joking. I'm joking. But if you've listened to the podcast in the past, you know that my final question is always not a question. It is a mind experiment, and that is this.
We're going to make you, for one day, the emperor of the world. As emperor of the world, you cannot kill anyone. You cannot put anyone in a re-education camp. In fact, you cannot compel, at the barrel of a gun, anyone to do anything. But what you can do is you can whisper two things into a magic microphone that will incept the entire population of the world together.
They're going to wake up whenever their next morning is, and they're going to say the two things that you incept. They're going to think they thought of them, and they're going to say to their significant other, you know what? I just had the two greatest ideas. And unlike all the other times when I didn't follow up on them and really try to habitualize and internalize them, these two things I'm going to do that with. What are you going to incept in the world?
The main thing is allow yourself to do the thing you've been wanting to do. I think our bodies contain an incredible amount of wisdom and just the idea of motivation. Like what are we motivated to do? And almost every time in my life when I've gone with that feeling, it's led to great things. And sometimes it fails, but like the process of going through that failure is like so important.
And I think there's kind of like, you know, two types of learning. There's learning by doing and there's learning by like kind of doing what you're told, you know? And I think this world could stand for a lot more learning by doing and like, let's set aside our expectations and our preconceptions and conventional wisdom and whatever else. Just tap into your own excitement, curiosity, motivation, and like go with that. And of course it's a dial, right? If everyone did that all the time, 100% of the time, like it'd be a little bit too chaotic, right?
But I think the dial could be nudged in that direction, and the world would be a net better place. And in a way, that's one of the most exciting things about AI, because I honestly just think it reduces the barrier. The coolest conversation I've had yet with someone about Lex: there's a reporter at a big-name publication that everyone's heard of who was like, oh, I might actually be able to do this book I've been thinking about.
It's like, yes, fuck yeah, do that. That's amazing. Because the idea is reducing the barrier to getting started and going with it. And this is the kind of person who's not just going to say, generate a novel about XYZ. No, of course not. But the whole idea that it's wind in your sails, right? To do the thing that you want to do. I just think that's going to make the world
a better place, a more beautiful place. Just do the thing; allow yourself to do it, because it's the thing we already kind of want to do, we just don't allow ourselves. Right. That's the point. So allow yourself to do that thing you've wanted to do. That's the number one thing. And then maybe the other thing, because there are supposed to be two, right, is to understand that everybody else has that thing too, and they're not really that different from you. Right. You know, we live in a time where there's so much
We don't see other people as people. It's kind of like PVP on the internet or whatever of just people being assholes to each other. And there's a way in which it's fun. People like to fight. People like to feel self-righteous. They like to feel like they have some solid sense of right and wrong. And the ambiguity is kind of hard to tolerate. But the fact is there is ambiguity. And...
Maybe this is another place where cultural evolution helps me have an intuitive grasp of this. If you grew up where they grew up and had the experiences they had, hate to break it to you, you'd be the way they are. You know? You just would. And so that's tough, right? I
live in LA. I am from Texas, and a lot of my family's from Texas. So I think I have a foot on different sides of the cultural divide or whatever. And this is the most... it's not a new idea, but if I could incept it in people, I really would: just tolerance, a little bit of empathy. It doesn't mean you have to back down from your principles. It doesn't mean
there's no such thing as right or wrong or whatever. But it does mean, let's see each other as people. And I do get worried about the level of division and polarization in the world now. It does feel qualitatively different from when I was a kid, you know?
And when I talk to people who have been around longer than me, they usually say something similar. So I don't know where it heads, but it doesn't feel like we're in an amazing spot right now. So if I can incept people with that idea too, of just giving each other a little grace, that would be the other thing. Yeah, I love both of those. And actually, that's an idea we have for when VR gets a little cheaper and better. Wouldn't it be cool if you could put on a VR headset and literally experience another person's
view of the world? Yeah. I think it would help with compassion, with acceptance, with allowing for grace. Because what you said, hey man, if you grew up where they grew up, in the culture they grew up in, you would be saying exactly the same things, is really hard for people to get just intellectually. Yeah.
But putting them in that world for a day or whatever, I think that's an experience that might really change people and get them to understand: oh man, okay, maybe I really do need to focus more on the fact that, for the most part, we have more in common than what divides us.
And focusing on what divides us is just suboptimal in almost every way. So I enthusiastically endorse both of your inceptions. Well, Nathan, thank you so much for being on the pod, and I can't wait to watch all the cool things you do with Lex. Well, thank you. And likewise with everything you're working on. It's amazing to be here. Thank you so much for having me. Thank you.