Intelligence yields power. Intelligence correlates with power. We have a little bit of an intelligence advantage over other primates, you know, other animals, and we play God over all of them. So I'm like, you know, we're very used to being the king of the intelligence castle on Earth. That's the only thing we've ever been. That could be changing, like, next decade. What?
My friend Tim is one of my favorite writers and thinkers. I've been reading his work for over 10 years, and he's followed by millions of people online. He's not a great artist, but he does these amazing illustrations that teach us these concepts, and he explains them in ways that really advance how all of us think about the world. He's someone who's followed closely by Elon Musk and by a lot of my smartest friends.
Tim's written a lot about AI. He's written about the Fermi paradox, about the great challenges humanity faces now and in the future, and how we can think about them. Tim wrote an amazing book called What's Our Problem? Whether you're on the right or the left, I think it's inspiring for all of us, to understand human nature and to try to act out of the greater, higher parts of our minds. Tim's probably one of the most important public intellectuals on this topic, and it's always really fun to learn from him about it.
I'm Joe Lonsdale. Welcome to American Optimist. We have my friend Tim Urban with us here today. Hey, thanks for having me. Thanks, Tim. It's great to have you here. I think I must have read your work at least 10 years before I got to know you in person. So yeah, awesome to have you. Yeah. And you're kind of a major social hub of Austin, so it's been fun getting to know a bunch of your world here. I'm sad because Elon's not here as much anymore, since he's in the White House. Although I'm not sad about that in particular, but there's a lot of good people here. He'll be back.
Yeah, he'll be back. So you're the writer and illustrator of the popular Wait But Why blog. You're the author of What's Our Problem?, which is this awesome book you've written about what's going on in our society. And it's not really political, but it's sort of political, I guess you'd call it. I think it's one of the most important books I've read in the last few years.
Let's just start off a little bit of the history and things you've talked about in the past. I think it's fair to say you're one of the most forward-thinking kind of bloggers and thinkers online today. You went to Harvard and you studied government. How did you get into blogging? You know, I was running a small business, like an ed tech company with my friend Andrew Finn. And I was blogging on the side as kind of just a hobby, like five hours a week. But
I had liked that more than I liked running the business. So at one point, we just decided we don't need both of us here. And I'm going to go and see, like, what if I put 60 hours a week instead of five hours a week into blogging? So I started this new blog, Wait But Why. And... What year was this? This was 2013. I mean, I guess the thesis was...
You know, people said blogging's dead. That was a common thing to say back then, and the internet was just flooded with content; you had BuzzFeed producing a trillion articles a week. But the thesis we had was that there's actually not very much really high quality stuff. There's a lot of B minus content, C plus content out there. And if you can just work really hard and put out, you know, A level content, people will notice. That was the bet.
And I think a lot of us know, I think all my friends are reading your stuff. Like you had a few that was super viral. What was the first one or two that went super viral? The first one that went really viral was actually like, basically it was an article about why millennials were so unhappy as people. And that, you know, they had some like awful perfect storm of,
They are super ambitious and told that they're going to change the world, and they're also told that they specifically are special, but every single one of them thinks that. And then they... Fortunately, you and I are special. We actually are. And I actually call everyone out on that. In the post, I say, I bet the readers right now are thinking, oh, good point, but I actually am special, and this is the problem. But then also, they look on, like, Facebook, which was a new thing then, and they see all their friends
kind of showing better versions of their life than they really have. So everyone thinks everyone's doing better than they are. Anyway, that blew up because it hit the right thing at the right moment. I've learned with blogging, it's not the post itself as much; if I had posted that a year later, it might not have done anything. It was just the right exact week to post it, by luck. There's some kind of thing going on in media and culture where you have to catch the wave. Exactly. And it's just the right thing. That's why, if you want to build an audience, the way to do blogging is to take a lot of swings.
One of my favorite early ones was the Fermi Paradox. That was relatively early then? Yeah, that was early too; it was in the first year. And at the time, actually, I felt like I was venturing out into what I really, really wanted to write about, as opposed to trying to go viral. I actually didn't even consider that that post had a chance to go viral, because I felt like it was just kind of for nerds.
It's super interesting to think about. I think about these things a lot, but I hadn't spelled it out as well as you have, which was really cool. You took a lot of our views and pushed them further. Let's talk about this a bit. So the Fermi paradox is: why haven't we encountered tons of other intelligent species out there? And I guess the idea is, how many stars are in the galaxy? Like, between 100 and 400 billion. And then there are billions of galaxies. Yeah.
Like, more galaxies than there are stars in our galaxy; we've found almost a trillion galaxies. This is absolutely crazy. It's an insane number of chances. It's a wild number. This is where what's called the Great Filter comes in: what's the hard thing for intelligent life? And I originally thought, oh, maybe it's just that single-cell life rarely develops; that seems really, really weird, for things to turn into that. But then it turns out it's actually maybe not that hard for self-replicating RNA and single cells to develop, right? We don't know. I mean, there are two kind of camps, you know, because there are a million theories. Yeah. But you can group them into two camps. One camp kind of fundamentally believes, of course, there's lots of life out there.
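The scale being gestured at here can be sketched with a rough back-of-envelope calculation. Both inputs below are round astronomical estimates I'm assuming for illustration, not figures from the conversation:

```python
# Back-of-envelope: how many "chances" for life the universe offers.
# Both inputs are rough, assumed astronomical estimates.
stars_per_galaxy = 200e9   # Milky Way estimates run roughly 100-400 billion
galaxies = 1e12            # observable-universe counts run to a trillion or more

total_stars = stars_per_galaxy * galaxies
print(f"~{total_stars:.0e} stars in the observable universe")  # ~2e+23
```

Even if only a vanishingly small fraction of those stars host life, the raw number of chances is what makes the silence puzzling.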
Like, we're not special; that's the most naive human thing to think, that we're the only ones. So they think that, and they come up with all these other reasons we might not have seen anyone. We might be in a zoo, or they're protecting us. Exactly. Or we're looking for the wrong signals. It's like we're in a building today with a walkie-talkie, and no one's here, and no one's coming through on it because no one's on one. Or anyone who speaks up gets destroyed by a predator species. Exactly, which is terrifying. Yeah. Or it would be very easy for them to conceal themselves from us if they wanted to; it would be as easy as it would be for us to conceal ourselves from birds in a zoo. Yeah, when they're smarter than us, they can just fool us. They're just living on a different plane, potentially. Yes. Now, the other group says, actually, you know,
even if most of them tried to conceal themselves from us, a species of a certain level of advancement would, in the blink of an eye, universe-wise, start to colonize so many stars and wrap Dyson spheres around them that it would be obvious. It would look different. Life makes things; think about humans. When you look down at the planet,
the planet looks different than it did 50,000 years ago, because there's, like, radiation coming out, everything's different. Exactly. Yeah. We would notice something. And the fact that we see nothing, they say, actually points to us being alone. So I don't know who's right, but the people who say we're alone, they say that there must be a great filter: either the origin of life is super rare, or maybe prokaryotes, little bacteria, are everywhere,
but going to the eukaryote, this more complex cell, is maybe the great filter; maybe that's really rare. Or maybe there's a lot of that, but it's actually very rare for something to get to human-level intelligence; for some reason, it's just very, very uncommon. It took 600 million years to happen here; maybe it's very rare. Or maybe there are a lot of civilizations like us, and we're not doing that much yet, and then AI kills everyone. That's the scary great filter, the one that's ahead of us. Yeah, the scariest answer is that the great filter is ahead of us. And of course this ties into AI and into SpaceX and all these things; they're all kind of related, right? Because, you know, it's interesting to me. I agree, I'm so inspired by SpaceX; I think you're right, this is an inspiring, positive thing for the world, so I'm a huge fan of what they're doing, just for that sake and what it does to us. And Elon talks a lot about being a multi-planetary species.
I think what he's doing is really good, so I don't want to criticize. But to me, that being the great filter seems unlikely. Although I guess it is possible that you just blow up your whole planet, or destroy a planet; there could be some technology that just makes the planet unlivable, I guess. Yeah, I mean, we're building such insane technology in the future that, you know, even if just AI manufactures the worst pandemic that you can ever imagine, we have no chance.
But maybe if we're on two planets, maybe it's somehow harder. I mean, Elon calls it life insurance for the species. And I think Stephen Hawking said that the dinosaurs went extinct because they didn't have a good enough space program. So there's definitely something there that's positive. Think about it: if there's a solar system far away that we're observing, and you're rooting for them to go extinct,
what you don't want is for them to start spreading out. Oh no, now they're on three planets and seven space habitats, and they're on moons; now they're probably going to stick around for a long time. So we have all our eggs on one planet right now, basically. That's fair. So it seems like a universally good thing. Although if the great filter is, like, AI or something, it probably could hunt us down on multiple planets. Yeah. It's a scary thing to talk about, but you have to kind of understand it. True. If it's something like really bad AI that wanted to wipe us out, it doesn't seem like being on multiple planets in the solar system would really help.
Maybe if you're in different solar systems, and now there's a big speed-of-light distance between you. But good overall, though, to be on multiple planets; definitely an insurance premium. But at the same time, there are lots of scary filters that could be coming up, and we just don't know. I mean, that's what's wild about this time. Technology is power, right? I mean, technology is power: your footprint gets bigger, you're just able to manipulate the environment more and more, and technology is exploding. So so is our power, you know, god-like power. Yeah, I'm still very nervous about the Fermi paradox myself. It's one of the great filters, but let's talk a bit about that. You said in 2015 that it hit you pretty quickly that the world of AI is not just an important topic, but the most important topic. And you wrote
an amazing piece on AI that everyone should read on the Wait But Why blog. But let's talk about it a little bit. You were obviously prophetic, because AI hadn't really been in the consciousness as much in 2015; a bunch of us were trying to invest in it, trying to figure it out, but it didn't really explode as something that was working until the last few years. So what did you see that led you to this conclusion? I think that, actually, Nick Bostrom and Ray Kurzweil and some others, they're the prophets, because
all I did was read some of their books. Basically, my gauge for whether something's a good thing to write about is just myself: do I find it interesting? I assume that my audience is a big group of me, because they're close enough that usually that's a good gauge. So...
When I started hearing about AI in 2015, that was pretty early. I think people forget how recently it still seemed like just a sci-fi concept; AI seemed like something that's in sci-fi movies only. And in 2015, that started to change. It was actually 2014 when I first started hearing about it; I wrote about it at the very beginning of 2015.
And I started to hear real, serious people talking about it. I said, what's going on? So I read Nick Bostrom's book. I read The Singularity Is Near by Kurzweil. Nick Bostrom's book is Superintelligence. I mean, these books really, I think, were way ahead of their time. And my eyes went wide. And so I said, holy shit, you know, and if I feel this way,
I'm sure my readers are also kind of hearing about it, wondering what's going on, and they're going to have their minds blown. So I decided to basically take everything; I then read a bunch more things, and I just tried to synthesize: what's the story here? And my conclusion was that this is the most important possible thing, because, like we were just saying, intelligence...
It yields power. Intelligence translates, you know, correlates with power. Oh, so we have just a, you know, we have a little bit of an intelligence advantage over other primates, you know, other animals. And we play God over all of them. So I'm like, you know, we're very used to being the king of the intelligence castle on Earth. That's the only thing we've ever been. That could be changing, like, forever.
Next decade. What? Like, in our lifetimes. And so I just thought, I can't believe everyone's not talking about this. And so then I wrote about it. And you knew that might be changing because these other guys' books kind of convinced you it was coming? Yeah. I just started to understand what was going on, the concept that real intelligence can exist in a computer,
and that it's getting rapidly smarter as Moore's law continues and there's more processing power, and as we come up with new, different architectures. At the time, you know, the thing we do today, neural nets and deep learning, that was just one of many potential ideas. I don't think it was so obvious it was going to be the thing. No, until they came up with the transformer architecture in 2017, it didn't make sense; it was one potentially interesting pathway. Um,
but the thing that these books made me understand is that if AI can start moving up in intelligence, it's not going to stop at the human intelligence station. It's going to race right by us, because of all the limitations our intelligence has. There's the size of our skull: the brain can't get much bigger, and even if it could, neurons don't communicate quickly enough for it to function as one system if it were too much bigger. But
AI brains work at the speed of light. They could be huge and keep growing. You know, AI is editable, upgradable. All the AI in the world can share knowledge with each other and be this collective intelligence. Suddenly, it's going to pass us. And when it passes us,
it's not just going to be, you know... So this is the core concept, because people think, okay, well, chess AI has been beating humans for 30, 40 years, so AI is already smarter than us. And that's when you get to this key concept that I first understood in 2015, which is that there are two axes of intelligence. You can think of the vertical axis as magnitude. So that's, like, the chess AI is higher than us at chess; it crushes us. Yeah.
And then the other axis we can call like breadth. So it's like narrow intelligence versus general versus super intelligence. So when people talk about narrow versus general, they're talking about what I'm calling the X axis, the breadth. And this is the thing that we're...
unmatched at, including today, still. So AI is amazing at narrow skills. When you give AI a narrow skill, chess, or map directions, or language, a hundred things, it just quickly becomes better than us at it, you know, because again, it doesn't have these limitations.
And animals, you know, elephants can, I think, smell water from 400 miles away. You know, other animals use echolocation or, you know, the dog's ability to kind of have a whole landscape of smell. Other animals beat us at certain narrow skills, but none of them can match our breadth. So this is called general intelligence, which is this idea that we can be good at anything. We can learn any skill. A dog can't learn any skill.
And at the moment, neither can AI. You know, so LLMs today are more general than they used to be. LLMs can do a wide variety of things, and it's really kind of impressive. But they're more statistical engines than reasoning engines, right? Yeah. And here's a litmus test; I forget who I was talking to, but someone brought this up, which is:
an LLM still can't run a gas station, just because it involves all of these things that, for us, with general intelligence, are no big deal. For an AI, it's just not there. So we're still the king of the general intelligence castle, for now. One thing here is that I want to push back on the super-exponentials. My general view is that this is extremely important, but I think the world works in these S-curves, so a lot of times things can sit on an S-curve for quite a long time. And one of the interesting things, when you try to do things we can't do with AI and LLMs, our reasoning engines right now, is that you don't know what questions to ask to get the job done.
There's a space you can call question space. And one of the hypotheses is that the neural net for question space is a lot harder to build and define, because there's only so much feedback you can get from reality to make question space better. Because think about it: if you want AI to do something, you have to know all the questions to ask to get it to do that. So if you're thinking in question space, maybe there's only so much data and input you can have, so you can only have a certain size neural net that does question space and gets it right. And one of the hypotheses is that human brains actually grew to the optimal size, and no bigger, for how much question-space data there is to iterate on, and therefore there may be natural limits on certain types of intelligence if there's not more data to iterate on. This is all totally unprecedented, so we literally don't know if we're about to hit a big wall, or if the ceiling's way, way higher. But what's interesting is that AI,
really until pretty recently, was mostly symbolic AI. It was specifically programmed logical steps that the programmer, the developer, wrote, and the machine executes them. So the programmer can tell you exactly why the AI is doing what it's doing and how it works inside. This is a totally different thing. People say with deep learning networks, we don't build them... No, we're growing giant brains. We grow them. So you basically build a nursery, and this thing grows out of it, and then we say, okay, who are you? What can you do? Are you conscious? Can you play chess? They didn't know that GPT-4 could play chess until
like a year after it came out, and then people realized, oh, it's good at chess. - So it's been a decade since you wrote this, and obviously a lot's happened. I think a few years ago you said in a Reddit AMA that you were in favor of pausing AI research, 'cause it's kind of scary and we don't know what we're growing here. Do you still feel that way? How do you think about this? - I mean, we're in a quandary here, because we have to ask: what is the scarier prospect? An arms race that throws caution to the wind and tries to accelerate the process of building these superhuman brains,
Or slowing down and risking that China gets ahead of us and now has a massive technological advantage that might be permanent. Those are both terrifying, I would say, to an American, or they should be. And the question is, what's scarier? What's the bigger risk?
And so you can try to find that middle ground and say, listen, we should try to stay ahead, but go cautious while staying ahead. But now, you know, China has shown that maybe it's going to be harder to stay ahead of them than we thought. Maybe this limit on chips is not going to... Well, they're good at stealing stuff and then iterating on it. Right. So...
I honestly wish I had a good answer here. I mean, what you would want is everyone to become scared enough of this thing; people should feel like it's Game of Thrones, where the White Walkers are coming. And that's how you should think about it. Now, it might not be; I think there's a good chance AI is great and doesn't hurt us. The point is, the downside is what we have to worry about, because it could be bad, and if it's bad, it's going to be massively powerful. So we should think about it as, you know,
I would hope that China and the U.S. could be grown up enough to say, in this moment, let's not create a giant existential risk for the whole planet by racing. We have to come up with something. Now, maybe that's naive and maybe it'll never happen; I just don't know. The problem is, any time they try to put some rules in place, you know, they're going to be gaming it to try to trick us, because there's no trust, for good reason, because they have stolen a bunch of stuff.
So with nukes, you know, we did bring down the number of nukes, right? But with nukes, we all kind of understood how they worked and where they were going. This is a very different thing. With nukes you can understand, okay, they're going to get bigger, maybe; the Tsar Bomba is just a much bigger, scary thing, and that's kind of crazy. But I don't think it's analogous to this, not knowing where this thing goes and what we can put in place to have an advantage or not with it, you know? It also still required a human to set off the nuke, and a
massive industrial complex, while you could have, like, 10 guys doing an AI. Right, right. And it's like, if the nuke itself could wake up and start a nuclear war, if the nukes themselves could start fighting each other without our permission or consent, without our control. That's what we're dealing with. So you really think that's the biggest threat? Because for me, I still think it's people using AI that's going to cause the problems for the next decade. But you're worried about the AI itself at some point? I'm worried about both. I mean,
I think that in the near future, of course, you're going to have a higher risk of bad guys using AI to do things like manufacturing scary bioweapons, things that could really kill a billion people. There are obviously the smaller concerns too, where you can start creating deepfakes and all of that; that's a big concern for civilization as well. But I'm thinking bioweapons, or autonomous,
you know, swarms of drones that can go into a city and target people specifically. But I think as time goes on... it's just totally unprecedented. We're literally like monkeys building humans. I mean, we should be scared of that. It could turn out well, but you kind of have to get lucky. And there are a lot of people who think we just don't know how to do this; we don't know how to align this thing before it becomes too powerful to change. The bioweapon one is probably the one that scares me the most. Not to talk about horrible things, but, you know,
I do a lot in biotech as well, and we have all these new machines that are able to construct and build, you know, chains of things, and build viruses and stuff, like they did in China. Maybe we should be regulating those. I'm, like, so against regulation for most AI stuff, 'cause I don't think these random people paid a hundred K are going to be able to save the world from the AI; I think regulation is just going to harass us and break things. But maybe there should be some rules around certain things in bio or something. Well, so actually, you know, you even helped connect me to some of these biotech companies, and I talked to a bunch of them.
And I asked, you know... they're all very optimistic; most of the people building things are. And I would get so excited talking to them about the incredible things they're doing. We're going to cure a lot of diseases. Oh my God: cancer, Alzheimer's. And then augmentation, in ways that will become normal. And I don't think anyone will want to come back to this world of dumb medicine once we have smart, precision medicine.
And then I would ask, you know, does this same technology help the people who want to make bioweapons? And the answer was yes; every single one of them said yes. And so that's going to have to be something we deal with.
But then hopefully maybe the same technology can also help create really rapid vaccines, really rapid responses. So, you know, it's like cybercrime and cybersecurity; there's going to be a good guys versus bad guys thing going on here. I guess we don't know. This is a fascinating thing about warfare over the last few thousand years: between the offense and the defense, which is better? Like, cannons are such better offense that they changed the dynamic and you got rid of small cities. I wonder who wins, offense or defense, in bio; I don't know it well enough. And the problem is, bio is not like, you know, a cannon that can screw a city over. If the offense wins with bio, it's, like, a billion people, you know. I think it's scary, 'cause you can imagine an AI or some evil person creating something, maybe I shouldn't be talking about these, but it could hide in you and then get turned on all at once by something. There are really scary things they could do. Totally. And that's a theme. Honestly, I mean, you look at any technology right now and it's like, wow, the power for this to do good and revolutionize everything, solve all our problems. And then it's like, wow, the bad guys with this in their hands could do some really bad things.
It's just high stakes. This is why it's tied to the Fermi Paradox very closely, in scary ways. Right. So, I know we're on the optimist show, but the pessimistic view is, people would say something like: look at how this is going. It doesn't matter how good the good guys are; it just takes the bad guys in one of these fields getting an edge for a little bit, and they can wipe everyone out. And maybe that's why we don't see anyone out there: they all do this. And then again, optimists have a bunch of different arguments. I guess it makes you more biased towards Elon's view: let's have the backup planet, let's have maybe some people living out by the asteroid belt. They could be, like, weirdos, but maybe, just in case, you know, they can repopulate us or something like that. And beyond just Elon, when you remind yourself what's going on, and these crazy stakes, it's like,
let's be grownups, everyone. Let's be wise. How can we be wise together right now? Stop being childish. Okay, well, let's go from being wise to your book, because you spent, like, five or six years all of a sudden on this crazy side quest to figure out what had gone wrong with our civilization. The book is called What's Our Problem? So what inspired you to do this? I mean, exactly this conversation. I was writing about stuff like AI; I wrote about Neuralink, I wrote about cryonics, I wrote about all these incredible things. And I was thinking about,
you know, because people focus on the dystopia, but think about the utopia that we could have. I want to be there; I want to be in the good 2050. It's possible for us to solve, like, every problem in our society right now. Everything, including our own mortality. I mean, really crazy things: anything humans are scared of, we could solve.
And then the dystopia is really scary. And that ranges from, you know, full apocalypse, to we're in a permanent kind of dystopia with an AI dictator, to we're just set back to the Stone Age: the whole thing blows up and now we have no technology, and we're trying to learn how to do electricity again. Does it ever bother you that we just happen to be alive right at this time when all this is happening? Isn't that kind of suspicious? Well, on one hand, it might just be a coincidence. On the other hand, a lot of people are around right now. The population is swelling; there are going to be a lot of people here in a time like this, because the population is so much bigger than it used to be. So it's also not that big a coincidence. I think one-fifteenth of humans that have ever lived are currently alive.
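Tim's one-in-fifteen figure roughly checks out against standard demographic estimates. The ~117 billion "ever born" number below is an outside assumption (roughly the Population Reference Bureau's estimate), not a figure from the conversation:

```python
# Rough check of "one-fifteenth of humans ever lived are alive today".
ever_born = 117e9   # assumed estimate of all humans ever born
alive_now = 8e9     # current world population, roughly

ratio = ever_born / alive_now
print(f"About 1 in {ratio:.0f} humans who ever lived is alive today")  # 1 in 15
```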
So of course we happen to be alive. It's not like a one-in-a-million thing that we're here; it's, like, one in 15, a way you can think about it. That's a good way of putting it. Yeah. If I was going to have a show where aliens got to go live as humans on this fascinating planet, as a piece of art, though, this would probably be the time you'd have them be alive.
If you're observing Earth, this is when you're saying to everyone, hey, hey, come here, come here, it's getting really good. Right. Because this is when... are we going to pull it off? Basically, are we going to pull it off? Are we going to be one of the cosmic civilizations up there exploring the galaxy for the next hundred million years? Or are we going to just ruin it all right now? And so I'm having these thoughts, and then simultaneously, you know, I'm on social media, I'm reading the news, and I'm just seeing us descending into an extremely childish culture war, where you have grownups behaving like middle schoolers: cliques, and punishments for saying the wrong thing, and gotchas, and lying, just misleading. And using populations of less fortunate Americans as kind of pawns in your game against each other. Just status-obsessed, middle school stuff. And I said, what is going on? I don't believe that these people are all just inherently immature. I think that something is in the air that is bringing out the worst in a lot of people. And that's what What's Our Problem? is: I said, what is our problem? That's
what I wanted to dig into. And of course that's quite a big rabbit hole. It's a huge rabbit hole. So you created a bunch of frameworks that I thought were really interesting, and we're going to probably just show a couple of these directly. One of the frameworks I really liked is you had an X and a Y axis. And I think on the Y axis, the high end is a scientist, that you're operating as a scientist, but at the very low end, you're a zealot. And in between, from a zealot you become like an attorney trying to argue for yourself. And a little bit above that, you're like a sports fan, but maybe you could be convinced. Tell us a little bit about some of these frameworks. Yeah. The big idea is it's a vertical axis that I just think we need in our conversations. We have left, right, center, far right, far left. And we need that. That's important. And you can extend that to a square and you can have, you know, authoritarian, libertarian, and,
whatever, you know. You've seen the political compass. Yeah. That's still a what-you-think space. Yep. We need to also just ask ourselves, how did you get there? Did you get there because you're identifying with a certain tribe's beliefs, and that's what the tribe said to do, and you're just lockstep following, and you're only reading things that confirm it, and you're not at all actually concerned with truth, even though you think you are? Yeah.
Or did you get there because you learned a ton, and you started off with a humble "I don't know," and you slowly started to gather evidence, and you eventually arrived there way later, and you could happily defend it in a debate? You know, what's your certainty made of? Is it unearned? Are these strongly held, weakly supported beliefs, or strongly held, strongly supported beliefs? So there's that, and to me, that axis is everything. I don't care. If you're in what I call the low rungs, you're just in the tribal zone. No one's going to change your mind unless the tribe itself gives you permission to. And again, I might even... And that's probably how we evolved as a species, is we did work within these tribes in order to succeed, right? So I feel like all of us have a part
of our animal selves that does that. Even... I have a set of friends, I won't mention who, very prominent guys in the tech world, who are like scientists in so many ways, but I see them being zealot, tribal people about some of these things right now, because they're so convinced, and they just live in that area sometimes.
Yeah, the way I kind of framed it is we each have kind of two minds in our head. There's this really kind of more primitive brain, which is basically a piece of software that helps us survive in a 50,000 BC tribe, which is, yeah, rigid conformity and doing what the authority says. I love that visual you made, if human history were a thousand-page book. It's like we basically all evolved from these tribes. Yeah. The first 990 pages of the thousand-page human story are...
basically just, you know, trying to be a good member of an ant hill, an ant colony, a tribe, and survive against the other ant colonies. So that's in our heads now. Yeah. But then we also have this other crazy ability, this thing that separates us from other animals, this amazing ability to actually override that, and to free yourself and become an independent thinker, and actually, you know,
and be in touch with reality and search for truth. And I think, again, when I talk about these rungs in the book, I'm not saying those people are on the low rungs. I'm saying when all of us slip down there, we act like this. I think that's important: all of us have that. I mean, I think we have to be honest with ourselves a lot of the time. Especially for me, when I'm more tired or I'm not in an uplifting mood, when I read or see something, there's a part of my brain that's just looking for: is this my tribe or not?
Which is just messed up, but it just does that, you know? Of course, we all do it. And especially when emotions get heightened, or something really close to your identity gets brought up. The identity is part of these views sometimes. Well, I think for the dumb brain, it identifies with views. Views are you. When they get attacked, it feels like I'm being attacked. The people who disagree with those views, you hate them. You dehumanize them. You don't care if they die, basically. That's that brain.
The other brain doesn't think like that at all. The other brain says, yeah, bring it on. I'm sure I'm wrong about some stuff. If you disagree with me, let's figure out how you got there. Let's talk about it. It's just a totally different part. It's, again, a grown-up part of your brain. And I think that I first wanted people to just have this axis so they could start assessing themselves. Okay, remember, we're all on this ladder. Try to be high up on it.
When I say scientists, I don't mean that scientists all think like this. Of course, there's a lot of zealot scientists out there. I'm talking about the scientific method, right? That humble scientific method. And then...
Secondly, when you're reading an article or absorbing information, when you're trying to figure out who to trust, assess them on this. Don't assess them on whether they're on the right with me or on the left with me. Assess them on: has this person shown me that they're up on the high rungs in the way they think and form beliefs? Otherwise, tune them out. Someone who's just repeating their tribe's thing is not real information. It's not interesting. It doesn't matter what they're saying. That's most stuff online these days, just the tribal stuff. And it's really hard to get away from.
Exactly. And you just know what people are going to say before the discussion starts. I can tell you, they're going to say this, and this, and in these specific words, because those are the words going around right now, the words you use to refute this argument. It's actually really tough for me because...
the feedback for the tribal stuff is really strong and positive on social media. So I have an X account, and I try to put interesting things up, I try to put thoughtful things up, but you're kind of trained by the algorithm for what is going to get thousands of likes. And even the fact that you're thinking about that puts you ahead of it. Most people don't even know; they just think, I'm awesome, you know? I do think that's... And the algorithms specifically incentivize what I would call low-rung thinking and low-rung rhetoric. You know, really tribal, and also just really confident. Yeah. Because in the low rungs, the more conviction you say something with, the smarter you seem. On the high rungs, saying "I don't know" makes you seem smart, and in the low rungs, it makes you seem wishy-washy. And, like,
you know, morally weak. You seem like a RINO on the right. Exactly. Exactly. Or even among the RINOs, which is its own tribe, you know, then expressing, "actually, I think the conservatives are actually right about this," then you seem bad. So it's funny. Every single group is going to have an element. Some groups are going to be more tribal than others. You're going to find more low-rung thinking on the far left and far right. Not all of them. So stepping back,
you have a bunch of interesting graphs about things that went wrong in our culture. One of them that stuck out to me was people's perception of race relations in our country, which really went down a lot in the last 15 years. And it seems like some of those perceptions got caught up in some of this low-rung thinking on different extremes. Is that getting better now? Is it getting fixed? Where are we? Yeah. I mean, I focused a lot of the book on the woke movement, because it was very current, and it was just such a good example of what's going on. Where you have the history of
you know, civil rights and women's suffrage before that. And, you know, gay marriage more recently. That's like a proud part of America. That's what makes America great is that a group that isn't, you know, getting fair treatment
can fight for it and get it. It's revisable. It's editable. Yeah. And the history of America is just full of that. New immigrants come, they're not treated well, and eventually they rise up and they become powerful and they get their rights. It's an amazing, it's really cool. I'm Irish on one side and Jewish on the other side, and both of those groups got the shit beat out of them in America for a long time, in really nasty ways. And it's very funny, because now, of course, it's seen as a sign of privilege, because we've shifted, which is good, which I guess I appreciate. But it's funny. I guess don't ask my grandparents about that, you know?
Those are what I would call high-rung movements. I like to think of America, lowercase-l liberalism, as a house that we're all living in, and the support beams are stuff like free speech and free assembly and free press and the justice system. These are the support beams of the house.
And I think the healthy version of America is when you have people who totally disagree with each other within the liberal house, who all agree that the liberal house is good, the Constitution is good, but we're not achieving it right for this reason. And others say, no, you're wrong, we're not achieving it right for this other reason, and this is actually what the founders intended, and blah, blah, blah. And they're arguing. That's healthy. Yeah. That's what they should do. And this group's getting left out, and, no, well, now this group's getting left out. Okay, great. Perfect. That's what it should be. It's never going to have everyone happy. That's the best-case scenario. And again, left, right, and center all inside the house.
And now I think what people have to realize is that some groups are literally not in the house. They're outside with a wrecking ball. The postmodernists. Yeah, the Marxists. You could say some elements of the far right today. They're actually out there with a wrecking ball. And what they're doing, they're not saying the liberal house is good. First of all, you know, the Marxist left, they would agree with me here. They say that liberalism is itself a bad, exploitative system that entrenches the power of the powerful, whatever. So their goal is, the house should come down completely. They want revolution, liberation from the house. They purposely try to break things in our society and in our cities in order to create the revolution. People don't realize this about the Marxists in our cities. They're literally trying to break things. Yes, because that causes revolution. So they would probably admit this. I mean, again, this isn't some secret belief of theirs. Marxism literally is the belief that liberalism is bad. They think that it is inherently oppressive. And so, of course, not only are they trying to break it, but they're doing it through means that are not liberal. That's why they will try to censor people, because to them, free speech is not important. If you think the whole house is bad, who cares about the support beams?
And it seems like they got in charge of a lot of things. I think one thing that was crazy, as an example, was that PayPal threatened to take $2,500 out of your account if you were caught inciting hatred or misinformation, which they were defining however they wanted. This actually red-pilled the former PayPal CEO, David Marcus, who ended up speaking out, on the right. Colin Wright. He's on Twitter, and he said something about...
He writes about biological sex and the importance of the biological sex binary for, you know, sports and stuff. And PayPal, I think they confiscated $2,500 of his money for writing about that. I mean, I'd have to look into the details again, but...
I mean, this is the kind of thing where, I don't care what he said, that does not belong in the liberal house. Yeah. Right. It just doesn't make any sense. There's a lot of things where I'm like, this is what I was trying to say in the book to people, and this is why you can't just have left, right, center. Because if you say, I'm on the left, left is good, blue good, red bad,
then you see this woke movement come along, and all you can say is, they're on my team, and the people who don't like them must be the bad guys. If you only have a horizontal axis, that's the only thing you can see: the people fighting this must be bad, they must be my enemies. If you can expand this a little and add this vertical axis, and just think about this liberal house, you can start to say, actually, the lowercase-l liberal conservatives are more on my side than the people on the left with the wrecking ball outside the house.
And what I was trying to say to people in the house is, it should not matter what they say. Whether they're coming at it saying we're anti-racist, or they're coming at it saying we're white supremacists, it shouldn't matter what the content of what they're saying is, because they literally have a wrecking ball outside the house. So you need to stop saying, well, they're on my team and I need to defend them because of what they're saying. It's irrelevant. They're outside the house. That's the first question: are they in the house? Once you deal with them, now you can go back to the fighting with the conservatives inside the house, because that's a secondary priority. The house has to stay up first, you know. And so it's such a perversion of liberal social justice. Like, Martin Luther King said, his whole thing was, this house is good, and the Constitution was good. The founders did a great thing, but Blacks are getting a raw deal. The house is not working well for them. It's broken in this area. Let's fix it. Let's make the house even better. And this is the opposite of
it. And I think what's scary also, to use your framing, is that the goal of the people who are fighting on the left would be to try to paint the far right as outside of the house too, which some of them are. But, you know, I've actually seen a lot of suggestions that our adversaries are purposely trying to sponsor these far-right accounts, to try to make people on the moderate left even more afraid of the right than they would be, because it's good for them for people to think that there are these racist, crazy, wacky people there, because then that causes them to fight more and to help their own crazies, as opposed to letting us work together. It's an interesting challenge. Totally. I mean, if you're an adversary of the U.S., you love the wrecking balls. Yeah. You want more wrecking balls on both sides, and what you want is the people inside the house in such a tribal craze that instead of banding together against the wrecking balls, they start defending their own wrecking ball, and associating anyone who disagrees with them with the other wrecking ball, and not even seeing their side as a wrecking ball. And the whole thing, that's how the country can just fall apart. So that was kind of my plea with this. I wasn't writing it to people on the left or people on the right. I was just writing it to people, to be like, you need to zoom out and think about this bigger picture, and then get back to fighting each other. Because right now, this is bad. And it's interesting, not to talk too much about current things, but it's fascinating right now with the new administration, because we do have...
people who are being very, very bold to clean out their version of the left wrecking ball, but then of course they're being painted as the right wrecking ball. So it's just going to be... And of course, you have to know, we're actually in the house, and we are doing this with principles, we're just trying to stop this crazy thing. And that's probably a lot of the debate, right? And they need to articulate that, you know. Because it's just so easy... I mean, if you go this week onto X and scroll around for a while,
which I do as, you know, basically an exploratory activity, and then I'll go on Bluesky and I'll scroll. And it is two completely opposite realities. I mean, you scroll down one and you think that incredibly evil people are doing a once-in-a-century... this is like the worst part of the Civil War, this is a coup, an existential crisis. And you go to the other one and you say, oh my God, the country's restored, the good people are back in charge and dismantling all of this evil corruption, thank God. And it's like,
literally two different realities. You know, you can say what you want about the '50s or '60s, but people weren't living in two opposite realities back then. Is this most people, or is this just a few extreme fringe people online living in these different realities? Like, what are we facing? I think, yes, when you're in the online world, it seems more like this is the whole world. But what you see more and more is that the mainstream media, which does now reach a lot of the real world,
takes their cues from online social media. Yeah. So the things that the really extreme people are saying on something like Bluesky, that's going to end up on MSNBC, and probably in a New York Times op-ed the next week. So it is real, and it's a problem. I mean, I think... Yeah. So you talked a little bit also in What's Our Problem?, and people should go read it, there are just so many cool frameworks we're not going to have time to get to. You talked about how to conquer a college, how to conquer a society.
Obviously a lot of our institutions were consumed by these movements and by these tribal things. How did that happen? And are we fixing it? Is there a chance we're going to fix it? Like, where are we with this? Yeah. So...
I was trying to ask, how did this happen? Because universities used to be the picture of liberalism. They were the actual scientist-thinkers. Yeah. And they were professional research institutions, and they had Veritas on a plaque. You walk into Harvard, it says Veritas. Ironic now. It's ironic, right? But that was, you know, when Thomas Jefferson founded his university, this was a new concept: truth matters here, not divinity, not any one belief. That's why you have tenure, so that professors are protected from the political fads. I mean, these things were built by
very, very liberal people, and they have of course fallen into the hands of the opposite, extremely illiberal people, people who just don't believe in liberalism. And so if you actually look at the stories, which is what I was doing, showing what actually happened here, when was the moment, in one college or another, when the power balance shifted? It's that, you know, if you're
the faculty and the admin at a place like Harvard, and the donors are part of it, you walk in, you see that Veritas stone plaque. It's just like if you're a priest and you walk into a church and see the cross. It's a reminder of the telos of this place. When you walk into Harvard, you know your job is to uphold
that plaque. And what happens is you see these challenges to it, where someone would say, we need to fire this professor because they disagreed with the common fad today, that thing that we're all offended by this week. They said it, let's get them out. And that can even be a tenured professor, which is, again, totally violating the purpose of tenure. And then the administrators and the rest of the faculty have a moment of truth.
Because they can either say, nope, Veritas, we stand up for the principles here. This person is allowed to say different things and disagree with you, even if they're offensive. That's part of finding truth, is we have a wide range of views. Or they say, yep, let's fire them, because they're scared and they don't want to be the next target. And you can just see this moment of truth, this key moment. And one after the other, whether it's the New York Times or Harvard or the American Medical Association, they just all...
They all gave in. They all gave in. And they all do it in a very deceptive way, right? So someone I really admire is Roland Fryer, who happens to be African-American, from a poor background, raised by his grandma. A great economist, tenured at Harvard, a genius, not really on the right or the left. But some of the data from some of his studies about police violence came out to support something that would be, I guess you'd call it, more on the right's narrative. And parts of it also would support the left's narrative. And he published it. And people took the right narrative and they were really offended. And Claudine Gay, who at the time wasn't even president yet, but she was like a big internal activist at Harvard, she went after him. She found a former student from like ten years ago and got him to say he'd done some inappropriate things, and they used that to knock him out for a couple of years and punish him for going against the narrative. And she later became president, famously, and they eventually got rid of her. Kind of karma. She actually deserved it. But it's fascinating that it's not that she said, oh, you're not allowed to say these things about this data. They find other ways to get you, you know? No one who's breaking that support beam ever says, we're breaking support beams. They pretend that they're just trying to uphold
the liberal house here and the rules. So it's always very deceptive, but it's only deceptive if you're not paying very much attention. When you look, you can see this, and you see the pattern. You know, the New York Times, during the George Floyd riots: Tom Cotton writes "Send In the Troops," which 62% of Americans agreed with in that moment, so it wasn't that controversial a view. Amongst New York Times readers, of course, it was a smaller share. But the point is, the New York Times, if they're
the paper of record, right? "All the news that's fit to print." They're supposed to be the paper that says, when 62% of Americans believe this thing, and this senator is going to come say it, that should be here. We want to hear it. Especially if you disagree, we need to hear what the opposition thinks so we can argue against it. So what happened is, it was such an uproar that the editor of the op-ed section, James Bennet, ended up having to leave. Right.
That's a moment of truth where the New York Times basically says, you know, "the paper of record," they cross that out and they say, we are now an arm of this political movement. That's who we are. 100%. And it happened again and again and again. And culturally it happened too. I want to give one other example, which I think...
touches both of us. So we're trying to build a great new university that sets the example, obviously, with UATX. And we actually sponsored you to speak at South by Southwest for the university, to draw attention, and it's something we're really proud of, our association with someone who pursues truth and talks about these things. And I remember, because the people running South by Southwest were so extremely on the radical side,
I don't know if it was against you or, more probably, against the university. They sidelined you, took you out of the booklet, didn't show it off. What happened with this? Yeah. I mean, we looked at the things that were being advertised, and it was, you know, ten woke talks in a row. That's what they were showing. Every single one fit with this one worldview. And again, that worldview can be represented there, because it's part of what's going on, but it was the only worldview that they said, we're going to push and highlight.
And, you know, again, I'm not coming in here as a hardcore right-winger. I'm coming in here saying, hey, basic liberal principles are good. Yeah. Which, by the way, if you go back to the nineties, everyone who liked Bill Clinton, and the Clintons themselves, would totally agree with that. And that's also what our university stands for, by the way. That's what it was founded on. Obviously. Yeah. Right. But again, this is what I'm talking about. Childish. I'm not saying anything even crazy. This is just basic; grownups should just be saying, yes, of course. Of course.
And that is enough to have the South by people say, this guy's bad news. You know, this guy's bad news because again, if you're in the house,
you're not going to want to push my talk away. You're going to say, sure, it's part of what's in the house, let's talk about it. But some of these people actually in charge are not in the house. It's when the people there are beholden to an ideology that specifically says... again, they're not thinking these exact thoughts, but anything arguing in favor of the house is bad, because it's sneakily, you know... And of course, if you are in this worldview, you see anything that disagrees with you as right-wing. So, you know, I got called... "ever since Tim went to the far right,"
and even just, you know, "Tim's whole thing is anti-woke." And I want to say, my whole thing is pro-liberal. That's it. I'm just being pro-liberal right now. It's not far right. And it's not that anti-woke is my cause. I'm being pro-liberal, and this movement is seriously threatening that. - So actually, a friend's uncle owns South by Southwest. He claims to be in the house. I think I'm going to get to know him better. If you were that guy and you owned it, and you saw these crazy people doing this, what would you do as the owner? - You have to make a decision.
You either say, am I going to have the courage to say the founding mission of South by was X, Y, and Z, and we're still going to do that? Or am I going to let these kind of underlings grab the podium and say these are the new rules, and I'm going to sit on the side and just watch them redefine it? You have a choice.
You know, sometimes you might say, look, I don't want to get attacked, I'm just going to sit back, I don't want to lose my income stream. Okay, I get it. We all have lots of things. But you have to acknowledge what you're doing, which is you are picking some personal ease right now over, you know, the thing that you're supposed to uphold. You're abandoning it. It's a lack of courage. I've noticed most owners of a lot of these properties in our society just lack courage. I'm hoping 2025 is a year where that courage comes back. So I'm optimistic it's coming back. It's already been coming back for a while. I mean, the way it works is, when you have the mob sitting there with their pitchforks, and again, they're a smaller group, but everyone is sitting there quiet. I don't want to be the one to speak out, because I don't want to get... So everyone's quiet together. And then they go and they pitchfork someone who's courageous, and everyone just watches. No one stands up for them, because they don't want to be next.
They can hold that for a while, but their power is brittle. And when certain people start saying something, that gives permission for other people to say it. Before you know it, more people are saying it, and they can't pitchfork anyone. And now people are starting to say pitchforking is bad. And it takes less and less courage every year, so you need to be less and less brave a person to start speaking out. And you are seeing more of it, I think, every year. Now, it could swing back.
Yeah, you know, I think it was swinging back leading up to 2020 and then George Floyd happened and it swung way in the other direction. So it won't come back in the exact same form because the society is wise to that now. It'll come back in a slightly different form and we'll all be, you know, the same people will be fooled by it again.
And I'll be on the front lines getting canceled again, being bold against it, and hopefully leading others against it with you. So thank you. Yeah. Well, what's nice for you, and me too in a lot of ways, is that we don't work at a major institution. It's hard for them to really pitchfork you. You have your own little tribe. There's a few institutions that wouldn't work with me for a while, because there's people in them that got me, but they can't really cancel me. Right, exactly. And that's one of the nice things about today. You know, if you're in Maoist China and this is going on, I mean, A, they're using physical violence, they're executing people. At least they're not doing that here. And B, there was no internet. It is hard for a mob to maintain strict information control with the internet today.
I think, I don't know. It's both. It's easier; it's a mechanism for the mob to pitchfork people. You know, cancel culture started on Twitter. But it also breaks it down faster. Yes. So we're optimistic things are going in the right direction. You were early on AI. You were really early on Mars. By the way, I heard you made a $10,000 bet with someone that we get there by 2030. You feeling good about that? No.
I shouldn't have. I made the age-old mistake, which is I listened to Elon timelines too much. I mean, this was back in 2016, and he said, you know, by 2024 we're going to have the first people on Mars, maybe 2026, because it's every two years. You know, Mars has to be next to Earth, and that happens every 26 months or so. We still have a chance.
I don't see it. I would be thrilled. I would guess it happens in the 2030s. I mean, every two years we can go there, right? So the next one is 2028? Or 2026 is the next one? Yeah, sorry, 2026, then 2028. Yeah. So it'd be late 2030, I guess, because it's a little over two years. But that's a chance to get there. I mean, we're watching these amazing rockets. You took your daughter, you wrote about it, to watch one. It's one of the most inspiring things. Oh, it's incredible. And I think that part of the slowdown is just they...
they're waiting for, you know, approval to do launches. Well, now that President Trump's in, like, we'll see. I mean, I know that first they're going to send a cargo mission. So at least they have to do that first. Oh, you don't win the bet if there's only cargo. It has to be people. It has to be a human setting foot. That's tougher. So I think cargo will get there. They might send humanoid robots first too. My guess is it's cargo and humanoid robots. That's what I'm saying. That's why I feel bad about it. So you might be like a couple years off. It's going to be close. If I had to guess now, I'd say, you know, 2035, something around there. And that's still, by the way, like,
one of the great leaps for all life, not just humans. One of the great leaps for all of Earth life is going to happen in the 2030s. Like, what? This is a crazy time to be alive. I love it. Other than AI and Mars, you called these things early. Is there something else we're not paying attention to that's exciting right now? I mean, I think you're probably paying attention. I think you, like, led the A round of the companies I'm about to mention, like Boom Supersonic. I mean, we were talking about S-curves.
Airplanes skipped an S-curve. So we basically haven't progressed in like 40 years with airplanes, really. And with companies like Boom, I'm like, okay, wait, actually we might. You know, what Blake at Boom
said to me is his goal, his vision, is anywhere in the world in four hours or less. The only thing that'll beat that is Elon's rockets, if we fly on those. Yeah. So I think transportation, you know, you're going to start going faster. Joby Aviation, I mean, you were early to that too. Joby, exactly. Tunnels. 3D, you know, people are going to be going underground, above ground. The tunnels can start getting really fast. I'm very bullish on tunnels in the next five years. I wouldn't be surprised if down the road, when you live in Austin, you live in the tri-city area.
Where, if there's a new restaurant in Houston or Dallas, you go, because it's 30 minutes away. We've got to get this done for Texas later this decade, 100%. Suddenly it's a tri-city area. And think about it: now, if you start a restaurant in any of those cities, you have all three cities as a market. I wrote a piece about this in Arena Magazine, Max's new magazine. I had a piece on how to spend a trillion dollars better, and my favorite example is all the tunneling in the tri-city area. Yeah. So I'm really excited about transportation.
And that also includes transporting things. So drones, you know, are going to be everywhere. Humanoid robots seem like they're going to be everywhere. First in industrial warehouses, then commercial, you know, pouring your coffee and folding clothes at the clothing store. Maybe we should make them softer. They're a little scary looking right now. Yeah, yeah. No, I would say robot nannies and housekeepers will be in the 2050s, probably. - They should be, like, pillowy in case you run into them. - Yeah, no, we're not there yet, but it's coming. These things, you know, flying cars, these things that we said, oh, that was just a stupid sci-fi vision from the 60s. It is happening, just later than people thought. - What about the health stuff? There's lots of good stuff there. - Biotech and health stuff is big, and it includes stuff with embryology and fertility, crazy things happening, and longevity. I just tried my first cultivated meat.
- Did you like it? - Delicious. It was delicious. If you zoomed in on the chicken I had, it's made of chicken cells, just like the real chicken. It is real chicken, it's just the chicken cells grew in a cultivator instead of on a chicken's body.
Same chicken. We've gotten much better at barbecue since I moved to Texas. Our chefs are copying and learning from all the local areas. Yeah. We started American Optimist, Tim, to push back on the cynicism in our society. What's the best case for an optimistic future, with all these problems you're working on? Are we going to get the politics and the technology right? I mean, it seems like there's always volatility on both sides. What's the case for us navigating it? I think that this conversation we're having now, a lot of people are going to be having conversations like it, where they are reminded how high the stakes are.
And that we could really be killing ourselves, or we could be heading to this utopia.
And that, I think, can do two things. The fear can either drive people to be more tribal and more crazy and more childish, or it can have people rise to the challenge. The fear can make people say, we're not going to be doing culture war stuff right now. We need to be grownups together and figure this out. I have hope, because we're a survival species deep down. We want to survive. And I think once you start having an existential crisis,
people need to feel the fear and also the excitement, and when they feel those emotions, I think that's going to make everyone say, let's put our grownup hats on. And so, you know, maybe it's because I'm an optimistic person, but my gut feels like we'll figure it out, even if my brain can't tell you exactly why. Well, this is a good reminder to me to put my grownup hat on and try to embrace my higher nature and be a leader who inspires others to do the same. I think if we do that, we are going to make it. So thanks for being here, Tim. Thank you. Thank you.