
How Bubbles Power Breakthroughs

2025/2/6

What's Your Problem?

Topics
Jacob Goldstein: I think technological breakthroughs tend to happen at particular moments and share certain common features. By studying those commonalities, we can abstract out the factors that drive technological progress. I think breakthroughs often require a social bubble: a group of people who share some wild belief and have a large amount of money to act on it. Byrne Hobart: I think social bubbles matter because they create a feedback loop. When people start taking an idea seriously, others are influenced as well, and a shared belief forms. That shared belief attracts more talent and resources, which ultimately drives the breakthrough. Beyond that, massive funding is essential: it gives people the resources to try many possibilities, even if some of those attempts will ultimately fail.


Transcript


Pushkin.

Hire professionals like a professional and post your job for free at linkedin.com slash Gladwell. That's linkedin.com slash Gladwell to post your job for free. Terms and conditions apply.

AI is rewriting the business playbook with productivity boosts and faster decision-making coming to every industry. If you're not thinking about AI, you can bet your competition is. This is not where you want to drop the ball, but AI requires a lot of compute power. And with most cloud platforms, the cost for your AI workloads can spiral. That is, unless you're running on OCI, Oracle Cloud Infrastructure. This was the cloud built for AI.

A blazing fast enterprise-grade platform for your infrastructure, database, apps, and all your AI workloads. OCI costs 50% less than other major hyperscalers for compute, 70% less for storage, and 80% less for networking. Thousands of businesses have already scored with OCI.

including Vodafone, Thomson Reuters, and Suno AI. Now the ball's in your court. Right now, Oracle can cut your current cloud bill in half if you move to OCI. Minimum financial commitment and other terms apply. Offer ends March 31st. See if your company qualifies for this special offer at oracle.com slash strategic. That's oracle.com slash strategic.

Hey everybody, it's your favorite play cousin Junior from the Steve Harvey Morning Show. You know, the Toyota Tundra and Tacoma are designed to outlast and outlive, backed by Toyota's legendary reputation for reliability. So get in a Tundra with available i-Force Max Hybrid engine, delivering exceptional torque and towing capacity. Or check out a Tacoma.

with available off-road features like crawl control. It can take you beyond the trails. Toyota trucks are built to last year after year, mile after mile. So don't wait. Get yours today. Visit buyatoyota.com. For deals and more, Toyota, let's go places.

There are these moments when people make huge technical advances and it happens all of a sudden, or at least it feels like it happens all of a sudden.

You know, this is happening right now, most obviously with AI, with artificial intelligence. It happened not too long ago with rockets when SpaceX dramatically lowered the cost of getting to space. Maybe it's happening now with crypto. I'd say it's probably too soon to say on that one.

In any case, you can look at technological breakthroughs in different fields and at different times, and you can ask, what can we learn from these? You can ask, can we abstract certain, you know, certain qualities, certain tendencies that seem to drive toward these births of technological progress? There's a recent book called Boom that asked this question, and it comes up with an interesting answer.

According to the book, one thing that's really helpful if you want to make a wild technological leap is a bubble. I'm Jacob Goldstein, and this is What's Your Problem? My guest today is Byrne Hobart. He's the author of a finance newsletter called The Diff, and he's also the co-author of a book called Boom: Bubbles and the End of Stagnation. When Byrne talks about bubbles, he isn't just talking about financial bubbles where investors drive prices through the roof.

When he says bubble, largely he means social bubbles, filter bubbles, little groups of people who share some wild belief. He really gets at what he means in this one sentence where he and his co-author write, quote, transformative progress arises from small groups with a unified vision, vast funding, and surprisingly poor accountability. Basically living the dream.

Later in the conversation, Byrne and I discussed the modern space industry and cryptocurrency and AI. But to start, we talked about two case studies from the U.S. in the 20th century. Byrne writes about them in the book, and he argues that these two moments hold broader lessons for how technological progress works. The two case studies we talk about are the Manhattan Project and the Apollo missions. So let's start with the Manhattan Project.

And maybe one place to start it is with this famous 1939 letter from Albert Einstein and other scientists to FDR, the president, about the possibility of the Nazis building an atomic bomb.

Right. So that letter, it feels like good material for maybe not a complete musical comedy, but at least an act of a musical comedy. It's kind of a springtime for Hitler in Germany-ish. There's this whole thing where you have this brilliant physicist, but he is just kind of the stereotypical professor, crazy hair, very absent-minded, always talking about these things where no one...

No normal person can really understand what he's talking about. And suddenly, instead of talking about space-time and the relationship between energy and matter, he's saying someone could build a really, really big bomb. And that person will probably be a German. And that has some very bad implications for the rest of the world. So now here we are. The president decides, okay, we need to build a bomb. We need to spend a wild amount of money on it. And this is a thing that you describe as a bubble, which...

It's interesting, right, because it's not a bubble in the sense of market prices. It's the federal government and the military. But it has these other bubble-like characteristics in your telling, right? Maybe other meanings of bubble, the way we talk about a social bubble or a filter bubble. Tell me about that. Why is the Manhattan Project a kind of bubble?

Why is it a bubble? Because there's that feedback loop. Because people take the idea of actually building the bomb more seriously as other people take it more seriously. And the more that you have people like Oppenheimer actually dedicating time to the project, the more other people think the project will actually happen, that this is actually worth doing. So you have this group of people who

start taking the idea of building a bomb more seriously. They treat it as a thing that will actually happen rather than a thing that is hypothetically possible if this particular equation is right, if these measurements are right, etc. And then they start actually designing them. The Manhattan Project seems like this sort of

point that gets a bunch of really smart people to coalesce in one place on one project at one time, right? It sort of solves the coordination problem, the way, whatever you might say about it, AI today is doing that: just brilliant people suddenly are all in one place working on the same thing in a way that they absolutely would not otherwise be.

That is true. And this was both within the U.S. academic community and then within the global academic community, because you had a lot of people who were in Central Europe or Eastern Europe who realized that that is just not a great place for them to be and

tried to get to the UK, US, other allied countries as quickly as possible. And so there was just this massive intellectual dividend of a lot of the most brilliant people in Germany and in Eastern Europe and in Hungary, etc. They were all fleeing and all ended up in the same country. So yeah, you have just this serendipity machine where it was just, if you were a physicist, it was an incredible place for just

overhearing really novel ideas and putting your own ideas to the test, because you had all the smartest people in the world, pretty much, in this one little town in New Mexico. Right. So the Los Alamos piece is,

It's the famous part, you know, it's the part one has heard of with respect to the Manhattan Project. There's a less famous part that's really interesting and that also seems to hold some broader lessons, right? And that is basically the manufacturing part: how do we enrich enough uranium to build a bomb, if the physicists figure out how to design one? Talk about that piece and the lessons there.

Yes. So that one, you're right, it is often under-emphasized in the history. It was more of an engineering project than a research project, but there was a lot of research involved. The purpose was get enough enriched uranium, so the isotope that is actually prone to these chain reactions, get it isolated and then be

be able to incorporate that into a bomb. They were also working on other fissile materials because there were multiple plausible bomb designs. Some used different triggering mechanisms, some used different materials, and there were also multiple plausible ways to enrich enough of the fissile material to actually build a bomb. And

So one version of the story is you just go down the list and you pick the one that you think is the most cost effective, most likely. And so we choose one way to get just U-235 and we have one way to build a bomb. U-235 is the enriched uranium bomb.

And that, by the way, is the way normal businesses do things in normal times. You're like, well, we got to do this really expensive thing. We got to build a factory and we don't even know if it's going to work. Let's choose the version that's most likely to work. Like that is the kind of standard move, right? Yeah.

Right. And then the problem, though, is that if you try that and you just got unlucky, you picked the wrong bomb design with the right fissile material, or the right material with the wrong bomb design, you've done a lot of work which has zero payoff. And you've lost time, right? Like, crucially, there is a huge sense of urgency present at this moment that is driving the whole thing, really.

Right. We could also do more than one of them in parallel. And that is what we did. And on the manufacturing side, that was actually just murderously expensive. If you are building a factory and you build the wrong kind of factory, then you've wasted a lot of money and effort and time.

So they just did more than one. They did several different processes for enriching uranium, and for plutonium, all at the same time. Right. And they knew they weren't going to use all of them. They just didn't know which one was going to work. So it was like, well, let's try all of them at the same time, and hopefully one of them will work. Yes. Like that is super bubbly, right? That is wild and expensive. That is just throwing wild amounts of money at something in a great amount of haste. Yes. Yeah. Yeah.

So if you believe that there is this pretty linear payoff, then every additional investment you make doesn't qualitatively change things. It just means you're doing a little bit more of it. But if you believe there's some kind of nonlinear payoff, where either this facility basically doesn't work at all or it works really, really well, then when you diversify a little bit, you actually get a better risk-adjusted return, even though you're objectively taking more risk. Interesting. Right. So in this instance, it's...

If the Nazis have the bomb before we do, it's the end of the world as we know it. And so we better take a lot of risk, and that's actually rational. It reminds me a little bit of...

Of aspects of Operation Warp Speed. I remember talking to Susan Athey, the Stanford economist, early in the pandemic, who was making the case to do exactly this with vaccine manufacturing in, like, early 2020. We didn't know if any vaccine was going to work. And it takes a long time to build a factory to make a vaccine, basically, or to tailor a factory. And she was like, just make a bunch of factories to make vaccines, because if one of them works, we want to be able to start working on it that day. Like, that seems quite similar to this. And it worked.
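To make the nonlinear-payoff logic above concrete, here is a minimal back-of-the-envelope sketch in Python. The probabilities, costs, and payoff are invented for illustration; nothing here comes from the book or the episode. The point is just that when each approach is all-or-nothing, running several in parallel buys a much better chance that at least one works, even though most of the spending is "wasted" by design.

```python
# Toy model: all-or-nothing bets run in parallel.
# All numbers are invented for illustration only.

def p_at_least_one_success(p_each: float, n: int) -> float:
    """Probability that at least one of n independent attempts works."""
    return 1 - (1 - p_each) ** n

p_each = 0.4              # assumed chance any single approach (factory, design) works
cost_each = 1.0           # assumed cost per attempt, in arbitrary units
value_of_success = 100.0  # assumed payoff if at least one attempt works (0 or 100: nonlinear)

for n in (1, 2, 3, 4):
    p = p_at_least_one_success(p_each, n)
    expected_net = p * value_of_success - n * cost_each
    print(f"{n} parallel attempts: P(success) = {p:.2f}, expected net payoff = {expected_net:.1f}")

# With these made-up numbers, going from 1 attempt to 3 raises the chance of success
# from 0.40 to ~0.78 while only tripling a comparatively small cost: the "wasted"
# parallel spending is exactly what buys the reliability.
```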

Yeah, I think that's absolutely true: the higher the stakes are, the more you want to be running everything that can plausibly help in parallel. And depending on the exact nature of what you're doing, there can be some spillover effects. It's possible that you build a factory for manufacturing vaccine A and vaccine A doesn't work out, but you can retrofit that factory and start doing vaccine B. And, you know, there are little ways to shuffle things around

a bit, but you often want to go into this basically telling yourself if we didn't waste money and we still got a good outcome, it's because we got very, very lucky and that we only know we're being serious if we did in fact waste a lot of money. And I think that kind of, that kind of inverting your view of risk is often a really good way to think about these big transformative changes. And this is actually another case where the financial metaphors do give useful information about just real world behaviors because

At hedge funds, this is actually something that risk teams will sometimes tell portfolio managers: you are making money on too high a percentage of your trades. This means that you are not making all the trades that you could. And if you took your hit rate from 55% down to 53%, we'd be able to allocate more capital to you, even though you'd be annoyed that you were losing money on more trades. Interesting. Because overall, you would likely have a more profitable outcome. Yeah.

By taking bigger risks and incurring a few more losses, but your wins would be bigger and make up for the losses. Yes. And this kind of thinking, you know, it's very easy if you're the one sitting behind the desk just talking about these relative trade-offs. It's a lot harder if you are the first person working with uranium in the factory and we don't quite know what the risks of that are. But it is just a generally true thing about trade-offs and risks that there is an optimal amount of risk to take. That optimal amount sometimes depends on what the downside risk of inaction is. And so sometimes, if you're too successful, you realize that you are actually messing something up. Yeah. You're not taking enough risk.

So we all know how the Manhattan Project ends. It worked.
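The hedge-fund hit-rate point above is just expected-value arithmetic. Here is a toy sketch, with invented hit rates, trade counts, and payoffs (nothing from the episode), showing how a slightly lower hit rate on a bigger allocation can still earn more overall.

```python
# Toy illustration: a lower hit rate can still mean more total profit
# if it lets you run more capital / more trades. All numbers are invented.

def expected_pnl(hit_rate: float, n_trades: int, win: float, loss: float) -> float:
    """Expected profit and loss across n_trades independent, equally sized trades."""
    return n_trades * (hit_rate * win - (1 - hit_rate) * loss)

# Cautious book: 55% hit rate, but the risk team only allocates enough for 100 trades.
cautious = expected_pnl(hit_rate=0.55, n_trades=100, win=1.0, loss=1.0)

# Slightly looser book: 53% hit rate, but you qualify for twice the allocation.
looser = expected_pnl(hit_rate=0.53, n_trades=200, win=1.0, loss=1.0)

print(f"55% hit rate on 100 trades: expected P&L = {cautious:+.0f}")  # +10 units
print(f"53% hit rate on 200 trades: expected P&L = {looser:+.0f}")    # +12 units
# More losing trades, more money overall, which is the risk team's point.
```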

I mean, it is a little bit of a weird one to start with. You know, the basic ideas like technological progress is good, risks are good. And we're talking about building the atomic bomb and dropping it on two cities. And it's, you know, it's morally a much easier question if you think it's the Nazis. Sorry, but the Nazis are absolutely the worst, and I definitely don't want them to have a bomb first. You know, there is the argument that more people would have died in a conventional invasion without the bomb. I don't know. I mean, how do you...

What do you make of it? Like, obviously the book is very pro technological progress. This show is basically pro technological progress, but like the bomb isn't a happy one to start on. Like, what do you make of it? Right. Yeah. It's one of those things where it does make me wish that we could run the same simulation, you know, a couple of million times and see what the net, you know, lives lost and lives saved are in different scenarios. But one thing, well,

the bomb, I guess, from like a purely utilitarian standpoint, I

suspect that there have been net lives saved because of less use of coal for electricity generation, more use of nuclear power, and that is directly downstream of the bomb. You can build these by design uncontrollable releases of atomic energy. You can also build more controllable ones. And I think getting the funding for that would be a lot harder. And presumably we got nuclear power much sooner than we otherwise would have because of the incredibly rapid progress of the Manhattan Project.

That's the case there. Fair. Though, you know, if you asked me to push the button on, would I drop an atomic bomb on a civilian population in exchange for fewer people dying of respiratory diseases over the next couple of decades, I would have to give it a lot of thought. I'm not going to push that button, but I'm never going to have a job where I have to decide, because I can't deal. Okay. Okay.

A thing you mentioned in the book, kind of in passing, that was really interesting and surprising to me, was that nuclear power today accounts for 18% of electric power generation in the U.S. 18%. Like, that is so much higher than I would have thought, given sort of how little you hear about existing nuclear power plants, right? Like, that is a lot.

Yeah, yeah, it is a surprisingly high number. But nuclear power is also one of the most annoying technologies to talk about, in the sense that it doesn't do anything really, really exciting other than provide essentially unlimited power with minimal risk. And some amount of...

Scary tail risk, right? Like, I mean, that is what is actually interesting to talk about, sort of unfortunately for the world, given that it has a lot of benefits. There is this tail risk. And once in a while, something goes horribly wrong, even though on the whole, it seems to be clearly less risky than, say, a coal-fired power plant.

Right. And the industry is aware of those risks, and nobody wants to be responsible for that kind of thing. And nobody wants to be testifying before Congress about ever having cut any corner whatsoever in the event that a disaster happens. So,

They do actually take that incredibly seriously. And so nuclear power does end up being, in practice, much safer than other power sources. And then you add in the externality that it doesn't really produce emissions, and uranium exists in some quantities just about everywhere. No climate change, no local air pollution: it has a lot going for it. Always on. Okay, let's go to the moon.

So you write also about the Apollo missions, U.S. going to the moon. It's the early 60s, right? Was it 61? Kennedy says, we're going to go to the moon by the end of the decade.

There's the Cold War context. Yes. Kennedy announces this goal. What's the response in the U.S. when Kennedy says this? Yeah, so a lot of the response, you know, at first people are somewhat hypothetically excited. As they start realizing how much it'll cost, they go from not especially excited to actually pretty deeply opposed. Right.

And, you know, this shows up in... someone coined the term Moondoggle. Yeah, Moondoggle. I loved Moondoggle. I learned that from the book. It was Norbert Wiener, like a famous technologist, not a crank, right? Somebody who knew what he was talking about was like, this is a crazy idea. It's a Moondoggle.

Right. And, you know, this really worked its way into popular culture. Like if you go on Spotify and listen to the Tom Lehrer song, Wernher von Braun, the recording that Spotify has, it opens with a monologue that is talking about how stupid the idea of the Apollo program is.

And this is, again, someone who is in academia, who's a very, very sharp guy, and who just feels like he completely sees through this political giveaway program to big defense contractors and knows that there's no point in doing this. You write that NASA's own analysis found a 90% chance of failing to reach the moon by the end of the decade. So it wasn't just outside people being critical; NASA itself didn't think it was going to work.

There's a phrase you use in the book to talk about these sort of bubble-like environments that are of interest to you, and I found it really interesting. I think we can talk about it in the context of Apollo. That phrase is definite optimism. Tell me about that phrase.

Yes. So definite optimism is the view that the future can and will be better in some very specific way: there is something we cannot do now, we will be able to do it in the future, and it will be good that we can do it. And why is it important? Like, it's a big deal in your telling in an interesting way. Why is it so important?

It's important because that is what allows you to actually marshal those resources, whether those are the people or the capital or the political pull, to put them all in some specific direction and say, we're going to build this thing, so we need to actually go step by step and figure out, okay, what specific things have to be done, what discoveries have to be made, what laws have to be passed in order for this to happen. Yeah.

So it's definite optimism in the sense that you're saying there is a specific thing we're going to build. It's the kind of thing that can keep you going when you encounter temporary setbacks. And that's where the optimism part comes in. Because if you have a less definitely optimistic view about that project, you might say the goal of the Apollo program is to figure out if we can put a person on the moon.

But I think what that leaves you open to is the temptation to give up at any point. Because at any point, you can have a botched launch or an accident, or you're designing some component and the math just doesn't pencil out: it's going to weigh too much to actually make it onto the craft. And you could say, okay, well, that's how we figured out that we're not actually doing this.

But if you do just have this kind of delusional view that, no, if there's a mistake, it's a mistake in my analysis, not in the ultimate plan here, and that it is physically possible, we just have to figure out all the details, then I think that does set up a different kind of motivation. Because at that point, you can view every mistake as just exhausting the set of possibilities and letting you narrow things down to what is the correct approach.

What you sort of needed was this sort of very localized definite optimism, where you could imagine a researcher or an engineer or someone throughout the project thinking to themselves, okay, this will probably not work overall. But the specific thing I'm working on, whether it is designing a spacesuit or designing this rocket or programming the guidance computer:

One, I can tell that my part is actually going to work, or at least I believe that I can make it work. And two, this is my only chance to work with these really cool toys. So if the money is going to be wasted at some point, let that money be wasted on me. And I think that kind of attitude of just...

you know that you have one shot to actually do something really interesting. You will not get a second chance. If everyone believes that, it does become a coordinating mechanism where now they're all working extremely hard. They all recognize that the success of what they are doing is very much up to them. And then that ends up contributing to this group's success. So it's like this, if I'm going to do this, I got to do it now. Everybody's doing it now. We got the money now. This is our one shot. We better get it right. We better do everything we can to make it work.

Yes. Fear of missing out. Yeah, FOMO, right. So FOMO, it's funny. People talk about that as like a dumb investment thesis, basically, right? It's like a meme stock idea. But you talk about it in these more...

Interesting context, basically, right? More meaningful, I would say. Yeah, yeah. So in the purely straightforward way, the idea is there are sometimes these very time-limited opportunities to do something. And if you're capable of doing that thing, this may be your only chance. And so missing out is actually something you should be afraid of.

So, you know, if you actually have a really clever idea for an AI company, this is actually a time where you can at least make the attempt. So, yeah, we do argue that missing out is something you should absolutely fear. So what happens with the Apollo project? Just in brief, like, talk about just how big it is and how risky it is. Like, it's striking, right?

Right. Yeah. So the expenses were running at, like, a low single-digit percentage of GDP for a while. So a couple percent of the value of everything everybody in the country does is going into the Apollo mission. Just this one plainly unnecessary thing that the government has decided to do.

Right. And this is one of the cases where there was a very powerful spillover effect because the Apollo guidance computer needed the most lightweight and least power consuming and most reliable components possible. And if you were building a computer conventionally at that time and you had a budget, you would probably build it out of vacuum tubes. And you knew that the vacuum tubes, they're bulky, they consume a lot of power, they throw off a lot of heat, they burn out all the time, but they are fairly cheap. But

In this case, there was an alternative technology. It was extremely expensive, but it was lightweight, didn't use a lot of power, and did not have moving parts. And that's the integrated circuit. So transistor-based computing. The chip.

What we know today as the chip. Yes, the chip. You write that in 1963, NASA bought 60% of the chips made in the United States. Just NASA, not the whole government, just NASA. 60%. They actually bought more chips than they needed because they recognized that the chip companies were run by...

you know, very, very nice electrical engineering nerds who just love designing tiny, tiny things, and that these people just don't know how to run a business. And so they were worried that Fairchild Semiconductor would just run out of cash at some point, and then NASA would have half of a computer

and no way to build the rest of it. So they actually overordered. They used integrated circuits for a few applications that actually were not so dependent on power consumption and weight and things. So that critique of the Apollo program was directionally correct: it was money being splashed out to defense contractors who were favored by the government. But in this case it was being done in a more strategic and thoughtful way, and it kind of kept the industry going. So...

You talk a fair bit in the book about the sort of religious and quasi-religious aspects of these little groups of people that come together in these bubble-like moments to do these big things. And that's really present in the Apollo section. Like, talk about the sort of religious ideas that

were associated with the Apollo mission, that the people working on the mission had. Yeah, I mean, you name it after a Greek god, and you're already starting a little bit religious. So there were people who worked on these missions who felt like this is part of mankind's destiny, to explore the stars, and that there's this whole universe that is a universe created by God. And it would be kind of weird, you know...

You can't second-guess the divine, but it's a little weird for God to create all of these astronomical bodies that just kind of look good from the ground and that you're not actually meant to go visit. You talk about somewhat similar things in other, less obviously spiritual dimensions of people coming together and having a kind of more-than-rational drive. You use this word thymos, from the Greek meaning spirit. Like, what's going on there more broadly? Why is that important more generally for technological progress?

Because, so thymos is part of this tripartite model of the soul, where you have your appetites and your reason, and then your thymos, your longing for glory and honor and this kind of transcendent achievement. And, well, logos, reasoning, only gets you so far. You can reason your way into some pretty interesting things, but at some point you do decide that the reasonable thing is probably to take it a little bit easy and not take certain risks. And thymos is still just this pursuit of something greater, you know, something beyond the ordinary, something really beyond the logos, right? Like, beyond what you could get to just by reasoning one step at a time. And, um,

I think that that is, yeah, that is just a deeply attractive proposition to many people. And it's also a scary one because at that point, you know, if you're doing things that are beyond what is the rational thing to do, then of course you have no rational explanation for what you did wrong if you mess up. And you are sort of betting on some historical contingencies. That's the definite optimism part, right? Betting on historical contingencies is another way of saying definite optimism, right? Um...

So back to the moon. So we get to the moon. In fact, against all odds, we make it. And there's this moment where it's like, you know, today, the moon, tomorrow, the solar system. But in fact, it was today, the moon, tomorrow, not even the moon. Right. Like what happened? Well, you know, you had asked about what these mega projects have in common with financial bubbles. And one of the things they have in common is sometimes there's a bust. Right.

And sometimes that bust is actually an overreaction in the opposite direction. And people take everything they believed in, say, 1969 about humanity's future in the stars, and they say, okay, this is exactly the opposite of reality.

where things will actually go and the exact opposite of what we should care about, that we have plenty of problems here on Earth. And why would we, you know, do we really want to turn Mars into just another planet that also has problems of racism and poverty and nuclear war and all that stuff? So maybe we should stay home and fix our own stuff. In public policy, you'd actually need for there to be some kind of

resurgence in belief in space. You need some kind of charismatic story, and perhaps to an extent we have that right now. Yes, maybe Elon's not the perfect front man for all of this, but he is certainly someone who demonstrates that space travel can be done, it can be improved, and that it's just objectively cool: that it is just hard to watch a SpaceX launch video and not feel something. Yes. So good. I want to talk more about space in a minute.

So it's interesting, these two stories that are kind of in the middle of your book, they're kind of the core of the book, right? These two interesting moments that are non-financial bubbles, when you have this incredible technological innovation in a short amount of time that seems unrealistically, you know, fast, an impressive outcome. And they're both pure government projects. They're both, you know, command-and-control economy. It is not the private sector. It is not capitalism. Right.

What do you make of that? I would say there's a very strong indirect link for a couple of reasons. One is just the practical kind of reason that personnel is policy, and that in the 1930s the U.S. government was hiring and the private sector mostly wasn't. And so basically all the ambitious people in the country tried to get government jobs.

And that is usually not the case. And there are certainly circumstances where that's a really bad sign. But in this case, it was great. It meant that there were a lot of New Deal projects that were staffed by the people who would have been rising up the ranks at RCA or General Electric or something a decade earlier. Now they're running New Deal projects instead. And they're, again, rising up the ranks really fast, having a very large real impact very early in their careers.

And those people had been working together for a while and they knew each other. There was a lot of just institutional knowledge about how to get big things done within the U.S. government. And a lot of that institutional knowledge could then be

So you have the New Deal and then the war effort. And then you have this post-war economy where there's still, you know, it takes a while for the government to fully relax its control. And then very soon we're into the Korean War. So, yeah, there was just a large increase in state capacity and just in the quality of people making decisions within the U.S. government in that period. We'll be back in a minute to talk about bubble-esque things happening right now, namely rockets, cryptocurrency and A.I.

Okay, business leaders, are you playing defense or are you on the offense? Are you just, excuse me, hey, I'm trying to talk business here.

As I was saying, are you here just to play or are you playing to win? If you're in it to win, meet your next MVP, NetSuite by Oracle. NetSuite is your full business management system in one suite. With NetSuite, you're running your accounting, your financials, HR, e-commerce, and more, all from your online dashboard. One source of truth means every department's working from the same numbers with no data delays. And with AI embedded throughout, you're automating manual tasks, plus getting fast insights for your next move.

Whether you're competing on your home turf or looking to conquer international markets, NetSuite helps you get the W. Over 40,000 businesses have already made the move to NetSuite, the number one cloud ERP. Right now, get the CFO's Guide to AI and Machine Learning at netsuite.com slash stereo. Get this free guide at netsuite.com slash stereo. Okay, guys.

I was joking with my producer Jacob the other day, who's one of Pushkin's most valuable employees. I hired him to be my assistant years ago in the most random manner possible. I think he saw a message board posting somewhere and I interviewed him for basically 10 minutes and said, go for it. I made a wild gamble on someone and got incredibly lucky.

But let's be honest, you can't rely on getting lucky when it comes to hiring people. Lightning's not going to strike more than once. You need a system and you need tools. And that's why LinkedIn is so important. LinkedIn is more than just a job board. They help connect you with professionals you can't find anywhere else. Even people who aren't actively looking for a new job.

In a given month, over 70% of LinkedIn users don't visit other leading job sites. So if you're not looking on LinkedIn, you're looking in the wrong place. Hire professionals like a professional and post your job for free at linkedin.com slash gladwell. That's linkedin.com slash gladwell to post your job for free. Terms and conditions apply.

You just realized your business needed to hire someone yesterday. How can you find amazing candidates fast? Easy, just use Indeed. Stop struggling to get your job posts seen on other job sites. With Indeed Sponsored Jobs, your post jumps to the top of the page for your relevant candidates, so you can reach the people you want faster. According to Indeed data, sponsored jobs posted directly on Indeed have 45% more applications than non-sponsored jobs.

Don't wait any longer. Speed up your hiring right now with Indeed. And listeners of this show will get a $75 sponsored job credit to get your jobs more visibility at Indeed.com slash P-O-D-K-A-T-Z 13. Just go to Indeed.com slash P-O-D-K-A-T-Z 13 right now and support our show by saying you heard about Indeed on this podcast. Terms and conditions apply. Hiring? Indeed is all you need.

OK, now to space today. Byrne and I talked about SpaceX in particular because, you know, it really is the company that launched the modern space industry. And there's this one key trait that SpaceX shares with the other projects Byrne wrote about in the book. It brought together people who share a wild dream. If you go to work at SpaceX, it's probably because you believe in getting humanity to Mars.

Yeah. Yeah. It's not just that you believe in the dream, but when you get the job, you're suddenly in an environment where everyone believes in the dream. And if you're working in one of those organizations, you're probably not working nine to five, which means you have very few hours in your day or week where you are not completely surrounded by people who believe that people will be living on Mars and that this is the organization that will make it happen.

And that just has to really mess with your mind. Like what is, what is normal to an engineer working at SpaceX in 2006 is completely abnormal to 99.9% of the human population. And, you know, most of the exceptions are like six year old boys who just watched Star Wars for the first time. Yeah. I mean, really, as I went through the book, I was like, oh, really? The bubble you're talking about is a social bubble.

Like, the meaningful bubble, like maybe there's a financial bubble attached to it, maybe there isn't. But what really matters is you're in this weird little social bubble that believes some wild thing together, that believes it is not wild, that believes it is going to happen. Like, that's the thing. Yeah. And has the money to act on their wild belief. Yes.

And so, you know, getting the money does mean interacting with the normie sphere, interacting with people who don't quite buy into all of it. But when you have these really ambitious plans and you're taking them seriously, you're doing them step by step. Some of those steps do have other practical applications. And so that is the SpaceX story. It was not just a straight shot of, we are going to invest all the money Elon got from PayPal into going to Mars, and hopefully we get to Mars before we run out.

It was, you know, we're going to build these prototypes. We're going to build reusable rockets. We're going to use those for existing use cases. And we will probably find new use cases. And then once we get really, really good at launching things cheaply, well, there are a lot of satellites out there. And perhaps we should have some of our own. And if we can do it at sufficient scale, then maybe we can just throw a global communications network up there in the sky and see what happens next.

So, yeah, the intermediate steps, each one, it's basically taking the thymos, the spirited, you know, here's our grand vision of the future, here's my destiny, I was put on Earth to do this, and saying, OK, well, the next step is to have enough money to pay rent next month. What do I have to do tomorrow to get to Mars? So is there a space bubble right now?

I think so. I think there is. I think there are people who look at SpaceX and say this is achievable and that more is achievable. They also look at SpaceX and say this is a kind of infrastructure: there are things like doing manufacturing in orbit, or doing manufacturing on the moon, where in some cases that is actually the best place to build something. Basically, because SpaceX has driven down the cost so much of getting stuff into orbit,

new ideas that would have been economically absurd 20 years ago, like manufacturing in space, are now plausible. And so this is a sort of bubble building on itself. And, like, why is it not just an industry now? Why is it a bubble in your telling? It is the feedback loop, where what SpaceX does makes more sense if they believe that there will be a lot of demand to move physical things off of Earth and into orbit and perhaps further out,

and that if they believe that there's more demand for that, they should be investing more in R&D. They should be building bigger and better rockets. And they should be doing the, you know, big fixed-cost investment that incrementally reduces the cost of launches and only pays for itself if you do a lot of them. And then,

If they're doing that, and you have your dream of, we're going to manufacture drugs in space, and, you know, the marginal cost is low once you get stuff up there. Well, that dream is a little bit more plausible if you can actually plot that curve of how much it costs to get a kilogram into space and say, there is a specific year at which point we would actually have the cost advantage versus terrestrial manufacturing.

So it's this sort of coordinating mechanism, like you also write about with Microsoft and Intel in the, like, 80s, 90s, where it's like, oh, they're building better chips, so we'll build better software. And then because they're building better software, we'll build better chips. So this is like a more exciting version of that, right? Because it's going to get even cheaper to send stuff to space, we can build this crazy

factory to exist in space. And then that tells SpaceX, oh, we can in fact keep building, keep innovating, keep spending money. Yes. And so someone has to do just half of that, the half of it that makes no sense whatsoever. That was SpaceX at the beginning, right? That was like just a guy with a lot of money and a...

Crazy. Yeah. It just really helps to have someone who's eccentric and has a lot of money and is willing to throw it at a lot of different things. Like Musk, he spent some substantial fraction of his net worth right after the PayPal sale on a really nice sports car. And then,

immediately took it for a drive and wrecked it, had no insurance, and was not wearing a seatbelt. So the Elon Musk story could have just been this proverb about dot-com excess and what happened when you finally gave these people money is they immediately bought sports cars and wrecked them. Instead, it's a story about a different kind of excess, but it's still...

I guess what that illustrates is the risk tolerance. Yeah. Yeah. There's a risk level where you are going for a joyride in your $2 million car and you haven't bothered to fill out all the paperwork or buy the insurance. And that is the risk tolerance of someone who starts a company like SpaceX. Okay. Enough about space. Let's talk about crypto, formerly known as cryptocurrency. Okay.

Let's talk about Bitcoin. And let's talk about Bitcoin, especially at the beginning, right? Before it was number go up when it was, it really was true believers, right? It was people who had a crazy worldview like you're talking about in these, in these other contexts.

Yes. So we still don't know for sure who Satoshi Nakamoto was. And I think everyone in crypto has at least one guess, sometimes many guesses. But whoever Satoshi was, wherever they were. This is the creator of Bitcoin for the one person who doesn't know. Yeah.

They had this view that one of the fundamental problems in the world today is that if you are going to transfer value from one party to another, you need some trusted intermediary. You need a trusted intermediary like a government agency.

And a bank. Typically, in money, you need both governments and banks the way it works in the world today. Right. Yes. And Satoshi happened to publish the Bitcoin white paper in October 2008, which was a great moment to find people who really didn't want to have to deal with governments and banks when they were dealing with money. Right. Right in the teeth of the financial crisis.

Yes. So yes. So it is in one sense, just this, this technically clever thing. And then in another sense, it's this very ideological project where he doesn't like central banks. He doesn't like regular banks. He feels like all of these institutions are corrupt and you know, your money is just an entry in somebody's database and they can update that database tomorrow and either change how much you have or change what it's worth. And we need to just build something new from a clean slate. And there's also, I think there's this tendency among a lot of tech people to

when you look at any kind of communications technology, and money, broadly defined, is a communication technology, you're always looking at something that has evolved from something simple and has just been patched and altered and edited and tweaked and so on until it works the way that it works. But that always means that you can easily come up with some first-principles view that's a whole lot cleaner, easier to reason about, omits some mistakes. And then you often find that, okay,

you omitted all the mistakes that are really, really salient about fiat, but then you added some brand-new mistakes, or added mistakes that we haven't made in hundreds of years. And so it's full of trade-offs. It gets complicated. But at the beginning, right? So the white paper comes out and, you know...

I did a story about Bitcoin in 2011, which was still quite early. You know, we were shocked that it had gone from $10 a Bitcoin to $20 a Bitcoin. Thought we were reading it wrong. And at that time, like, I talked to Gavin Andresen, who was very early in the Bitcoin universe. Like, he was not in it to get rich, right? Like, he really believed. He really believed in it. And...

That was the vibe then. And like, he thought it was going to be money, right? The dream was people will use this to buy stuff. And one thing that is interesting to me is, yes, some people sort of use it to buy stuff, but basically not, right? Like that.

It would go from $20 a Bitcoin to $100,000 a Bitcoin without some crazy killer app, without becoming the web, without becoming something that everybody uses, whether they care about it or not. That I would not have guessed. And it seems weird. And plainly now, crypto is full of some people who are true believers and a lot of people who just want to get rich, and some of whom are pretty scammy.

Yeah. Yeah. The grifter coefficient always goes up with the price. And then, you know, the true believers are still there during the next 80% drawdown. And I'm sure there will be a drawdown, something like that, at some point in the future. It's just that that's kind of the nature of these kinds of assets.

Bitcoin, it was originally conceived as more of a currency. And Satoshi talked about some hypothetical products you could buy with it. And then the first Bitcoin killer app, to be fair, was e-commerce. It was specifically drugs. Yeah, crime. Yes. It is a very, very libertarian project in that way. So it doesn't work very well as a dollar substitute for many reasons, most of the obvious reasons.

But it is interesting as a gold substitute where part of the point of gold is that it is very divisible and your gold is the same as my gold. And we've all kind of collectively agreed that gold is worth more than its value as just an industrial product. Yeah.
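Byrne's gold comparison is about supply inelasticity, and the "finite supply" point that comes up next is literally part of Bitcoin's protocol: the block reward starts at 50 BTC and halves every 210,000 blocks, so total issuance converges to about 21 million. A minimal sketch of that arithmetic, which is standard, well-documented protocol behavior rather than anything specific to this conversation:

```python
# Bitcoin's issuance schedule: the block subsidy starts at 50 BTC and halves
# every 210,000 blocks, so the total supply converges to roughly 21 million.

def total_bitcoin_supply() -> float:
    reward = 50.0                # initial block subsidy, in BTC
    blocks_per_halving = 210_000
    total = 0.0
    while reward >= 1e-8:        # 1 satoshi = 1e-8 BTC; smaller rewards round to zero
        total += reward * blocks_per_halving
        reward /= 2
    return total

print(f"Asymptotic supply: ~{total_bitcoin_supply():,.0f} BTC")  # ~21,000,000
```

The contrast with gold is that gold's inelasticity comes from geology, while Bitcoin's comes from a rule everyone running the software has agreed to enforce.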

And the neat thing about gold is it's really hard to dig up more of it. Gold supply is extremely inelastic. And Bitcoin is designed to have a finite supply, right? Yes. Important analogy, yeah. Yes. More generally, it's, what, a long time out now? It's, you know, 17 years or something since the white paper. What do you make of the sort of...

costs and benefits of cryptocurrency so far? The costs are more obvious to me. Like, there's a lot of grift. It's, you know, by design very energy-intensive. Like, I'm open to better payment systems; there are lots of just boring efficiency gains you would think we could get that we haven't gotten, right? Yeah, what do you think about the costs versus the benefits so far?

I think in terms of the present value of future gains, probably better off. I think in terms of, yeah, realized gains so far, worse off. So basically, worse off so far, but in the long run, we'll be better off. We just haven't gotten the payoff yet. This is actually a feature of general purpose technologies: there's often a point early in their history where the net benefit has been negative.

What would make it clearly positive? Like, what's the killer return you're hoping to see from cryptocurrency? Yeah. So I think the killer return would be if there is a financial system that is open, in the sense that starting a financial institution, starting a bank or an insurance company or something, is basically: you write some code and you click the deploy button and your code is running. You have capitalized your little entity and now you can provide whatever it is, like,

mean tweet insurance. You're selling people for a dollar a day, you'll pay them $100 if there's a tweet that makes them cry. Weird incentives in your insurance business, I'm going to tell you right now. You know, you get to speed run all kinds of financial history. I'm sure you learn all about adverse selection. But like a financial system where anything can be plugged into something else and basically everything is an API call away is just a really interesting concept. And the fiat system is moving in that direction, but slowly. And just to be clear, like

Why is that better on balance? For it to be net positive, that has to be not only interesting, but has to, like, lead to more human flourishing and less suffering than we would have in its absence. Right.

Yeah. Markets provide large positive externalities. There's a lot of effort in those markets that feels wasted, but markets transmit information better than basically anything else, because what they're always transmitting is the information you actually care about. So, like, oil prices: you don't have to know that oil prices are up because there was a terrorist attack or because someone drilled a dry hole or whatever. What you respond to is just, gas is more expensive and therefore I will drive less, or, you know, energy is cheaper or more expensive and so I need to change my behavior.

So it's always transmitting the actually useful information to the people who would want to use it. And the more complete markets are, and the more things there are where that information can be instantaneously transmitted to the people who want to respond to it, the more everyone's real-world behavior actually reflects whatever the underlying material constraints are on doing what we want to do. The sort of crypto dream there is just

more financial markets, more market feedback, better

financial services as a result. That's the basic view you're arguing for. And it's just a really interesting way to build up new financial products from first principles. And sometimes you learn why those first principles are wrong, but that itself is valuable. Like there is actual value in understanding something that is a tradition or a norm and understanding why it works and therefore deciding that that norm is actually a good norm. Good.

Last one. All right. You know what it's going to be. You tell me what the last one is. Is AI a bubble? Yeah. But you sound so sad about it. Of course we've got to talk about AI, right? Are you sad we talk about AI? I love talking about AI. It seems exactly like what you're writing about. Yeah. When you hear Sam Altman talk about creating OpenAI, starting OpenAI, he's like,

We basically said, you know, we're going to make AGI, artificial general intelligence, come work with us. And when he talks about it, it's like there was a universe of people who were like the smartest people who really believed who that's what they wanted to do. So they came and worked with us, which seems like exactly your story.

Yes. It turns out that a lot of people have had that dream. And for a lot of people, maybe it wasn't what they were studying in grad school, but it was why they ended up being the kind of person who would major in computer science and then try to get a PhD in it and would go into a more researchy end of the software world. So yeah.

Yeah, there were people for whom it was incredibly refreshing to hear that someone actually wants to build the thing. So you have that kind of shared belief. I mean, at this point, you have these other elements of what you're talking about, right? Like a sense of urgency, an incredible amount of money, elements of spiritual or quasi-spiritual belief.

Yes, there are pseudonymous OpenAI employees on Twitter who will tweet about things like building God. So yeah, they're taking it in a weird spiritual direction. But I think there is something...

It is interesting that a feature of the natural world is that if you, you know, arrange refined sand and a couple of metals in exactly the right way and type in the right incantations and add a lot of power, you get something that appears to think and that can trick someone into thinking that it's a real human being.

The is it good or is it bad question is quite interesting here. Obviously, too soon to tell. But striking to me in the case of AI that the people who seem most worried about it are the people who know the most about it.

which is not often the case, right? Usually the people doing the work, building the thing, just love it and think it's great. In this case, it's kind of the opposite. Yeah, I think the times when I am calmest about AI and least worried about it taking my job are times when I'm using

AI products to slightly improve how I do my job. That is, you know, better natural language search or actually most of it is processing natural language when there are a lot of pages I need to read, which contain, you know, if it's like a thousand pages of which five sentences matter to me, that is a job for the API and not a job for me, but it is now a job that the API and I can actually get done and

And my function is to figure out what those five sentences are and figure out a clever way to find them. And then the AI's job is to do the grunt work of actually reading through them. That's AI as useful tool, right? That's the happy AI story. Yeah.
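The workflow he's describing, a thousand pages in which five sentences matter, is easy to sketch. Here is a minimal, hypothetical version using the OpenAI Python client; the model name, chunk size, and prompt are all assumptions, and any chat-completion API would work the same way.

```python
# Sketch: use a language model to skim a long document for the few passages
# that matter, leaving the judgment call to the human reader.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def find_relevant_passages(text: str, question: str, chunk_chars: int = 8000) -> list[str]:
    """Scan the document in chunks and collect passages relevant to the question."""
    findings = []
    for start in range(0, len(text), chunk_chars):
        chunk = text[start:start + chunk_chars]
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical choice; any capable model works
            messages=[{
                "role": "user",
                "content": (
                    f"Question: {question}\n\n"
                    f"Document excerpt:\n{chunk}\n\n"
                    "Quote verbatim any sentences relevant to the question, "
                    "or reply NONE."
                ),
            }],
        )
        answer = response.choices[0].message.content.strip()
        if answer and answer != "NONE":
            findings.append(answer)
    return findings

# The human still decides what counts as relevant and what to do with it;
# the model just does the grunt work of reading.
```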

And I actually think that preserving your own agency is a pretty big deal in this context. So I think that if you are making a decision, it needs to be something where you have actually formalized it to the extent that you can formalize it, and then you have made the call. But for a lot of the grunt work, AI is just a way to massively parallelize having an intern. Plainly, it's powerful. And you're talking about what it can do right now.

I mean, the smartest people are like, yes, but we're going to have AGI in two years, which I don't know if that's right or not. I don't know how to evaluate that claim, but it's a wild claim. It's plainly not obviously wrong on its face, right? It's possible.

Can you even start to parse that? You're giving sort of little things today about, oh, here's a useful tool and here's a thing I don't use it for. But there's a much bigger set of questions that seem imminent. You know, there are certain kinds of radical uncertainty there. You know, I think it increases wealth inequality, but also means that intelligence is just more abundant and is available on demand and is baked into more things.

I think that you can definitely sketch out really, really negative scenarios. You could sketch out not end of the world, but maybe might as well be for the average person scenarios where every white collar job gets eliminated and then a tiny handful of people have just unimaginable wealth and rearrange the system to make sure that doesn't change.

But I think there are a lot of intermediate stories that are closer to just the story of, say, accountants after the rise of Excel, where there were parts of their job that got much, much easier. And then the scope of what they could do expanded. It was the bookkeepers who took it on the chin, it turns out. Like, Excel actually did drive bookkeepers out of work and it made accountants more powerful. Yeah.

Yeah. So, you know, I think within a kind of company function, you'll have specific job functions that do mostly go away, and then a lot of them will evolve. And so the way that AI seems to be rolling out in big companies in practice is they generally don't lay off a ton of people. They will sometimes end outsourcing contracts, but in a lot of cases, they don't lay people off.

They change people's responsibilities. They ask them to do less of one thing and a whole lot more of something else. And then in some cases, that means they don't have to do much hiring right now, but they think that a layoff would be pretty demoralizing. So they sort of grow into the new cost structure that they can support. And then in other cases, there are companies where they realize, wait, we can ship features twice as fast now. And so our revenue is going up faster. So we actually need more developers because our developers are so much more productive. We'll be back in a minute with the lightning round.

Okay, let's finish with the lightning round.

So the most interesting thing you learned from an earnings call transcript in the last year? Most interesting thing from a transcript in the last year, I would say there was a point

This might have been a little over a year ago. There was a point at which Satya Nadella was talking about Microsoft's AI spending. And he said, we are still at the point, and I think he and Zuckerberg both said something to the same effect and in the same quarter, which was very exciting for NVIDIA people. But it was like, we're at the point where we see a lot more risk to underspending than to overspending on AI specifically. Yeah.

That really speaks to your book, right? That really is like bubbly as hell in the context of your book, like overspending, like the Apollo missions, like the Manhattan Project, like the big risk is that we don't spend enough.

And also they know that their competitors are listening to these calls too. So they were also saying that this is kind of a winnable fight, that they do think that there is a level of capital spending at which Microsoft can win simply because they took it more seriously than everybody else. So he's like, yes, we're going to spend billions and billions of dollars on AI because we think we can win. And Zuckerberg, implicitly, the same. Yeah.

What's one innovation in history that you wish didn't happen? I wish there were some reason that it was infeasible to have really, really tight feedback loops for consumer-facing apps, particularly games. Is that a way of saying you wish games were less addictive?

Yeah, I wish games were less addictive, or that they weren't as good at getting more addictive. So I wrote a piece in the newsletter about this recently, because there was that wonderful article on the loneliness economy in The Atlantic a couple weeks back that was talking about how one of the pandemic trends that has mean-reverted the least is how much time people spend alone. And I think one of the reasons for that is that all the things you do alone,

they are things that produce data for the company that monetizes the time that you spend alone. And so the fact that we all watched a whole lot of Netflix in the spring of 2020 means that Netflix has a lot more data on what our preferences are. So they got better at making us want to watch Netflix and all the video games we played on our phones got better at making us addicted to keep playing video games on our phones. Yeah, that's a bummer. It's a bummer. Yeah.

What was the best thing about dropping out of college and moving to New York City at age 18? So I would say that it really meant that I could and had to just take full responsibility for outcomes and that I get to take a lot more credit for what I've done since then, but also get a lot more blame where there isn't really a brand name to fall back on.

And so if someone hires me, they can't say this person got a degree from institution X. You know, I dropped out of a really bad school, too. So there's not even the extra upside of, you know, my startup was so great, I just had to leave Stanford after only a couple of semesters. No, it was Arizona State, and I didn't even party. So, yeah.

Yeah. But yeah, it's that. It's just being a little more in control of the narrative and also just knowing that it's a lot more up to me. What was the worst thing about dropping out of college and moving to New York at 18?

So one time I went through a really, really long interview process for a job that I really wanted. And at the end of many, many rounds of interviews and, you know, work sessions and lots of stuff, the hiring committee rejected me because I didn't have a degree, and that was on my resume. So that was kind of inconvenient. I guess another downside: it might've been nice to spend more time with fewer obligations and access to a really good library.

Byrne Hobart is the co-author of Boom: Bubbles and the End of Stagnation. Today's show was produced by Gabriel Hunter Chang. It was edited by Lydia Jean Cott and engineered by Sarah Brubier. You can email us at problem at pushkin.fm. I'm Jacob Goldstein, and we'll be back next week with another episode of What's Your Problem?
