Look at AI right now. Billions of dollars being spent. The flurry of all of this benefits us as consumers, always and everywhere. Just wait, and they'll compete and compete, and it will accrue to us as users. How do we create an unfair advantage in a world of AI? I think the limits to human intelligence are rooted in our biology. AI will, over time,
understand us in many ways better than we understand ourselves. We can still beat computers at chess. Done. We can still beat them in Go. Done. We can beat them in video games. Done. Okay, but we still have creativity. Done. All these things have been trained on the sum total of all human creation. And now they're being trained on the sum total of human creation plus artificial creation. I'm absolutely convinced that
we are going to have machines doing science 24-7. I just think it's going to be part of the total overture of creation, and I think it's a beautiful thing. Welcome to The Knowledge Project. I'm your host, Shane Parrish. In a world where knowledge is power, this podcast is your toolkit for mastering the best of what other people have already figured out.
If you want to take your learning to the next level, consider joining our membership program at fs.blog. As a member, you'll get my personal reflections at the end of every episode, early access to episodes, no ads, including this one, exclusive content, hand-edited transcripts, and so much more. Check out the link in the show notes for more.
While others ask what's trending, Josh Wolfe asks what seems impossible today. He's built a career betting on scientific breakthroughs that most people don't believe can happen. As co-founder of Lux Capital, he's backed companies cleaning up nuclear waste and building brain-computer interfaces. But here's the contradiction: despite investing in technology that could make humans obsolete, Josh is profoundly optimistic about human potential.
His thinking challenges conventional wisdom. While most see AI as automation and threats to humanity, Josh sees it as a catalyst for human achievement.
In this conversation, we explore this paradox, diving deep into how technological evolution can amplify rather than diminish what makes us human. From geopolitical power shifts to the future of human creativity, Josh reveals the exact frameworks he uses for seeing what others miss and betting on this seemingly impossible future. It's time to listen and learn.
Start with what you're obsessed with today. What's on your mind? Well, first and foremost, kids and family. Trying to be a good dad, good husband. Technologically obsessed with so many different things. We were just in a partnership meeting, and probably the most interesting thing at the moment is thinking about the speed of certain technologies, like the actual physical technologies, and where the bottlenecks are. So in biology, you've got all
kinds of reactions that nature figured out, that evolution figured out, enzymes and catalysts and things that can speed up reactions. But you can't move faster than the speed of biology. Now you think about AI, totally different field, but the same underlying philosophical principle. If you've tried ChatGPT Operator,
it can only move at the speed of the web. And at that, it's, with latency, a little bit slow. So we're thinking about what are the technologies that can accelerate these things that have these natural, almost physics limits, whether those limits are biological or digital.
So that is something that at the moment I'm obsessed with in part because I have ignorance about it. And when I have ignorance about it, my competitive spirit says, how do I get smart about this? How do I get some differentiated insight? How do I know what everybody else thinks and then find the sort of white space answer? So that's one big thing. Obsessed with geopolitics. It is the great game.
Everything from US-China competition to the Iran-Israel axis of religious conflict, to the Sahel and Maghreb in Africa, which is not an area that a lot of people talk about or think about, but that I believe, with low probability but super high magnitude of import, is going to become the next Afghanistan. That region in the Sahel is seeing
all these coups, and cascades of coups and failed states, where you have Russian mercenaries, violent extremists, the Chinese CCP coming in with infrastructure and influence, and European colonial powers being kicked out.
Against this backdrop, you have brilliant scientists and engineers and technologists that went to HBS or Stanford and worked at Meta and Google and are going back, particularly to places like Ghana and Kenya and Nigeria, and building businesses. So that continent is going to be a truly contested space, for future and progress and for utter chaos and terrorism.
So, yeah, it's a wide spread of stuff that I'm probably currently obsessed with. Let's talk about all those things. Let's start with sort of diving in. You mentioned ChatGPT Operator. Yes. And the limitation being that we're moving at the speed of the web. Those systems are all designed for humans. At what point do they become designed for AI first, with humans using them second? Or do we just have two simultaneous interfaces? I think it's going to be both. I think that there's always this ignorance arbitrage,
where somebody figures out that there's an opportunity to take advantage of and improve a system while people don't understand it. And so I think that there are people that are probably gonna launch redesign businesses where they say, we will optimize your web pages, just like people did for search engine optimization when Google rose: you had to be more relevant because Google was so important. Google was influencing whether or not people would see you or discover you. And so if there are certain tasks, like OpenTable or restaurants or shopping,
they are now going to start to shift their user interface, not just for the clicks of a human, but for the clicks of an AI that's doing research on the best product. And the way that those are going to negotiate, and in some cases influence or trick the judgment and the reasoning of the AI, is really interesting.
So I think that that is probably the next domain where those things are going to get better, faster. They're going to have more compute. But then people are going to be redesigning some of these experiences, not for us and the user interface of humans, but machine to machine. And you're already starting to see this, where there was one instantiation of R1 communicating with another instance of R1, but the language they were communicating in was not English or Chinese or even traditional code. It was this weird, almost alien language. And so I think you can see a bunch of that.
There's another adjacent theme, which I think is also really interesting, which is AI portends the death of APIs.
So APIs allow Meta, with their Orion glasses, to be able to communicate with Uber and Spotify through the backend, through software. But increasingly, those things are complicated. They're hard to negotiate. There's a lot of legal, there's API calls, there's restrictions, there's privacy, there's controls. But if you're one of these companies, you're like, I don't want to go through all of that. Can I just use AI to pretend that I'm a user?
In fact, I had this experience where I was using Operator and I had this moral dilemma for a split second: do I click to confirm that I'm not a bot? Because I had to take control over it, because the bot actually was trying to do my research on my behalf. And so you see a world where APIs, which have been the plumbing of everything in SaaS and software, negotiating behind the scenes, may start to lose influence and power to AIs that are able to basically just negotiate on the front end as though they were a user.
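A minimal sketch of the contrast being described here, in Python. The `requests` call on the API side is real but points at a placeholder URL, and the `BrowserAgent` class is a hypothetical stand-in for an Operator-style agent, not an actual library.

```python
# Two ways a program can book a table: the traditional API route, and the
# "pretend to be a user" route described above. BrowserAgent is hypothetical.

import requests  # real library; the endpoint below is illustrative only


def book_via_api(api_key: str) -> dict:
    """Classic backend integration: keys, contracts, rate limits."""
    resp = requests.post(
        "https://api.example-reservations.com/v1/bookings",  # placeholder URL
        headers={"Authorization": f"Bearer {api_key}"},
        json={"party_size": 2, "time": "2025-06-01T19:00"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


class BrowserAgent:  # hypothetical interface, not a real package
    """An AI agent that drives the same website a human would."""

    def act(self, instruction: str) -> str:
        raise NotImplementedError("stand-in for an Operator-style agent")


def book_via_agent(agent: BrowserAgent) -> str:
    # No API key, no contract: the agent clicks through the front end,
    # which is why the "are you a bot?" checkbox becomes a moral dilemma.
    return agent.act("Book a table for two at 7pm tomorrow on the restaurant's site")
```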
So I think that that whole domain is going to very rapidly evolve in like a quarter or two. You mentioned sort of the limitations, like biology moves at a certain speed. There's a couple subset of questions here, but one is, where's the limitation in AI growth right now? It seems like we have energy as a key input, we have compute as a key input, and then we have
data slash algorithms as like the next key input? What am I missing? What's the limitation on those? Well, start with conventional wisdom, which has heretofore been correct, which is you need more compute, you need more capital, scaling laws for AI, just throw more GPUs and processors and money and ultimately energy to support that, and you will get better and better breakthroughs. The counter to that, which you're seeing in some cases with open source or people that have now trained on some of the large models, is that
there's going to be a shift towards model efficiency.
And so that's number one, that people are going to figure out how do we do these more efficiently with less compute. Number two, which is a big sort of contrary thesis that I have, is that a significant portion of inference, so if you break down training, you still need large 100,000-ish clusters of H100 top performing chips. It's expensive. Only the hyperscalers here before have been able to do that. You can do some training if you're using things like Together Compute and some of our companies without having to do that yourself, sort of like going back to
the on-prem versus colo versus cloud transition 15 years ago. But I think that you're going to end up doing a lot of inference on device, meaning instead of going to the cloud and typing a query, like 30 to 50 percent of your inference may be on an Apple or an Android device. And if I had to bet today, it's Android, because of the architecture, over Apple. But if Apple can do some smart things, maybe they can catch up to this. But some of the design choices and the closed aspects, which have
been great for privacy as a feature, may hurt them in this wave. And you could already see, like, Perplexity can actually be an assistant on my Android device. I carry both so I can understand both operating systems. You can't do that yet on iOS and Apple. But here's the insight: if 30 to 50% of my inference is my cached emails and my chats and my WhatsApp and all the stuff that I keep on my device, my photos, my health data, my calendar, then the memory players may play a much bigger role.
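As a rough back-of-envelope for why on-device inference pushes attention to memory, here is a small sketch. The parameter counts and bit-widths are illustrative assumptions, not claims about any particular phone or model.

```python
# Weight storage only; ignores KV cache and activations, which add more.

def model_footprint_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate bytes needed to hold the weights, in gigabytes."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for params in (3, 7, 13):          # small "on-device class" model sizes (assumed)
    for bits in (16, 8, 4):        # fp16 vs int8 vs 4-bit quantization
        print(f"{params}B params @ {bits}-bit ~ {model_footprint_gb(params, bits):.1f} GB")

# A 7B model at 4 bits is roughly 3.5 GB of weights, which is why phone
# DRAM capacity, and the companies that make it, start to matter.
```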
So now you have Samsung, SK Hynix, Micron that are important here. If I had to come up with somewhat pejorative analogy, I think Samsung is going to be more like Intel. I think that they're just a little bit sclerotic and bureaucratic and slow moving and it's going to sort of ebb and decline. I think Micron
is a U.S. company, which is going to be more constrained by restrictions that are put on export control. And I think SK Hynix, being a Korean company, is going to be able to skirt those in the same way that NVIDIA has with, you know, distribution to China through Singapore and Indonesia and Malaysia, which is now being investigated. But the memory players are likely to be ascendant. And so what heretofore was a bottleneck on compute could shift dramatic amounts of
attention, talent, and money into new architectures where memory plays a key role in small models on device for inference. This episode is brought to you by Bank of America. What if your business could see beyond what is and into what can be? And what if you had a partner as visionary as you are?
Bank of America gives customers access to trusted experts, real-time insights, and digital tools to make every move matter. What would you like the power to do? Visit bankofamerica.com slash banking for business. Bank of America is proud to be the official bank sponsor of FIFA World Cup 2026. This episode is brought to you by Shopify.
Forget the frustration of picking commerce platforms when you switch your business to Shopify, the global commerce platform that supercharges your selling wherever you sell. With Shopify, you'll harness the same intuitive features, trusted apps, and powerful analytics used by the world's leading brands. Sign up today for your $1 per month trial period at shopify.com slash tech, all lowercase. That's shopify.com slash tech.
I've always thought of memory as like a commodity. It doesn't matter if I have a Micron chip or a SanDisk or a Samsung. And that's what people thought about CPUs back in the day, and then GPUs for just, you know, traditional video graphics. And then the AI researchers came and said, well, wait a second, we can run these convolutional neural nets on these GPUs. And they reached into somebody else's domain of
PlayStation and Xbox and pulled them in and suddenly, you know, it lit up this phenomenon that turned NVIDIA from 15 billion to, you know, two and a half or three trillion. Do you think NVIDIA has got a long runway?
I think that their ability with that market capitalization and that cash and the margin that they have, they can reinvent and buy a lot of things. So I think Jensen is a thoughtful capital allocator. I think he's benefited and caught lightning in a bottle over the past 10 years, and now just particularly in the past six. I would not count NVIDIA out. Now, what's the upside from $2.5 or $3 trillion to $5 trillion or $10 trillion, or do they go back down? I have no idea.
So that's more of a fundamental valuation based on the speculation. But if you have 90, 95% margins and your chips are $30,000, could you shrink them down and sell for $3,000 and take small margins but get more volume? And this was some of the debate that I think happened with the release of DeepSeek, where you had Satya basically talking about Jevons paradox: any one thing might get more efficient, but the result is not that you have less demand. In aggregate, you have much more demand.
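A toy illustration of Jevons paradox, assuming a constant-elasticity demand curve. The elasticity value is an assumption chosen to make the point, not an empirical estimate.

```python
# Quantity demanded rises as cost falls: Q = k * cost^(-elasticity).

def total_consumption(cost_per_unit: float, elasticity: float = 1.5) -> float:
    k = 100.0  # arbitrary scale constant
    return k * cost_per_unit ** (-elasticity)

before = total_consumption(cost_per_unit=1.0)
after = total_consumption(cost_per_unit=0.5)   # efficiency halves the unit cost

print(f"units consumed before: {before:.0f}")   # ~100
print(f"units consumed after : {after:.0f}")    # ~283
print(f"total spend before   : {before * 1.0:.0f}")
print(f"total spend after    : {after * 0.5:.0f}")
# With elasticity > 1, halving the unit cost more than doubles usage,
# so aggregate spend goes up: cheaper compute, more total demand.
```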
The classic example of this is refrigerators. A single refrigerator back in the day was
an energy hog. All of a sudden, you make these things more efficient. And what happens? It becomes much cheaper. So if something's cheaper, you're going to buy more of it. And they shrunk refrigerators down. And so now everybody had one in their garage and in their office and in their basement. And so the aggregate demand for electricity and then for all the components and coils and refrigerant went up, not down. Same thing with bandwidth. If you have
56k baud modem, which was like my first modem, you know, dial up internet and all that kind of stuff on CompuServe. And, you know, then you go to a T3 and fiber optic, you know, at the speed of light, it is way more efficient, but your usage now is just huge. You're streaming 4k videos and watching Netflix and trading on Bloomberg. Whereas if you actually want to decrease use,
The non-obvious thing that you would want to do is actually slow down speed. I mean, put me on a 56K baud modem today, I'd just like pull out my hair and never use it. You couldn't even use Gmail. Right, exactly. Where do you think intelligence is the limiting factor of progress? Human intelligence. I think that the great thing about us is that we are, I don't know, 60 or 70% predictable in that so many of the...
foibles and virtues and vices of man and woman are Shakespearean. You know, they're hundreds of, I mean, tens of thousands of years old for modern evolution. But with that, we are still irrational. You know, Danny Kahneman was a friend. Danny passed last year, just an amazing guy, but he could document all the heuristics and all the biases, which you study and write about. And he's like, I'm still...
a victim of them. Even knowing them doesn't really insulate me from falling for them. It's just like an optical illusion: you can know it's an optical illusion, but you still see it and you still fall for it. So I think the limits to human intelligence are rooted in our biology. And we have all of this embodied intelligence that has sort of been externalized in calculators and in computers that help us to overcome that.
And I certainly feel today that a significant portion of my day might be spent with a colleague riffing on something, and sometimes that's great for a muse, or for tapping into information or intelligence or some tidbit or a piece of gossip or an experience that they had. And that's why cognitive diversity is really important. I'm complementing that with
all-day chats with Perplexity and Claude and OpenAI and, you know, any number of LLMs that might hallucinate just like a friend and might be wrong about something, but might give me some profound insight. I think what I'm interested in is at what point will machines, like if intelligence is the limiting factor on progress in certain domains or areas, right?
It strikes me that in the near future, the machines will be able to surpass human intelligence. For sure. And if that's the case, then those areas are ripe for either disruption or rapid progress. Well, I think, you know, what Peter Drucker never imagined when he was talking about knowledge workers and the shift from blue-collar workers to white-collar workers was that machines would actually most threaten those professions.
And so you take some of the most elite professions, doctors, the ability to do a differential diagnosis. Today, I take my medical records as soon as I get them from top doctors and I still put them into LLMs to see what did they miss. And sometimes it unearths some interesting correlations. Is there a scientific paper from the past 10 years that might have bearing on this particular finding or something in a blood test? So that is really interesting. Lawyers, language is code.
Multi-billion dollar lawsuits sometimes come down to the single placement and interpretation by a human of a word. One of the things that Danny Kahneman recognized, and he published this in his last book, Noise, was that the same case, the same information, the same facts presented to the same judge at different times of day,
or presented to different judges, produce different outcomes. They are not objective. And he actually thought that, for justice and fairness, you would want the human intelligence applied to these situations, with their biases, to be either complemented or replaced by AIs that had a consistency and a fidelity in how they made decisions. So those are all high-paying jobs with lots of training, lots of time to gain the experience, the reasoning, the intelligence, and many of those things are at risk.
Government itself: the ability to legislate, to make decisions. You want to be able to capture and express the will of the people. But increasingly we have social media that's able to do that. It could be corrupted, but there are mechanisms to figure out how do you really surface what the populace cares about. And some in Europe have tried to do these things. The key thing is always, what's the incentive and what's the vector where somebody can come in and corrupt these things?
But interestingly, I actually think that the people whose jobs are like most protected in this new domain are blue collar workers.
A robot today can't really fully serve a meal, and they can't effectively fold laundry, even though every humanoid robot demo tries to. And there are still basic jobs that are not low-paying, but they're arguably safer than ever. And this ties into immigration and technology and human replacement. But people that are doing maintenance, people that are plumbers, many of these things are standardized systems.
But it's like the old joke about the plumber that comes in and taps a few things and suddenly the pipes are fixed. He says, how much is it? It's a thousand bucks. A thousand bucks for that? It was a $5 part. He's like, yeah, the part was $5, but the $995 was knowing where to tap and where to put it. And so I think that there's still a lot of this tacit knowledge and
craft and maintenance that is going to be protected against the rise of the machines that are going to replace most of the white-collar intelligence and knowledge workers. Do you believe, I think it was Zuck who came out and said, you know, by mid this year, 2025, that AI would be as good as a mid-level engineer? In coding. In coding. Yeah, for sure. What are the implications of that? Walk me through the next 18 months, if that's true.
Well, again, if you take this in the frame of Jevons paradox, then a lot more people are going to be able to code in ways that they never have. And in fact, I think it was Andrej Karpathy who was on Twitter a day or two ago talking about how he himself, as a coder, was basically just talking in natural language and having the AI, and I forget which one he was particularly using, generate the code. And then
if it tweaked something and he wanted to change a design and make a particular sidebar a little bit thicker or thinner, he would just say so, and it was able to do that. So I think the accessibility for people who never coded, never programmed, to be able to come up with an idea and say, oh, I wish there was an application that could do X, Y, and Z, and to be able to quickly do that, is great. For the big companies who employ many coders that are competing at an ever faster speed,
you know, you have somebody like Marc Benioff who's saying that they're not hiring any more coders at the same time that he's still talking about the primacy of SaaS, which is this weird contradiction. But I would suspect that maybe you lose 10 to 30% of the people that you normally would have hired, but the people that are there are still like these 10x coders, and now they have a machine that's helping them be like 20 to 100x. Yeah.
Do you think margins go up then for a lot of these companies? I don't know. I always feel like margins are always fleeting in a sense, because it's like a fallacy of composition: one company stands up a little bit higher and then everybody else is on their tippy toes. So I think it just changes the game, but I don't think that you have some permanent margin. The only time you get really large margins is when you truly have a monopoly. Like NVIDIA today: until there's an alternative, whether in architecture or algorithms or in something else, you know, they've got dominant margins because they can charge
super high prices because there is no alternative. So when you have that, there is no alternative, but in many domains, given enough time, there's an alternative and then margins just resettle. And look at cars, you know, the average margin on cars, cars today are
10,000 times better by every measure of fuel efficiency, comfort, air conditioning, satellite radio. But those margins never persisted as being like permanently high. I always come back to sort of Buffett in the 60s with the loom, right? I always relate everything to the loom because everybody was coming to him when he first took control. And they're like, oh, we get this new technology. And he's like, yeah, but all the benefits are going to go to the customer. It's not going to go to me. I mean, look at AI right now. Billions of dollars being spent.
all the foundation models, all the competition. The second that DeepSeek comes out, it suddenly accelerated the internal strategic decisions at OpenAI of when they're going to release models. And so the flurry of all of this benefits us as consumers, always and everywhere. And so we were looking at internally installing some new AI system
to surface all of our disparate documents. And there's a bunch of these. And our Gmail, our Slack, our Google Docs, our PDFs, our legal agreements, and just have a repository with permissions and all this. And it's expensive, in part because going back to that ignorance arbitrage, somebody could charge us a lot of money to do that implementation.
And my default was just wait. Why don't we wait six months? Because this is going to be available from all of the major LLM providers today that want to get the enterprise accounts. And let's just wait and they'll compete and compete and it will accrue to us as users.
Talk to me about all these models. People are spending hundreds of billions, if not trillions of dollars around the globe competing on a model. Do you think that's the basis of competition or how does that play out? And then you have Zuck who's trying to open source it and he spent, I don't know what, 60 to 100 billion probably by the end of June this year open sourcing it. So he's basically like, I wouldn't say he's doing it for free, but what's his strategy there?
First, you have just straight head-to-head competition, you know, Anthropic and Claude, ChatGPT and OpenAI, and others. Then you have sovereign models. So there are countries that are saying we don't want to be beholden to the U.S. or to China. We funded a company in Japan called Sakana. This is one of the lead authors from the Google Transformer paper, a guy, David Ha, and just an incredible team.
And they are actually trying to do these super efficient, novel architectures. So they're not trying to train on these multi-hundred-thousand-GPU clusters. Their latest model, which was based on this evolutionary technique, was trained on like eight GPUs, which was wild. So that's one trend. But on the strategic question for Zuck, I actually think that he's probably playing it the smartest of everybody, which is, and he's been open about this: we're going to open source the models with Llama and we're going to let people develop on them. Why? Because
the real value is going to be in the repository of data, longitudinal data, deep data. If you go back 10, 15 years, like the number one thing in tech was big data, big data, big data. Okay, well now if you actually have big data, you want to use whatever models are out there to run on your proprietary silo of data. So the people that I think are going to be advantaged: Meta. Why? They've got all my WhatsApp messages. Apple doesn't. Meta does.
They've got all my Instagram likes and preferences and every detail of how long I spend and linger on something and what I post and all of that content, my Facebook, which I don't really use anymore other than when Instagram cross-posts to it. But that is super valuable. And they care about that in part because Zuck needs to route around both Apple and Google. He does not have a device. I mean, you've got Oculus and Meta Quest and whatnot, but that's not the one.
This Orion, with the neural band from the company CTRL-labs that we funded, which was for the non-invasive brain-machine interface to be able to use free gestures, which is an absolute directional arrow of progress, right? Disintermediating the control surfaces you have, remote controls and all that kind of stuff, and just being able to gesture, to map a device to your human body, is absolutely the trend. But he's thinking about how do I route around these devices and how do I have a long repository
of everybody's information and use the best model that's out there. And the great thing about open source is it'll continue to improve over time. So I think that that's a winning strategy. I think the people that are continuing to develop ever better models, unless they have proprietary data, are going to be sort of screwed. Bloomberg should do really well.
I mean, the huge amount of proprietary information, all the acquisitions that they've done over time, being able to normalize merger data and historic information and longitudinal price information and correlations between different asset classes, being able to run AI on top of that is like a quant's dream.
So I think that people that have hospital systems, arguably some governments that have used their data efficiently, but anybody that has a proprietary source of information, clinical trials, failed experiments inside of pharma companies, being able to do that is the real gold. And the large language models, over time, I think, are effectively going to trend toward zero, a commodity excavator of that value.
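A minimal sketch of what "commodity model over proprietary data" can look like in practice: retrieve from your own silo, then hand the context to whichever frontier model is cheapest that week. The `call_llm` function is a hypothetical placeholder, not any vendor's SDK, and the documents are invented; retrieval here uses scikit-learn TF-IDF as a stand-in for a real vector store.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented examples of the kind of siloed, proprietary records being described.
proprietary_docs = [
    "Q3 clinical trial arm B showed elevated liver enzymes at 40mg dose.",
    "Merger diligence memo: target's churn concentrated in SMB segment.",
    "Internal postmortem: failed assay traced to reagent lot variability.",
]

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank the silo's documents by similarity to the query."""
    vec = TfidfVectorizer().fit(docs + [query])
    scores = cosine_similarity(vec.transform([query]), vec.transform(docs))[0]
    ranked = sorted(zip(scores, docs), reverse=True)
    return [doc for _, doc in ranked[:top_k]]

def call_llm(prompt: str) -> str:
    raise NotImplementedError("swap in whichever commodity model you like")

def answer(query: str) -> str:
    context = "\n".join(retrieve(query, proprietary_docs))
    return call_llm(f"Using only this context:\n{context}\n\nAnswer: {query}")
```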
So the moat is really in the data, you think? I think so. Because everything will be sort of comparable running on top of it. But the data sitting by itself is like an oil well that isn't tapped, you know, or a gas field that isn't fracked. So it needs to be extracted. And I think that
Most likely open source, but in some cases, enterprise partnerships between Anthropic or OpenAI with some of these siloed data sets will unleash a lot of value. So aside from meta, what counterintuitive sort of public companies would you say have like really interesting data sources? That's a good question. I haven't really spent a lot of time on that to figure out who's got crazy amounts of proprietary data available.
Pharma would be a good one, because obviously they're tracking both their successful and their unsuccessful clinical trials. There's a lot of information in the unsuccessful data, like the things that failed that you can learn from. You could argue that Tesla, of course, who I'm very publicly critical of, if they truly are collecting a ton of
road user data from every Tesla that's being driven, that would be valuable. Anything where there's a collection, a set of sensors, a repository of information that is owned by them. Anything that we've signed off on that your data is...
free for us to use, like Meta. You think of Tesla, they should have the best mapping software in the world. They literally drive millions of miles every day, right? They can update everything. They can locate police, they can locate speed cameras, they can get real-time weather patterns. Yeah, totally. Yeah. But okay, the flip side of that, though, right, taking sort of the opposite view for a moment: Netflix.
Netflix has all of our viewing data. They know what we like. They know what you like. They can make a perfect set of channels for you. And the recommendations are reasonably good approximations of adjacencies to things that you liked, but they haven't been successful, nor has the human algorithm at, say, HBO in the past, at perfectly creating the next show that you really want to see. And what's interesting about that is they've put a lot of money into this, but it hasn't yielded
the recipe maker for like the next perfect show. And oftentimes the thing that you want to see is almost something that's orthogonal from what you've been watching. Like I heard Anthony Mackie, who's in, you know, the latest Marvel movie talking about an expectation that he'll be, somebody was like, how long do you think you'll be doing Marvel movies? And he's like, I think probably like the next 10 years. And I'm like, probably two or three, because people are just bored of this stuff after a while. Like nobody wants to
see another, I don't want to see another Marvel movie. I like the adjacency of, like, The Boys. Yeah. Which was like the dark, you know, superhero kind of show. And I think trying to find groups that have proprietary data that have some predictive value, the most value probably for society, I don't know if it'll be entirely captured by companies, is just all the scientific information that we have.
Because I'm absolutely convinced that we are going to have machines doing science 24-7. Well, so talk to me a little bit about it. I want to come back to Tesla in a sec, but let's go down the science. Why has nobody sort of taken every study published in a domain, say...
Alzheimer's research popped it into GPT and be like, where are we wrong? What studies have been fabricated or proven not true that we're investing research in, right? Like, cause studies get built on studies and studies. And so if something from the eighties came out and it's like completely false, we've probably spent $20 billion down this rabbit hole, right?
And what's the next most likely thing to work? Is anybody doing that? I have to imagine they are, because Deep Research came out, you know, today or in the past 24 hours from OpenAI, which is sort of their model with a better engine, so to speak, than Google's Deep Research, which itself was impressive, both because of its ability to search many sources and then the ability to, you know, I think it was either there or through NotebookLM, conjure the podcast, which
at first was a static presentation of near-human-quality voice, but now you can interrupt it like a radio call, which is super cool. But you can say, go through the past 15 years of PNAS papers or Science and Nature papers around this particular topic and find correlations between papers that do not cite each other,
or tell me any spurious correlations. And the beauty of all of that, on sort of the information or informatic side, is eventually you will have a materials-and-methods output that you can feed into something like Benchling or some of the automated lab players to actually say, run the experiment. So I'm absolutely convinced, like, high certitude. I don't know exactly which company will do it. We've invested in some, they haven't worked.
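As a rough sketch of the "papers that overlap but never cite each other" query described above: a keyword-overlap heuristic stands in for the semantic comparison an LLM pipeline would actually do, and the paper records are invented for illustration.

```python
# Flag topically similar papers with no citation link between them.
papers = {
    "A": {"terms": {"amyloid", "microglia", "tau", "inflammation"}, "cites": {"C"}},
    "B": {"terms": {"microglia", "inflammation", "blood-brain barrier"}, "cites": set()},
    "C": {"terms": {"tau", "phosphorylation"}, "cites": set()},
}

def uncited_overlaps(papers: dict, min_shared: int = 2):
    """Yield pairs with substantial topical overlap but no citation link."""
    ids = sorted(papers)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            shared = papers[a]["terms"] & papers[b]["terms"]
            linked = b in papers[a]["cites"] or a in papers[b]["cites"]
            if len(shared) >= min_shared and not linked:
                yield a, b, shared

for a, b, shared in uncited_overlaps(papers):
    print(f"{a} and {b} overlap on {shared} but never cite each other")
```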
We'll invest in more. Hopefully they will. But this directional arrow of progress, of the idea, by analogy, of machines doing science 24-7, automated, is going to happen. I'll give you one or two analogies. If you were a musician back in the day, if you and I were starting a band, we would have to go and get studio time here in New York City, at Electric Lady Studios or wherever. You bring your instruments. Okay, maybe you could rent the instruments there. And
Then GarageBand and Logic and Pro Tools pops up, and now we don't have to be in the same physical space. My instrument is virtualized. I can create a temporal sequence of notes. I can layer them in. You could play drums. You could do vocals, blah, blah, blah. Science is the same thing. I can be on the beach in the Bahamas and conjure a hypothesis and use one of the AIs to test the hypothesis and look at past literature searches.
See freedom to operate. See if there's white space. And then tell one of these cloud labs that is literally like sending something to AWS back in the day and say, run this experiment. And the beauty of this is the robot will do the, well, here's the beauty, the virtue and the vice. The robot will do the experiment and it should do it perfectly because it's digital and it's
high fidelity. The vice is, so much scientific breakthrough has often happened because of serendipitous screw-ups. And so you want almost to engineer, like a temperature on an AI model, a little bit of stochastic randomness, so that the machine can sort of screw it up to see what might happen. Because, you know, penicillin and Viagra and rubber and vulcanization, all these things happened by random processes. And then post facto, we're like, huh, that's funny. And then, you know, you run with it. But then the machine will say, here's the results.
And it will then reverse prompt you and say, do you want to run the experiment again, but changing the titration of this to 10 milliliters instead of five? And you just click a button from the beach and you're like, yes, and the robots run it. Whoever ends up creating and building that
I think is going to make a fortune. Well, you don't even have to decide. The robots could decide. Yes, right, right. And you're sort of out of the loop and it just outputs science. Totally. That would be so interesting. Before we get to that point, we'll probably get to the point where models make themselves better. Is that the point where it really starts to go almost parabolic? I don't know. I definitely see that models can improve, because you can even argue,
like, DeepSeek's R1 is a model that was improving upon outputs from ChatGPT and so on. So
I definitely think that there will be this recursive improvement, but you're still going back to being rate limited by time and biological or chemical reactions. You still need to instantiate this into a physical experiment. And so you can model and simulate all you want, but then you actually have to like do the thing and make the compound. And so those still take steps and organic chemistry and there's like 20 reaction steps and people optimize to like reduce them down to six.
And you still need the physical reagents and the right temperature and the experimental design. So I still think that that's going to be the bottleneck. But for sure, like the ideation and experimental design is going to, that's just going to absolutely explode.
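A toy version of the closed loop sketched earlier: propose an experiment, dispatch it, perturb one parameter with a little engineered randomness (the "temperature"), and queue the follow-up run. The `cloud_lab_run` function is a hypothetical stand-in for a real automated-lab API, and the yield it returns is mocked.

```python
import random

def cloud_lab_run(protocol: dict) -> float:
    """Stand-in for dispatching a physical experiment; returns a mock yield."""
    return random.uniform(0.0, 1.0)

def next_protocol(best: dict, temperature: float) -> dict:
    """Vary the titration around the best known value; temperature sets how much."""
    jitter = random.gauss(0.0, temperature)
    return {**best, "titration_ml": max(0.1, best["titration_ml"] + jitter)}

protocol = {"titration_ml": 5.0, "temp_c": 37.0}
best_result, best_protocol = -1.0, protocol

for step in range(10):                       # ten overnight runs
    candidate = next_protocol(best_protocol, temperature=1.0)
    result = cloud_lab_run(candidate)
    if result > best_result:
        best_result, best_protocol = result, candidate
    print(f"run {step}: titration={candidate['titration_ml']:.1f} ml, yield={result:.2f}")
```

The physical reaction steps remain the bottleneck, as noted above; this only automates the "click yes from the beach" part of the loop.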
And then you'll have these automated labs with lots of different instruments where robots will be able to take a sample out of a centrifuge and put it into the next thing. And you don't need humans to do that any more than you need humans to assemble sophisticated iPhones. I mean, we still have very cheap labor in Foxconn factories in China and Vietnam and elsewhere now doing that. But there's no reason for that
over time. Isn't that low-hanging fruit, though? Just look at all the work that we've done so far and tell us where we're on the right track and where we're sort of going astray. Totally. There's nothing preventing that from happening today. I mean, it's very David Deutschian: if it obeys the laws of physics, it should be possible. There's nothing about this that is so
speculative and fantastical that it doesn't obey the laws of physics. The other project that I was thinking about doing is just creating something like a prior-art.org and having AI read through all the patents and make the next adjacent patent-like improvement, and then just publish it, because then there's prior art. Right. Well, yeah, there will for sure be AI patent trolls, if you put it negatively. Yeah.
This would dissuade that. I mean, in a sense, it would sort of be making prior art as much as you can, 24-7. There are companies that do this, where they have creative patent filers for continuations-in-part so that they can keep the life of this going. But
with the rise of agentic AI, you can have some crazy idea, some brain fart, you know, 9:30, 10 o'clock at night, and you just had, you know, a cocktail with friends. You're like, oh, imagine, you know, if that existed. Well, do the research. Does this exist? And if it doesn't exist, can you, you know, write a patent and sketch a diagram for me and file it, and go and incorporate a company? Now, all those things have to go at the speed of certain processes, but
all of that could be done overnight, where you literally wake up and there are multiple agents working on your behalf that have filed a patent, created a design, incorporated a company, possibly even put it out to some group that has opted in and raised money for it overnight. And I don't know, if it was successful enough and it hit a bunch of criteria, I might allocate some capital
into an account to allow a robot AI to actually receive pitches, respond to them, and create a small portfolio to allocate as an experiment. So you can see this whole thing as just a human idea, or maybe one inspired by interactions with an AI, where by the time you wake up, you have a company started and the basis for people to actually do work. Like that,
in 10 or 20 years, people will look back and be like, how did we not see that coming? And look at all the jobs that are being created, because every single person is now creating and has like six virtual companies. The future is going to be wild. Yeah. Talk to me. Let's go back to Tesla for a second. Why the hate on Tesla? Let me say, I think Elon is amazing at certain things. Elon is arguably the greatest storyteller, fundraiser, and inspiration for companies of
anybody in the past, maybe in all time, truly. I think his relationship with the truth has been questionable. And so in Tesla particularly, I think there was a time where the short sellers started to identify things, not because they just hated the company or hated the future or any of this. And he was able to very shrewdly weaponize the us versus them: they're trying to kill us, right? Most short sellers that I know happen to be very disaffected people and they have a chip on their shoulder. And to me,
the motivating force and the incentive for them is not that they just want to make money, but they want to be right. And they want to be right because they've identified somebody that they think is intentionally doing wrong. It's the same thing as an investigative journalist. It's the same thing as opposition research for a politician. It's the same thing for somebody that is trying to debunk a Sunday-preacher charlatan. It basically is almost intellectually competitive to say,
You are trying to pull the wool over these people's eyes, and I know what you're doing, and I'm going to call you out on it. And so for me with Tesla, I think that they got away with accounting fraud on warranty reserves and a whole bunch of other things. I think there was a lot of prestidigitation and magic of look over here while we're doing this. And today it doesn't matter because they got away with it.
But I think that there was not the same kind of honesty that I would ascribe to Jeff Bezos in how he built Amazon and raised a few hundred million dollars of equity. You could look at stock-based compensation, you could look at debt as capitalization, but created this monster that is profitable, cashflow positive, and never raised another dollar of equity. And Elon raised north of $50 billion, took out $50 billion.
treated it like an ATM, said I'm never selling a share and sold lots of shares. And then whether he had to do it to buy Twitter or whatever, I just, I don't feel it was done as honestly as other entrepreneurs that I greatly admire. Now that said,
SpaceX, I have no issue with. I think SpaceX is an extraordinary company. I think it's an incredibly important American company. I think without it, we would be at a massive disadvantage. I think it is truly a national treasure, run by Gwynne Shotwell, with incredible engineers. We've backed a bunch of these engineers that have come out of it, from Tom Mueller on, that I just think the world of. I've just, yeah, been much more critical about Elon's relationship with the truth as it came to Tesla. And in many ways, I felt like the whole thing was unnecessary.
Do you think those are one-time things, or are they systemic and they crop up every few months or something? In his personality? Yeah. He's past the stratosphere now. Like, you know, he's proximate to power in ways that people can't compete with. If you're an investor in, or you're Sam Altman at, OpenAI, you're not only worried about competition, you're worried about a personal grudge from somebody who has the ear of the president. Yeah.
that can weaponize all kinds of systems of power from the DOJ to the FTC to the FBI. And I would be very nervous being an adversary with that kind of power. Altman came out and said he didn't think Elon would use that power against him. Which is a nice and smart thing and a necessary thing to say publicly. And I think that Elon has even said, like, I won't use that. But
What's the saying? Power corrupts? Yeah, and absolute power corrupts absolutely. But I don't know, like, you're in a position of power at DOGE and OMB and the Office of Personnel Management. And you have influence and you can shut some of these things down. Does Elon love the SEC? You know, he's been pretty vocal about that institution.
Does he love the National Highway Traffic Safety Administration? So if these things are gutted, you know, I think you've got more free rein to shut down criticism. You know, similarly, the best entrepreneurs, when short sellers are, you know, saying something, they don't want to ban short selling. They don't want short sellers to be arrested. They just prove them wrong.
My favorite story about that was Brad Jacobs: a short report came out, the stock dropped precipitously, and he borrowed $2 billion and bought back shares.
Yeah, this is big skin in the game. Right, right. Totally. We're going to double down and go forward. Yeah. I think he turned that two into 10 or eight or something. Like it was just crazy. This episode is brought to you by Indeed. When your computer breaks, you don't wait for it to magically start working again. You fix the problem. So why wait to hire the people your company desperately needs? Use Indeed's sponsored jobs to hire top talent fast. And even better, you only pay for results.
There's no need to wait. Speed up your hiring with a $75 sponsored job credit at Indeed.com slash podcast. Terms and conditions apply. So for me, I don't know, at 13 or so, like after I was bar mitzvahed, I became an atheist. And I would see these preachers exploiting people. It just irked me. And I reflected on this over the years: it wasn't in some virtuous,
holier-than-thou kind of thing, it was intellectually competitive. It's like, I see what you're doing. You're running a con and I want to call it out. And it wasn't rooted in some self-virtue of pursuit of truth. The real thing, when I thought about it, is, no, I want to show that you're cheating people. Short sellers are necessary to a well-functioning market, right? We need to hear both sides of the story and make our own judgments and decisions. At what point do you think computers are going to really make most of the investing decisions?
Well, you could argue today they are, not because they're doing reasoning and analysis and fundamental work, but because the structure of the market is so dominated by passive indexation. And that is effectively an algorithm. And that algorithm says $1 in buy, $1 out sell, and in both cases, indiscriminately. And so-
You just have a flood of money that goes into the market and these indices buy everything and it becomes this massive market-cap-weighted accelerant. And then people say sell and then the money just comes out. For the past, I don't know, 10-plus years, this has really become the case with Fidelity and BlackRock and State Street and others, with the ETFs, which were well-intentioned. You know, you go and listen to Buffett back in the day, it's like, just put it in the market, right? It's hard for active managers to out-compete.
Definitely the case for the past 10 or 15 years. But I do think that we will see a return to active managers that are able to discriminate true fundamentals, in part because I think that the cost of capital is just going to rise and all the funny money of the past 10 years is going to wash out.
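A toy model of the indiscriminate, cap-weighted flow just described: every incoming dollar is split in proportion to market cap, so the biggest names absorb the most buying regardless of fundamentals. Starting caps, the flow size, and the price-impact factor are invented for illustration.

```python
caps = {"MegaCo": 3000.0, "MidCo": 300.0, "SmallCo": 30.0}  # market caps, $bn (assumed)
price_impact = 0.05   # assumed: $1 of net buying moves cap by $0.05

def apply_flow(caps: dict, net_flow_bn: float) -> dict:
    """Split the flow by index weight, indiscriminately, and bump each cap."""
    total = sum(caps.values())
    return {
        name: cap + net_flow_bn * (cap / total) * price_impact
        for name, cap in caps.items()
    }

for month in range(12):            # a year of steady $50bn/month inflows (assumed)
    caps = apply_flow(caps, 50.0)

for name, cap in caps.items():
    print(f"{name}: {cap:,.0f}")
# The percentage gain is identical across names, but the dollar buying
# concentrates at the top, and reversing the sign of the flow makes the
# selling just as indiscriminate on the way out.
```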
Two questions, two rabbit holes I want to go down here. One, at what point do you think active managers and analysts are replaced by AI, in the same way that Zuck is saying an engineer at Meta is going to be replaced by AI? Already, there are AIs that can not only go through Qs and Ks, 10-Qs and 10-Ks, but can listen to quarterly earnings reports and CEOs that are talking at conferences or on podcasts,
and can get an emotional sentiment, can see where they're varying their language in ways that only the subtlest of analysts or portfolio managers in the past could do. And I think the most valuable thing that AI is going to do, when you ask it questions and it comes up with the answers, and assuming those answers are accurate and cross-correlated and double-checked, is actually say, here are the five questions you didn't ask. And so that is going to unleash real insight.
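A crude sketch of the "where did their language change?" check between two quarters of prepared remarks. Real systems would use audited transcripts and embeddings; the snippets here are invented.

```python
import re

q1 = "We are confident in demand and reiterate full year guidance."
q2 = "We are cautiously monitoring demand and will update guidance as visibility improves."

def terms(text: str) -> set[str]:
    return set(re.findall(r"[a-z']+", text.lower()))

t1, t2 = terms(q1), terms(q2)
print("new this quarter :", sorted(t2 - t1))
print("dropped language :", sorted(t1 - t2))
# "Confident" disappearing while "cautiously" appears is the kind of shift
# the subtlest analysts used to catch by ear.
```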
Now, there is still this human aspect of being able to look at somebody and decide, do I trust them or not? And I think that the best analysts are able to say very Buffett-like or Joel Greenblatt-like, is this a good business? And there's ways to measure that, like a fundamentally good business, even if an idiot was running it. And then do I think it's at a good price?
And is therefore my expected return going to be high? And do I trust the people that are running it? Because ultimately, I am allocating capital to them in the same way that somebody allocates capital to us. I like the virtue of our private markets because I am less or still beholden, but far less beholden to a day-to-day market, you know, Mr. Market fluctuation of, you know, manic depressive positivity or pessimism.
We have 10-year locked funds. We're able to make long-term bets. It's arguably this great source of time arbitrage when everybody else is looking at and discounting back a year or 18 months or two years. But you think about the three main sources of edge, and we've talked about this in the past: informational, analytical, and behavioral. Informational advantage used to exist a long time ago. With regulations like Reg FD, the avoidance of insider trading, and rules that tried to equalize the playing field,
in addition to a huge influx of really brilliant people that are able to use cutting edge data tools, having an information advantage is really hard. Having an analytic advantage where AI can play a role in that, let's assume we all have the same information and I don't have any better intel or information. Like for example, just as an aside on the information advantage, there was a hedge fund, which I won't name, which was very cleverly going and actually buying stuff online from Adobe. Every time they did,
they got a piece of legally obtainable information, which is that Adobe's web URL, when you made a purchase for Creative Cloud, would show you an order number, 4,723,000 or whatever. And then, like six hours later, they would go and buy again and see the new number. And so they could infer and extrapolate what the sales were based on this, because when they bought it six hours later or six days later or whatever, they saw where they were in the queue and you can sort of extrapolate. That was legal to do.
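The inference itself is just arithmetic: two sequential order numbers, the gap divided by the elapsed time. The first number comes from the anecdote; the second observation and the interval are hypothetical.

```python
first_order = 4_723_000        # order number from the anecdote
second_order = 4_729_000       # hypothetical number seen on a later purchase
hours_elapsed = 6              # hypothetical interval between the two purchases

orders_per_hour = (second_order - first_order) / hours_elapsed
print(f"~{orders_per_hour:,.0f} Creative Cloud orders per hour implied")
print(f"~{orders_per_hour * 24:,.0f} per day, before any seasonality adjustments")
```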
But once that signal leaks out and somebody's like, oh, that's a clever way to figure that out, then it rapidly erodes. So informational advantage: hard. Analytical advantage, brilliant people combined with brilliant technology: really hard. And then it goes into this last one, which is the behavioral. And that to me is the persistent thing. Now, AIs over time will understand us in many ways better than we understand ourselves.
Google already arguably knows more about you than the closest people in your life based on the things that you search for, search in private for. And AI will as well. And already there's an eerie moment that I appreciate because I've given myself over to the information gods. There's a required energy to try to maintain privacy and I just feel like it's not worth it. We can talk about privacy because most people basically just want to keep private, you know, their sex life, their bathroom time, and how much money they have. Unless...
you are super rich or super poor. Because if you're super rich, you broadcast. You know, you're on the Forbes 100, 400. You're showing the house you just bought, the art you just bought. You're signaling your wealth. And if you're broke, then you're really poor. There's people that are on Twitter like, I'm dead-ass broke. I have no money. You know, and they literally... It's the people that are in the middle that are middle class but want people to think that they're upper class or people that, you know... And so...
Everything else for privacy, I think, is like out the window. But the reason I was saying this is I've given myself over to the information gods. And when I go to ChatGPT, because it has memory and it's constantly updating it, there are times where it remembers things. And I'm like, how did you know that? And I forgot that it was like from a search three months ago where I mentioned something about my kids and a place that I like to vacation. And part of me actually appreciates that that repository is compounding, but it does sort of scare you because I remember the things that we talked about
And then if you were like, oh, like I heard you went to that, I'd be like, well, who'd you hear that from? But now if I ask the AI, who'd you hear that from? It would be like, you, you told me that. You told me. Wait till it gets on your device, right? Exactly. And then, okay, well, let's go back up this rabbit hole a little bit here, back to passive indexation. So the rise of this is really post 2010, right? The mass rise of passive indexing. We've never seen...
Well, we did during COVID, but there was so much money thrown into the system. What do you think the second, third order effects of this will be, especially in terms of volatility or something unforeseen?
Well, you saw this a little bit with just how quickly the market reacted, with the single largest one-day loss, five, six, $700 billion in NVIDIA, just because of the fear over DeepSeek. And the fear was a cascading, you know, traditional information cascade: oh my gosh, what does this mean for the expectations we have about demand for compute and CapEx and the expenditures that people are going to have? Do we need to rethink this mental model? And
And so I think that there'll be things like that that whipsaw and shock people. Do they become reflexive at some point? Like maybe at 500 million or 500 billion, it didn't, but had that hit 700 or 800 or a trillion, like does it then start the auto selling and the-
It's a good question. I don't know. Obviously, systems where there are significant leverage in the system are ones that are most prone to that, sort of Minsky moments where, you know, things are going fine, going fine, and suddenly they just collapse. And usually that's where you have a lot of leverage in the system, and sometimes it's hidden leverage. I don't know other than some of the 2x or 3x levered ETFs that that's the case. Yeah.
in traditional markets. But where does this go from here? I'm not sure, on the passive-active piece, what will break this, other than if you were to have widespread news reports of a handful of active managers that are suddenly beating the market decisively and they're pointing to the structure of the market. And so there's a rebalancing where people start to shift out of these things. On the volatility piece, what's interesting is over the past decade,
five, six, seven years, maybe five especially,
A lot of LPs, allocators have gone into private credit, have gone into private equity, in part because they are mechanisms of muting volatility and the vicissitudes of the market because you don't have daily mark to market. And so there's been a little bit of this perverse incentive. But as Buffett says, you know, what the wise do in the beginning, the fool does in the end, and then these things get overdone. And so I'm actually worried about some of those asset classes.
Where private credit in particular, I think, was wise to do a few years ago and now is overdone. You have another phenomenon, which is every major sophisticated large private equity firm, Apollo, KKR, Carlyle, et cetera, are all starting to think about, or are actively thinking about, both permanent capital in the form of insurance vehicles, like Apollo has, and
accessing retail in a huge way, where many people see retail being the next wave of this. And when you say retail, you actually just mean normal day-to-day consumers. Individual investors that might have been on Robinhood and could never heretofore access Apollo or Carlyle. But in aggregate, you're talking about trillions of dollars of investor money. And so I think that they're going to be tapped. They're going to be put into these vehicles that will present new, interesting
financial vehicles, because you're gonna have to find ways to give people liquidity for these things. And so I've heard about some interesting things, actually, from a friend, Mike Green, who I think is a really smart practitioner and student of markets. He was one of the earliest to this passive-active piece. He was one of the earliest to understanding the mechanisms behind the scenes of the SPAC movement. He's early now to this idea
of Uniswap, which has a certain mechanism that provides liquidity by having, let's say like 80 or 90% in treasuries and 10% in some underlying. And you're able to swap out some illiquid thing for effectively some liquid pool up to some point. There's some repricing.
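A toy model of the structure being described: a mostly liquid sleeve of treasuries sitting alongside a slice of illiquid assets, with redemptions served from the liquid sleeve until it runs low, at which point they get gated or the illiquid side has to reprice. The parameters are invented; this is not a description of any real product or of Uniswap's actual mechanics.

```python
class LiquidityWrapper:
    def __init__(self, treasuries: float, illiquid: float, min_buffer: float):
        self.treasuries = treasuries      # liquid sleeve ($)
        self.illiquid = illiquid          # illiquid holdings ($, stated value)
        self.min_buffer = min_buffer      # below this, redemptions get gated

    def redeem(self, amount: float) -> str:
        if self.treasuries - amount >= self.min_buffer:
            self.treasuries -= amount
            return f"paid {amount:,.0f} from the liquid sleeve"
        # Sleeve exhausted: gate, queue, or reprice the illiquid side.
        return "redemption gated: illiquid assets must reprice or queue"

fund = LiquidityWrapper(treasuries=90.0, illiquid=10.0, min_buffer=20.0)
print(fund.redeem(30.0))   # fine: plenty of buffer left
print(fund.redeem(45.0))   # gated: this would breach the buffer
```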
And he's been thinking about something like what Apollo is doing with State Street, and that this portends a movement into these almost artificial, like if I had talked about ETFs 20 years ago, people would have been like, don't we have mutual funds already? And they're like, no, but they're going to go super low fee and you'll be able to trade them on a daily basis.
There's something here to watch about the flood of retail money that will go into illiquid alts, private equity in particular, and new vehicles that are formed to be able to provide liquidity because of that. And I think that that's both going to be really interesting and potentially, to your point, creates something that sets up some massive blowup. The other thing that I want to come back to in the rabbit hole here is you mentioned persistent advantage is...
- Yes. - Talk about that in the context of humans and how do we create an unfair advantage in a world of AI for humans? Like what are the ways that we can, like your kids, like how are you teaching them to navigate this in a way that gives them- - An advantage. - An advantage. A behavioral advantage. - So, you know, you go back some years and it was like, okay, we can still beat computers at chess. We can still beat them in Go, done. We can beat them in video games, done. Okay, but we still have creativity, done.
Now, of course, human creativity is not dead, but every day I am doing something creative with AIs that I couldn't do on my own. I cannot paint. I cannot draw. I cannot conjure. I like taking photographs and I like the composition of that, but I can engineer prompts, which is itself an act of creativity.
and get the most inspiring muses and results. And I can take works of art that I like and put them in and ask it to describe them, and do it six times, particularly in, like, Midjourney, and recreate from the prompt some alternative of it. There was even an artwork that I loved, which was this
mishmash of superheroes that looked like it was put through a blender. And I was going to buy the artist's work, and then I couldn't describe it to Lauren, my wife. And so I put it into Midjourney and described it, and then I just clicked a button and I made four versions of it, I guess 16 versions, because each one was four, and it was insane. And I was like, why am I going to buy this? Because I just recreated it. And I felt morally bad.
Because I wasn't copying, but I had a perfect description of the style. And so I thought that that was pretty wild. So these are tools that I think kids should be using. They should be learning. It's just like a language. And I think they need to be versed with it in part to understand the domains to avoid.
Because they're going to be not devoid of emotional or aesthetic or moral value, but you're not going to make money. And so my wife and I debate this all the time about dance. Dance is amazing. We started dating and she took me to this dance thing. I was really not into dance at all. It was this guy, David Parsons, of Parsons Dance, at the Joyce. And he has this performance called Caught.
And me being into science and technology, I'm watching a bunch of people dance. I'm not really into it. And all of a sudden, it's a weird ambient sound, very electronic, very Tron-like. And there's a guy in white pants, and that's it. And then a strobe light goes on. Anyway, and the strobe starts flashing. And all you see as a viewer is this person floating in the air.
But behind the scenes, what they're really doing is jumping, jumping, jumping, jumping, perfectly timed to the choreography of the strobe. So you see them caught, but they're doing this crazy kinetic athletic thing. Right. And I was just like, that's super cool. Right. Everybody should see this. Right. It's just like inspiring. Now, that said, we had this debate afterwards because she's like, you know, these dancers make no money.
And I'm like, well, market forces would say that there's too many dancers or there's not enough demand. And they collect, like, unemployment insurance for half the year because they have to work other jobs, and they don't have this, and it's just wild. And so we got into a debate about, you know, what is the societal value of this? Now, I found it valuable and I would go and I would pay money and be a patron. But I wouldn't want my kids to go and pursue that unless it was that they were solving for just the aesthetic, the
passion that they had. But I think that there's very few crevices where AI will not creep in and be able to do the thing nearly as well, including...
You think about the compression that we all enjoy of a Game of Thrones episode, the compression of all of that talent, set design, special effects, screenwriting, a lifetime of acting and performing, you know, just swished down. One of our companies, Runway ML, and there's others, lets you conjure that. Today it's 10 seconds, but tomorrow it'll be two minutes, and full feature films with no key grips, no lighting, no costume design, no set design, no actors.
And voice is entirely generated by AI. And so does that strip this art of its soul, kind of thing? I don't think so. I think it just creates a new form of art, just like Pixar. You know, for all the people that were doing Disney Mickey animations by hand, then suddenly we get these 3D-rendered graphics and incredible storytelling. That will be timeless, but the tools that we use will very rapidly replace these things.
So I think our kids should be embracing and using all of these tools. The only tool that I restrict them from is TikTok, and that's for a variety of reasons: bad influences and the Chinese Communist Party. But otherwise, I want them learning how to use every tool as they would every appliance in a kitchen. Outside of those tools, where do you think the advantage comes from? Does, like, a networking advantage and who you know become more pronounced? I think it's this.
I think it's human to human. I think that if you always can frame things as, like, what's abundant and what's scarce: in a world where there's going to be an abundance of access to information, an abundance of
creative construction of things, art and literature and movies. And just by the way, as an aside, I also will take boring PTA messages from our school and I will put them into large language models and send them to my wife, where I'm like, do this in the style of Matt Levine, the Bloomberg daily writer, or do this in the style of Shane Parrish, or do this in the style of Al Swearengen from Deadwood. And it's just absolutely entertaining and brilliant. It takes something that's so
boring and cliched and hackneyed and just brings it to life, right? I do the same thing. So it's a lot of fun, right? It actually makes this stuff interesting. But the advantage is going to come in: what do I do with that when I output it? You know, I enjoy it for a second, but then I share it. And so all of these things, all the value, all the market capitalization of all social media, is about sharing. And I still think that we're going to produce, we're going to share. And
what becomes scarce is this: it's, like, human connection, because we are still human and we still want that. We want to be hugged and we want intimacy and we want to laugh with each other. I mean, I'll share just a quick aside on this. Two or three months before Danny Kahneman died last year, Lauren and I went over with a filmmaker and another woman, and we had dinner with Danny and his partner, Barbara Tversky, who was Amos's wife.
And we were talking about aging and getting older and memories. And Danny had this great point: that the pleasure of pleasurable things got less pleasurable, but by less than the pain of painful things got less painful.
So for him, the loss of a friend, you became a little bit anesthetized to it. The first time you lose a friend, it's tragic. But when all your friends are dropping dead, it's like, this is just happening. The first person you know gets divorced. And all these things over time, the half-life of the pain just decreases dramatically.
But you're still losing the pleasurable things. So food didn't taste as good and wine didn't taste as good and music didn't sound as good and sleep wasn't as good and sex wasn't as good, and all these pleasurable things. But he thought that the pain got less painful faster than the pleasure got less pleasurable. And so I thought that was an interesting insight. Barbara had a different view, and she's still alive, and I'm taking some license to share her view. But she said, no, it's still painful.
And the main reason, she said, that the pain stays painful is these memories: I have this memory of Amos, or I shared that moment with this person. And the great human feeling is commiserating about the thing that we experienced together, or the memory, and laughing hysterically, which I still do with my childhood friends, about this shared moment. And AI will never get that. Another person won't get the inside joke. And so
To your question about the advantage, the advantage is in that human connection because we are still human and we want that, we pine for it. And so I thought that was a really profound counter to Danny's view, which is that when you lose these people, you lose the partner to amplify that emotion, a good one or a bad one. And I think that being able to have like uniquely human experiences, understand each other, support each other, that that is still going to be an advantage.
A lot of the things you said, the common thread with them in my mind, is that we feel part of something larger than just ourselves. Yes. Like, relate this to working remotely and how this interacts with other things, right? Where we might not feel part of something larger than ourselves. And remote work is a great example where you sort of...
The ability is there, whether you do part-time or full-time, to shut off your laptop and the world looks like you. You're not forced to interact with people with different political views, different socioeconomic status. You surround yourself so you don't feel a part of something bigger. And that changes how you vote. It changes a whole bunch of things in your life, I would imagine. I'm speculating here, but I'm sort of like thinking out loud. No, I think a lot of your values, a lot of our values change.
And again, I'm going to invoke Danny here. In a conversation before he passed away, his point was that you may think you think the things you think because you analyzed and you reasoned or whatever. But no, the reason you think the thing you think is because of the five or six most important people around you. And
They sort of believe something and you believe them. And so, again, an information contagion. And you will tell yourself that you believe it because you really thought deeply about it and you reasoned through this. But, no, the reality is you believe things just because there's this social phenomenon. So working from home versus working in person, there are so many. Everybody is here Monday through Friday now.
You know, at first it was Monday through Thursday, take Friday. Now I'm like, look, if you need to leave, you go. Family first. It's a principle here. Never miss a concert, a recital, a science fair. But we need to be together. Why? Because there's so many interstitial moments. There's a chance.
serendipity moment because I come out of a meeting and I'm able to introduce you to Grace or Brandon is meeting with somebody and every day, hey, do you have a minute? You know, knock, knock, and then somebody's making an introduction and you just never know what it unlocks. That never happens on Zoom or on calls. It just doesn't. The structure of that doesn't allow for that serendipity and those sort of human connections. The ability to really feel when somebody swallows when you ask them a question and they're feeling nervous or like, hey, is something going on? You know, and
And that's because everybody's fighting some epic battle. You know, they've got relationship issues, they've got parent issues, they've got a sick person. And we are just, we're still human. So I feel deeply that we should all be connected in person. And I do think that that's an advantage in a world of abundant AI and sort of cold, sterile, even if it has the simulacrum of... And again, a lot of people will use AIs in a beneficial way to
share things that they might not even be comfortable sharing with a person, and have a consultant or therapist to tell them. But I'll give another example, which is another colleague here moved from one city to another, and he happens to be religiously observant, but is an atheist. Okay. But he moved to this new city and he found a tribe
where he's like, I immediately plugged in. Like, we didn't know anybody. We didn't have any friends. But by being part of this religion, we instantly had friends and peers. And I was like, not that it's cynical or selfish or, you know, we could put a valence of meaning behind it. But really, at the end of it, it's like,
Will you help me? Is there reciprocity? And that's this like ancient sense of whether it's transactional or not, it's overt or not, you're signaling the depth of the sacrifices that you would make for the group. This idea that something is bigger than yourself
Belonging is arguably, like, the most pleasurable thing. Having friendships, you know... You do all these, look at all these studies of people that age and what was meaningful to them in leading a good life. And being ostracized or feeling rejected or left out is, like, the most painful thing. So I think that that's a timeless human truth. And I certainly feel like, and I encourage this with my kids, by the way, I don't want them just to have a group of friends in school.
Because just like the diversity that you need in a portfolio, you need to have hedges and all these other things because maybe something is not going right in that friend group. And then you are more at risk of catastrophizing that, oh my God, like nobody likes me. And, you know, and so-
If you're in a soccer team and you go to a religious group or you're in Hebrew school or you're on a dance team and you have a neighborhood group of friends and my kids are in like six different things outside of school each and they then can bring these people together, which itself adds this feeling of like, oh, I connected people and I'm this node and it sort of cements the network in a way that is profound and meaningful and comforting. Yeah.
I like that. I want to come to aging a little bit here. I don't want to do it. I don't want to age. Talk to me about this, because I feel like, based on the research happening now, the amount of money going into this, the progress that we seem to be making, at least on biomarkers that we understand, we seem to be able to, at best, dramatically slow our aging process.
You think we're going to make a quantum leap in terms of average lifespan for humans, maybe adding 10 or 15 or 20 good years in the next 10 years? I think that's possible, 10 or 15 or 20, not doubling. I mean, we did that in a few generations. People would die at 40 years old or 50 years old, and people now regularly live until their 70s or 80s. This is an interesting thing, because we don't fund longevity work here, and I'm personally not
invested in any of these things. I will go to my doctor, and
I will get my blood tests and he will suggest that I take some supplements because I'm low in iron or this or that. And that's entirely reasonable, just to maintain sort of homeostatic function. But I don't go absolutely crazy and intense. But I appreciate the people, Bryan Johnson and others, that are self-hacking and doing this and pursuing it. And Ray Kurzweil was doing it back in the day. Nobody has really seen Ray out that much. You know, he was taking, like, I don't know, a hundred supplements a day or something like that. And
Last I saw, I think he had a toupee, and it was, like, a messy situation. But I'm glad that these people are doing it, both in pursuit of staving off their own mortality and as a public service, either interpreted as, I can't believe you go to these extremes and I'm not going to do that because it's super stressful, or, maybe you're going to unearth something and we're all going to be on metformin and all these, you know. But there is this timeless pursuit of avoiding death.
It's very human. It is. And it goes back, like the first form
of avoiding death was, "I'm not gonna die." So the search for the fountain of youth and Ponce de Leon; today, modern pharmaceuticals and drugs and supplements and lifestyle changes. The second form was, "Fine, I'm gonna die, but I'm gonna come back." And so reincarnation, and maybe that was the spiritual or religious sense. And then today's version of that would be Alcor or any of these cryo things: I'm gonna freeze my brain so that when they figure this out, they'll bring me back.
The third was, okay, fine, I'm going to die, but I am more than my physical self. There's an ethereal soul. The modern technological version would be the ghost in the machine, endless sci-fi movies about people uploading themselves and their likeness, which, by the way, are sort of a really interesting phenomenon of how we deal with loss and the ability, if there is an AI that is totally trained on my voice and my likeness and everything I've ever said, which I have done,
that my kids would actually have a dad AI and maybe they can consult it for questions or, and is that a good thing or a bad thing? I don't know, but it is going to be a thing. Then you have, okay, I'm going to die, but I'm going to live on through my progeny, through my children, through my genes, which is the evolutionary impetus. And I'm going to live on through my works. And you won't be there to experience either of those things, unlike the first three where you're not going to die or you're going to come back. And so I think about the people that
Whether it's where I grew up in Coney Island, Brooklyn, they put graffiti on the wall and they put themselves up until it gets washed away. Or if you're Dave Rubenstein or Steve Schwarzman, you put your graffiti etched in stone on the New York Public Library. But it's really no different, just $100 million instead of free and the potential for being jailed. And then it's through your children. And I think there it's very Buffett-like, the moral
sort of mandate would be, like he described Don Keough of Coke, that when you do die, you want them to say what they said about him, which was: everybody loved him. I don't have that. Like, not everybody loves me, but my kids, you know, that's most important. I think the theory was, the people that you want to love you actually love you. Which is interesting, because with the people that we celebrate the most, if you think about Steve Jobs
and even Elon, like, do the people that are closest to them love them? I don't mean the millions of fans that don't actually know him. But I used to have this debate with one of my best friends about Steve Jobs. Like, the world loves Steve Jobs. Yeah. But, like... People in his orbit didn't always love Steve Jobs. He's, like, terrible, you know? Like, he's so mean, or, like... So I think that's really interesting. But going back, the common thing amongst all the people that have tried to defeat death...
From the people that weren't going to die, through the fountain of youth or modern biotech, the people that were going to upload themselves, the people that were going to come back, the people that leave it to their kids or to their works, the common thing amongst all of them is that they're dead. Nobody has beaten death. And so the mental model that I like on this is: okay, take a piece of paper and put the day you were born on the front and the day you're going to die, roughly plus 80 years, on the back. And the only thing that you may have control over, in part, is the story that you write between these two pages.
And my brother-in-law passed when he was in his early 40s, stomach cancer. You know, he lived a tragically short term, relative to others. But maybe you get to live this epic term, and it's like, how do you write it and who do you spend your time with? And nobody's going to look back and say, you know, I wish I would have taken that extra meeting or done that extra business trip or something. It's like, I'm really glad that I was there for my kids or my spouse. And yet you...
work really hard. Yeah. And I always prioritize my kids. Like, I think about their judgment all the time. You know, some people are like, meet their maker, or the, you know... Because I don't believe, I care deeply that they'd say, my dad was there for everything. And in part for me, because my father was not present in my life, my parents split when I was super young, and
It's my little guy. I have two daughters and a son, but my son, who's nine, always wants me to have a playdate with my dad, you know. And we're civil, and we speak a few times a year, but I'm like, it's just not that relationship. I'm not gonna have that, you know. And he's like, yeah, but I really want you guys to. And I'm like, no, I get to be the dad that I am to you because I didn't really have that, and I'm making up for it now, and this is what it is. And then I worry that
If they see such a present father, do they take that for granted? Dude. And then do they screw it up in the next generation? You know, and so I have the same thoughts. I actually talked to a therapist about this, because I was like, am I too present in their lives? Like, they need some space. You know, I'm home when they get home after school. And my wife and I talk about this, like...
My parents split. My father was married four times. All I wanted was a stable nuclear family. But if my kids grow up in a stable nuclear family, do they take it for granted? And does like one of them become a cheater and infidelitous and all this kind of stuff because they, and I have no idea. But I know for me what is meaningful and what makes me feel good and is a totally selfish thing that,
is about solving for what I want, but selflessly, I think it ends up being virtuous. So what would your top like three or four priorities be then if you were to outline them? My kids and my wife call it family, number one. I mean, you know, you think about like the people that lost everything physically and materialistically in the fires recently in LA, like family is like, you know, and I don't care over time as they grow up and where they are and they're, you know, but that to me is like the most important thing.
Number two is purpose and meaning. I think this is a universal thing, but I feel lucky that I enjoy what I do. It's an intellectual puzzle. There's times of like great fierce competition. We're losing to, you know, I don't like to lose. We're losing out to another firm that's got an entrepreneur. I like the intellectual gratification of being right when other people are wrong. I'm very intellectually competitive that way and discovering something that people haven't discovered.
I always talk about Linus Pauling, the double Nobel laureate, who won it for both chemistry and peace. And he has this quote about science, which I just absolutely love. And this is, like, I hope until I'm 90 or 100 or 110 or whatever modern science lets me live till that I continue to have this, because it's an addictive feeling: that I know something that nobody else knows, and they won't know until I tell them. And I love that, discovering a legal secret
and knowing that this is gonna be announced, that the scientific breakthrough is coming and nobody else knows about it yet. So that to me is, like, meaning and purpose. It's intellectually competitive. And I understand the intellectual competitiveness.
I want to be right, I want to make money, I want the credit for it, but it really is about like the status. Because otherwise you would just do these things in private and totally in quiet. But I like that feeling of that, even if it's vainglorious and ego and all, you know, vanity. So family, that sense of purpose driven by this intellectual competitiveness. And then I think I didn't appreciate this as much, but it's sort of adjacent to the first.
There's a handful of people who I imagine myself, like, retiring with, or, like, guy friends and people that I enjoy spending time with that have the same sort of values. And they're very family driven. And so my cousin in particular, this amazing guy, Jason Redless; one of my wife's best friends, this woman, Molly Carmel. They're both what we call framily. They're like friends and family. But, yeah, it's a powerful thing that I don't want to ever lose. That's awesome. Yeah.
You process a ton of information. What's that workflow like for information to get to you? How are you using technology to filter? How are you filtering information? So I typically go to bed between 12 and 12:30, wake up around seven. Kids before they leave for school, about 40 minutes. I do a lot of physical activity, usually three days a week, between working out, trainer, jujitsu, all kinds of interesting stuff. But
probably about an hour to an hour and a half in the mornings of reading through something like 40 different papers now. It used to be like seven, but then when I started to travel internationally, if I went to the Mideast, I would find an English version of some of the key papers, same thing in Japan and elsewhere. And so now I read a lot of international papers. And when I say read, I use an app called PressReader.
It has the digital replica of the print edition, which I really value. And I know we've talked about this in the past, but I like to know what the editor put on C22 that's not as visible on the website, because there's meta-information there: the editor is saying, this is not important enough to be on the front page. But if I disagree, and I think that there's a magnitude of informational importance, that to me is, like, some sort of edge. Then I will take screenshots of those. And so I will sort of
call it scout and scour through all these papers, take screenshots. In some cases, I may even take those screenshots and put them into an AI and basically say, give me a summary of this article or give me the three key quotes that really matter. And so I'll go down all kinds of like rabbit holes with that.
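As a rough sketch of that screenshot-to-summary step: something like the snippet below, assuming a vision-capable model behind an OpenAI-style chat completions endpoint with image input. The model name, prompt, file name, and environment variable are placeholder assumptions, not the specific tools being described here.

```python
# Hypothetical sketch: send a newspaper-clipping screenshot to a multimodal model.
import base64
import os
import requests

def summarize_clipping(image_path: str) -> str:
    """Base64-encode a screenshot and ask the model for a summary plus key quotes."""
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode()
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o",  # placeholder for any vision-capable model
            "messages": [{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Give me a summary of this article and the three key quotes that really matter."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{encoded}"}},
                ],
            }],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Example usage: print(summarize_clipping("ft_c22_clipping.png"))
```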
So that's the first, which is just, call it, 24 hours' worth of information that has basically been put through an editorial decree. You can usually get the FT, in New York time, at like 4 or 5, 6 p.m. Eastern. And so you have a little bit of an information edge, because most people don't get the FT for another 12 or 14 hours; they don't know that they could get it online. But that to me is valuable. And I care about all those things, including not just the sophisticated newspapers,
but also the less sophisticated ones, like USA Today. I want to know what the average person is going to read when they wake up in a Marriott and get the paper delivered under their door, that kind of stuff. Then Twitter. You know, I have all kinds of lists that I follow, and at any given time it might be something that is geopolitical and war-related where I'm going down a deep hole, or sometimes it's AI and technology. Sometimes it's my team and what they're posting and reading about. But a lot on Twitter, which I find
truly invaluable. I mean, I know a lot of people say it's like this dark cesspool of whatever, but you can just filter through and cut out the people. I'm muting and blocking people all the time. And I'm discovering all kinds of just absolutely incredible people. It has been, as you know, from one of our first conversations, like this idea of randomness and optionality, it is this huge randomness generator, this huge optionality generator.
And the accessibility and I just, I absolutely love it. So that's another thing where I'm really rooting for Elon in the continued success of X because I'll continue to pay a lot of money for it. And I pay more money for it, but I find it super valuable. And it's real time pulse. And I'm excited for Grok to continue because I think that Grok and X just continue to sort of, I mean, that's a repository. We talked about that before. Repository of information.
One of the first things Elon did was cut off access to Google from the data. And I think that's the right move, right? This is our platform, same way as Meta has. So I think that that repository of everybody's tweets and retweets and likes and the comments that they've made, you can already go on and do this in sort of a relatively superficial way where you can say roast me.
And it will basically off your past, I don't know, like 25 or 30 tweets sort of roast you based on what you've tweeted about. But their longitudinal access of people that have, I don't know, 10,000, 100,000 tweets is an amazing pastiche of like, you know, what, there's an interesting thing here, which we were just riffing on internally, which I'll come back to. Just remind me on sort of wrapping yourself in this information mosaic and breaking free from it.
But papers in the morning, Twitter, internal Slack, emails, texts, you know, just like processing all this information. I use Rewind on my Mac.
which is effectively doing nonstop screen capture. And there will be other tools like this in part because I do not remember the source when I saw information. It's sort of the same thing of like, if you see a show and you were to ask me today, like where did you, I don't know, it was on Apple TV. Like, was it on Paramount? Was it on CBS? Was it on Netflix? Like, I have no idea, right? And in fact, usually when I do the Apple search and it doesn't show up, it means it's on Netflix because it can't search Netflix, right? Yeah, just huge information omnivore, everything. And then-
There's some, you know, random writers that I follow, like Kathryn Schulz at The New Yorker, Adam Gopnik, and people whose style of writing and selection of subjects I find really interesting. Then I'll go deep into some of their themes. So you use Rewind, PressReader. What are the other, like, technological tools that you're finding super...? Every AI. I might take an essay,
read it, ask it to summarize the key points, ask it to put it in different voices, take two different essays and say, where do these things agree or disagree? And so, yeah, like just nonstop. I'm on AI easily more than Google now, but I don't know, two, three hours a day. What have you learned about prompting that would help everybody get better results? Yeah.
Usually very specific. Like, I give it a priming thing. Say it's a neuroscience paper: you are the world's greatest expert in neuroscience. You have read every paper that has been published. You have a skeptical eye toward new claims, but you are also open-minded to interesting correlations that might not have been considered. Read this paper and give me the three most provocative, non-obvious points,
and give me the three most cliched, you know. And so, just... And by the way, I will put it into three different models at the same time. So I will open three different browsers, you know, arrange them, and put it into ChatGPT, put it into Claude, put it into one of the Perplexity models that's not running on those two. And, you know, sometimes I'll mix and match them. So yeah, it's sort of like a palette of, you know, mixing.
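A minimal sketch of that same-prompt, several-models habit, assuming providers that expose an OpenAI-compatible chat completions endpoint; the base URLs, model names, and environment-variable names below are placeholder assumptions, not a record of the actual setup described.

```python
# Hypothetical sketch: fan the same primed prompt out to several models and compare.
import os
import requests

PRIMER = (
    "You are the world's greatest expert in neuroscience. You have read every paper "
    "that has been published. You have a skeptical eye toward new claims, but you are "
    "open-minded to interesting correlations that might not have been considered."
)
TASK = (
    "Read the paper below and give me the three most provocative, non-obvious points, "
    "and the three most cliched ones.\n\n{paper_text}"
)

# Placeholder registry: provider name -> (base URL, model id).
MODELS = {
    "provider_a": ("https://api.openai.com/v1", "gpt-4o"),
    "provider_b": ("https://api.example.com/v1", "some-other-model"),
}

def ask_all(paper_text: str) -> dict[str, str]:
    """Send the identical system prompt and task to every configured model."""
    answers = {}
    for name, (base_url, model) in MODELS.items():
        resp = requests.post(
            f"{base_url}/chat/completions",
            headers={"Authorization": f"Bearer {os.environ[name.upper() + '_API_KEY']}"},
            json={
                "model": model,
                "messages": [
                    {"role": "system", "content": PRIMER},
                    {"role": "user", "content": TASK.format(paper_text=paper_text)},
                ],
            },
            timeout=120,
        )
        resp.raise_for_status()
        answers[name] = resp.json()["choices"][0]["message"]["content"]
    return answers
```

Reading the answers side by side is the point: where the models agree is usually the consensus take, and where they diverge is often the more interesting thread to pull on.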
We have not yet done this as a partnership, but we've talked about it: having an AI partner. There's still a behavioral discomfort about recording conversations. You and I are recording our conversation now. But every partnership discussion we have, if we were confident that it was protected and encrypted... Because we might say things that could be harmful. You don't want them coming out. It could insult somebody, or we have a piece of intel that we don't want out. But if we were comfortable that it was perfectly private,
which is a hard thing to promise, but if it was, you would have a repository of every conversation we've ever had over the past X number of years, the decisions that we wrestled with. You would be able to have somebody to advise us, an AI to advise us, where are we showing biases and inconsistency between a decision we made three months ago and this? What is different this time? Which voices are not speaking up? And you can already get this in some cases with like certain Zoom calls or other recording things where it'll tell you who spoke for how long.
And then you could run, like, a Bayesian analysis of, okay, given that we're looking at these two companies, give me the outside view, the base rate of success historically, which in venture honestly doesn't matter. And then give me a Kelly criterion of how you might size this based on the projected internal confidence. And so there's all kinds of things that we could internally do to use these tools, which I think over time we'll probably experiment with.
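For reference, the Kelly criterion mentioned here has a simple closed form for a binary bet: f* = (b*p - q) / b, where p is the probability of winning, q = 1 - p, and b is the net odds (payout per dollar risked). A toy illustration with invented, venture-flavored numbers, not anything Lux actually runs:

```python
# Toy Kelly sizing, illustrative numbers only.
def kelly_fraction(p_win: float, net_odds: float) -> float:
    """Classic Kelly fraction f* = (b*p - q) / b, floored at zero (no negative bets)."""
    q = 1.0 - p_win
    f = (net_odds * p_win - q) / net_odds
    return max(0.0, f)

if __name__ == "__main__":
    # Hypothetical deal: 20% chance of a 15x net payoff, 80% chance of a wipeout.
    f = kelly_fraction(p_win=0.20, net_odds=15.0)
    print(f"Full Kelly stake: {f:.1%} of the fund")        # ~14.7%
    print(f"Half Kelly, as is common in practice: {f/2:.1%}")
```

In practice anyone using this would size well below full Kelly, since both the win probability and the payoff are guesses.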
But the biggest thing is basically having a capture of everything, you know: everything that you see, everything that you hear, everything that my screen sees, which I've already given over to the privacy gods. And so I trust that that's siloed on my device. It's not going to the cloud, but it's super helpful when I'm trying to search for something. I'm like, was that a Gmail? Was that a text? Was that a thing on Twitter? Was that a PDF I read? Where did I see that? And the ability to DVR my life
is super valuable. If I could do that with my conversations, like who said that the other day? In fact, Lauren and I just had somebody over, you know, we host people at our house and we couldn't remember who told us this thing. And we were like, I had to go through my calendar to see who was over on Thursday or Friday. Oh yeah. Okay. It was, you know, but being able to search your life instantly, I think it's going to be a generational change in the same way that people were not comfortable, you know, posting on Facebook and then they were comfortable. And then like now people are like posting themselves in
swimsuits and bikinis, and it just doesn't matter. That to me is going to be a big step change. I want to come back to the info mosaic, but one thing, we never talked about YouTube being, like, such a huge data source. Incredible. Closed and slightly open, I guess. Yeah. In some ways. Yeah. Like, well, I love that moment when, I think it was Mira Murati from OpenAI, was asked, like, you know, so did you train on it? And she's like...
But she didn't want to answer, you know. Crickets. Yeah. Okay. The info mosaic and breaking free from it. Sort of, like, one thing I do love about X is that it shows you views that are contrary to your own. Like, the algorithm has gotten pretty good at it.
And there is, what is it, Ground News, where you can sort of do this, where it will actually give you a bias rating on certain things and it'll give you both sides of the view. So if you truly are objective and, like, truly knowledge seeking, then you would want to experience that. And I feel like that will be an option that you just click and enable as a feature, and it's able to identify some of the biases and whatnot. This idea of the information mosaic was a recent conversation I was having with my colleague Danny Crichton, who runs, like, our risk gaming stuff, where we're coming up with all kinds of crazy scenarios and imagining these low-probability, high-magnitude events.
And the idea was that over time, this perfect simulacrum of Shane or of Josh is going to exist. Everything that I've ever said on every podcast, everything I've ever written publicly, forget about my private thoughts, but just everything that I'm out there publicly, my voice, my token. And so I almost imagined it like this matrix, like mosaic, like a Spider-Man costume. That's like form fitting. It's me or a close approximation of me. But what if you want to break free from that?
In a sense, if I said, give me something in the style of Shane Parrish, it might conjure something in the style of Shane Parrish; or in the style of Josh Wolfe, or in the style of David Milch or Christopher Hitchens. You know, I actually love invoking dead voices, you know, to sort of bring them back from the dead, right? And have them opine on the topic. What would Christopher Hitchens say about this article, blah, blah, blah.
What if I wanted to break free stylistically? If I said, give me an image of a horse in Tribeca in the style of Wes Anderson, you know, I can imagine the pastel palettes that it would conjure, and you can imagine that too, with the, you know, rectilinear framing and whatever. But what if Wes Anderson suddenly had, like, a new
stylistic change in his oeuvre and wanted to just shift? Like, he'd be constrained, you know, in the same way that people hate when, you know, I don't know, maybe when Dylan went electric, or, you know, somebody else changes their style or their genre. And so there's this aspect where AI constrains you.
And just sort of playing with this idea of, you know, how do you break free in the same way that there might be like the right to be forgotten, that maybe you want to change your style. The great virtue of college for most people is this quartet of years where you can break free from who you were for the past four years.
And nobody knows who you were and what you cared about. And maybe you were into heavy metal, but you were in, like, the band, you know, and you couldn't break free. Or maybe you were gay and nobody knew, or all these things, and you can just suddenly, like, be yourself and explore new things. And there's this element where the great virtue of college is self-exploration against the constraints of high school.
Could AI be this constraining force? Because the more content that you put into it, the more it knows you, the more you may have trouble varying from it. And so there's something interesting there. I like that a lot. Let's talk military and technology. And you guys are big investors in Anduril. Where's that going in the future? Well, there's going to be a lot more brilliant minds, I think, that feel comfortable investing,
motivated not only by a sense of purpose and patriotism, but also principle and making capital. They see the things that they doubted early on, like, why is this time different? Another defense company, of which there weren't very many. But seeing Anduril's ascendancy and valuation and success and program wins, I think, has inspired a lot of people, like, wait, there's something going on here. We went from 50 primes down to five. You're seeing the rise of these neo-primes.
I deeply believe that Anduril in the next few years will be a $30 to $50 billion publicly traded business doing single-digit to mid-single-digit billions of revenue, with software-like margins that are not like these cost-plus margins. So that is going to usher in a big wave, and they're buying companies, acquiring smaller businesses, but you'll continue to see that sort of evolution in a world that people realize is not kumbaya peace and safety. There are bad actors that, when
we take a step back, or are on our back foot, or are a little bit permissive, they arm up. It happened with Iran. And I think the prior administrations, from Obama and Biden, were well-intentioned in trying to bring them into the Western world. But, you know, it was a sort of ruse from an Iran standpoint. Same thing with Gaza and Israel, and Russia: who thought that we were going to see a land war, you know, in the 21st century, where Russia would invade Ukraine? And
then China and Taiwan, and North Korea, and the African continent, as we talked about, in the Sahel and Maghreb, infiltration of a lot of these groups into South America. I mean, there's just lots of conflict waiting. And the best way to avoid conflict is to have deterrence. And if Ukraine had nuclear weapons, Putin wouldn't have invaded. Most of the West and NATO, you know, really said, don't worry, we've got your back, even though you're not part of NATO, and they never nuclearized. I think the world,
you know, through all of human history, is going to face enormous conflict, resource wars, water maybe next. I think there's something like 1,900 active conflicts around the world around water rights. You look at China and Pakistan, control of the water. I mean, there's just like a lot of resources. You look at...
Disrupting undersea cables, you know, sabotage efforts. You look at deep sea mining. You look at space as another frontier. There's just a lot of opportunities for zero sum conflict. And when you can't reconcile those conflicts through diplomacy or negotiations or agreement, it goes to violence.
And the people that can bring or effect or export violence typically have the upper hand. And part of what has made this country great and made it powerful, made it the economic juggernaut, allowing for the low-entropy system, even though the country at times seems chaotic, that allows for the high-entropy production of entrepreneurial ideas and free-market capitalism and booms and busts,
is that we have the most powerful military on the planet. You could argue that that didn't just benefit the United States. It benefited Canada, Europe. It benefited a lot of- Mexico, our allies, for sure. You can watch many fictional movies that have run these counterfactuals of what would have happened if Nazi Germany had won, or if the Russians had landed on the Moon before we did, as in For All Mankind. But we're getting away from an era of, like, here's a trillion-dollar boat.
It depends who you talk to. Shouldn't we? I mean, if that boat can be taken out by a $3,000 drone, how effective is it? Yes, for sure. The asymmetry of a threat of an aircraft carrier against a large fleet of drones
It is very much, if you talk to Sam Paparo, who's the head of Indo-PACOM, he will say it is all about mass on target. There are certain things that automation cannot do. And he wants what he calls, which I guess is a technical term, a hellscape in that region, the Taiwan Straits and South China Sea, so that you make it really impossible for them to have sort of any military dominance.
But it is an era where it's about... you saw this again with Iran and Israel and Gaza and Syria: missiles and counter-missiles and rockets and intercontinental ballistic missiles and hypersonics and space weapons. It is just about going back to, almost, like, a Planet of the Apes era.
One ape threw a rock at another, or a twig, or a stone. Yeah, the weapons get more powerful, but the behavior doesn't change. We're back to throwing projectiles at each other. They're automated, they're at speeds, or at levels of attritable mass, that overwhelm defensive forces.
That is the battlefield. Do you think values become a disadvantage in some ways, then? Like, for example, if the United States were, oh, we need a human operator to pull the trigger, and another country was, no, it can be completely automated. For sure. And therefore, in a dogfight, they're more likely to win. Look, this is already happening in the information space where we have certain... And in the autonomous space. I was...
In the Pacific region with SOCOM, and there's a drone operator who's flying the drone. There's another drone operator who's piloting the weapon system. And there's two lawyers.
So they're helping the commander, who is effectively given, like, a God shot of, you know, how many combatants and civilians can be killed, and in what ratio. And sometimes it's like five to one or 10 to one. But there's lawyers that can authorize it, because we have certain rules of engagement that frankly give these military personnel the ethical comfort that this is a superior system.
But for sure, if there are people that don't have that same moral code, in some cases they can be at least temporarily advantaged. Well, you can think of that through AI too, not just military, right? If we restrict, we put restrictions on any technology and another country doesn't,
sometimes that can cause an advantage to another country. China has the 50 Cent Army, you know, these people are getting, you know, 50 cents for every tweet and piece of information they put out. The State Department, when they want to tweet something out through groups, there's literally, like, a disclaimer that says, and it's like one woman in Tampa that's doing this, like, this was sponsored by the State Department. So we have these ethical restrictions which
definitely tie our hands behind our back in some cases. And our enemies will always try to weaponize this. So, I mean, you can look at many vectors today that don't seem like they're threat vectors, but they have been weaponized. Social media information, we know. And the best fix for that is identifying the bad actors and also inoculating people with a heightened degree of skepticism. But the vast majority of the American population will not be inoculated. They will see the things that they want to see. They will follow the accounts that they want to follow. And
And then occasionally those accounts will start to pepper in other information that they want people to believe. And that's how information cascades can go. We have open systems, immigration. You're seeing a lot of the rise of the populist anti-immigrant movement in part because in some cases it's a result of good intentions of providing sanctuary cities and wanting to help people and provide amnesty and help immigrants come here because that's what our country was built on. And then
You want those people to assimilate, and when they're not assimilating... But then you also have bad actors like Putin, who has weaponized immigration and put migrants on people's borders to create pressures, so that you can get a political movement from inside the country that will be sympathetic to the nationalist sensibilities. And he's orchestrated that very well. And so infiltration into our university systems, which accept foreign capital, and you see Qatar that is
very massively funding domestic US universities. China doesn't allow that. You know, the US is not able to come in and sponsor Chinese universities. TikTok, of course, a huge one, right? We banned it years ago, right when it had become TikTok, from Musical.ly, in part because at the time I seemed like a conspiratorial nut saying, I don't trust this with the Chinese Communist Party having control over it. And then, behind the scenes, myself and many others have played a role in helping to orchestrate what I hope
will not be thwarted by Trump: seeing this divested. I have no problem with people using TikTok, but it should not be under the algorithmic control of the Chinese Communist Party. And it's on a lot of government phones. It's insane. It's really interesting. Why haven't we seen more isolated attacks that are cheap and using technology? And what I mean by that is, you know, a nefarious actor can probably, for two or three grand, effectively...
I guess the question would be to what end. And so you still have to realize that many people, even if they're like evil geniuses, have an objective in mind. And do they just want to sow chaos and create distrust in a system and have people scrambling? Or is there an opportune time to strike?
Think about the Israel operation with the beeper plan. This was 10 years in the making. Now, they could have done it at any point in time, five years ago even, but they waited until a precise moment. And so being able to do the thing and deciding when you do the thing are two different decisions.
But I think that we've been warned for a very long time about hacking and infiltration into our physical infrastructure. For sure, somebody could shut down air traffic control. And what we saw just recently between the Black Hawk helicopter and this regional plane from, was it Kansas City? You know, crash in Washington, D.C.,
You could see the FAA shut down, you know, and have a glitch. You can have infiltration into our banking system. And, you know, just like the Sony hack, right? The big thing with the Sony hack was not that the systems were disabled. It's that information was revealed. You want to create civil war in this country, just reveal everybody's emails for the past year. The things that we've said about each other, you know, I mean, that like reveal truth in a sense, right? It was the great irony. So the obfuscation of these things in private helped to create a civil society.
Our water systems, our infrastructure, our traffic lights, you know, I mean, all the things that you've seen in sci-fi movies when things just start breaking. I'm actually amazed that our infrastructure, globally, but, you know, even in New York City, even in this office... There's a million SKUs in this office.
Above our heads right now, there's an HVAC system. The fact that we trust this system and it's not going to fall and explode or blow up, and then we're shocked when these things do. But I'm constantly amazed that the entropy, the forces of entropy are constrained by either really good engineering or inspection of systems or whatever it might be, the maintenance of systems, which is another really interesting thing, this idea of maintenance. Yeah.
Like, the past 10 years have been all about growth, growth, growth, growth. You go to a financial statement: on CapEx, you've got growth and you've got maintenance. And I think in a world where the cost of capital keeps going up for a variety of reasons... I think there's a ton of dry powder in venture capital and private equity, but a lot of it I call wet powder, because this money is basically reserved for companies, and people don't realize it. Reshoring, all of these things, you know, tariffs, they're going to be inflationary. There would be a rising cost of capital. If you have a rising cost of capital,
if you are a CFO or you're on a board and you're thinking about good governance and capital allocation, we're not buying the next new, new hot thing unless we really have to, like AI today. Yeah.
We're thinking about how do we maintain the existing assets we have? And those assets could be satellites up in space. They could be military installations. They could be our telecom infrastructure, our bridges, our waterways, sanitation, processing, our HVAC systems, our industrial systems, all those things that need to be maintained. So I'm increasingly interested in new technologies. This could be software services, sensors, all kinds of things that can help apply technology.
to old systems, to maintain them for longer, depreciate them for longer, let them last for longer. I think there's going to be increasing demand for maintenance of systems. But I'm amazed that everything around us is just not constantly breaking. It truly is, like, miraculous. It is, when you think about it, right? Yeah. What do you think of DOGE? The currency or the initiative? The initiative. We don't talk crypto. I think it's a virtuous thing,
because it's shining a spotlight on a lot of things that were just done because they were done, and you get this bloat, or in some cases there was, like, overt obfuscation. And so I think sunlight heals all, and putting a spotlight on ridiculous spending or ridiculously inefficient things... You know, I will say, I grew up sort of a center-left Democrat my entire life. The first time you go to the DMV, you become a Republican. Like, you know, it's just like...
You want systems that have competition because competition makes things better because if you have a monopoly on something, you don't have to improve. If there's one regional carrier for an airline, if there's one restaurant, if there's one place you have to go to for your passport, you don't want that sort of centralized control because the service is going to suck because they don't have to do any better. So I think if you can put
a spotlight on excess and waste and bureaucracy and at least begin the conversations at a bare minimum of, wait, we're spending how much money on what? I think that that's a virtuous thing.
Whether or not these things will be effective at really reducing costs, TBD. But it actually seems quite positive that they may hit some of their targets of trying to reach, what is it, a billion a day or more? And if that could end up reducing the deficit by 10% or 20%, let alone 50%, going from 2 trillion to a trillion, that would be incredible. So whatever the motivation, I don't believe it's patriotism. It could be intellectual competitiveness. It could be power. Whatever the motivation...
The means to the end, I think the end is a virtuous pursuit. If you were to take over a country effectively and you were in charge of policies and regulations, what would you do to attract capital and become competitive over the next 20 or 30 years? What sort of things would you implement? What would you get away from and not do?
Well, I have an adjacent answer, of if I were Secretary of State or Secretary of Defense for the day, which I'll give you first, which is: I would really put priority on Africa as a continent, and particularly the Sahel and the Maghreb, because I do believe that,
between violent extremists, Russian mercenaries, and Chinese infrastructure, you are one terror event projected into Europe away from creating the next Afghanistan, and suddenly NATO and the US are in there dealing with ISIS. You're already seeing, you know, the first authorized strike by Trump on ISIS in Somalia. Yeah, I saw that today.
And so you've got Sudan, Chad, Mali, Niger. Like, it is just a hotbed of people that were coming from Syria and Afghanistan, Islamic extremists. It is a bad situation. And I think that we should be proactive there before we have to be reactive, which is a lot more costly in lives and money and blood and treasure. The second thing I would do is a hemispheric hegemony declaration. I just went to Nicaragua.
For a variety of reasons, we went there instead of Costa Rica, but I felt much safer in Costa Rica. And I was worried that I was not going to be able to leave Nicaragua. And we went with a friend who happens to be a prominent journalist. He was not allowed entry into the country. So it really threw a wrench into our family vacation, his family of five, my family of five.
Because the government is trying to take over the banking system, and they don't want it to be covered by financial journalists and these kinds of things. And you look at who is in there, and you literally have a presence from Hezbollah, from China, the CCP, from Iran. It's a bad situation. The places that we think of in
most of Central America, the Caribbean, South America as vacation spots, where we get our coffee and we go on a nice vacation: massive infiltration from adversaries. And so I think we are losing the game. And I would declare almost like a new Monroe Doctrine kind of thing, where we say the entire Western Hemisphere... You've got a billion people, both
the ability to project into the Pacific and the Atlantic. You've got mostly English- and Spanish-speaking people, save for Brazil with Portuguese. You've got a ton of resources, a ton of brilliant, educated doctors and whatnot. And I would just shore up this hemisphere, particularly against CRINK, you know, China, Russia, Iran, North Korea, and their influence. If we were worried about, like, Cuba and the Cuban missile crisis and things proximate, 90 miles or whatever from Florida,
I think that China is doing very smart strategic things. So us going back in and saying, we're going to, you know, reclaim the Panama Canal and our influence on it, like, you know, forgetting about
provocations of Mexico, of, like, you know, the Gulf of America versus Gulf of Mexico. But I think that having influence in that region is really important. So those would be the two things as SecDef or SecState that I would do: declare hemispheric hegemony and make sure that we shore up our allies in the region and get out our adversaries and their presence, in part because there's so much commerce and money and infrastructure that's going in, and then focus on the Sahel and Maghreb in Africa. For country competition, brain drain, you want the best and the brightest.
You need to fund basic research and basic science. It should be undirected. That's the serendipity and the randomness and the optionality that leads to great breakthroughs. You want capital markets to be these low entropy carriers for high entropy entrepreneurial surprise. So predictable rules and regulations. I would lower, I don't know why we don't have a flat tax. I mean, I know why we don't, but I would just have a flat tax, make it super simple.
Rich people are going to scout around and figure out how to get around the tax system anyway, and poor people are burdened by it. I get progressive versus regressive, but I would just simplify our tax scheme massively. Anybody that is coming here and getting an education in this country, I would staple a visa as long as they stay here or work for an American company for at least five years. Let them become, you know, we want the best and the brightest here.
As an example again, and I don't mean to make this all about China, but they are our most dominant adversary: 50% of all AI undergrads in the world today are being graduated by China.
In our own country, in the United States, 38% of researchers in AI are from China. So we're outnumbered even domestically. It's a big deal. And we used to attract something like 23% of all foreign graduates here. That's down to 15%. People are either going to other countries or staying in their own country. And so we need that. That's what won us World War II.
You know, if Einstein would have stayed in Germany or... What causes that to happen? Is it the tax rate? Is it opportunities available? Is it housing costs? Like, what are the factors that go into people leaving? Well, start with the attracting part. You know, as Walter Wriston said, people go where they're welcome and stay where they're well-treated.
So we should be welcoming. Now, there's a debate about immigration, and we should distinguish between the bad people and the brilliant people; we should want the latter here. That just comes down to basic vetting. Right. But some of that is exploited, you know, by a lot of these consultants with Wipro and some of the Indian BPO, business process outsourcing, firms. Housing is a difficult one, but people can always figure it out. New York City is expensive, but you can live
in Long Island City, in Brooklyn, in Queens. But I think...
Yeah, housing availability. I mean, our cities are so rich and full of culture and people, and particularly if you're a young person, you want to be around that density because you're trying to find peers and a mate and all of that. Even if you're from another country, you go to New York, you can find your enclave, Korean or Chinese or Russian or Ukrainian or Israeli or Caribbean; it's all here. So yeah, I think just having a culture that embraces this and encourages it.
You already have a robust venture capital system of risk-taking. Many other countries don't have that, so that's another thing. But if you were to design a system from scratch, you want openness with security,
So some means of vetting. You want a great education system that can attract people, who view it as a status symbol to have graduated from that particular school. And people want to be around people, whether in a company or in a country, who are like them, who are competitive and highly intellectual, whom they respect or admire and want to compete with. So that's number two. Third, something we did in the 1980s, I think it was 1980, was the Bayh-Dole Act, where government funding for research would allow the university
and the principal investigator to actually own the intellectual property that became an asset. That asset could be licensed to a company. It absolutely opened the floodgates for venture capital to be able to commercialize that. That happened to coincide with ERISA and allowing retirement plans to go into venture capital. So now you have a pool of risk capital, which you need for taking risk on unseasoned people and unseasoned companies. And then you need a robust
capital market system to be able to continue to allocate money. But again, capital goes where it's welcome and stays where it's well-treated; true of human capital, true of financial capital. And so a rules-based system, a strong military. If you were starting a country from scratch, you're not going to have that, so you need great allies then. Think about Singapore. Yeah, I think that's a phenomenal model. Singapore is a great model. That's awesome. I hope some of the government people we have are listening to this.
I want to come to IP for a second and copyright and then wrap it up here. So do you think that AI should be able to create IP or copyrighted material? Like if I tell AI to write a book, should that be copyrightable? And who owns the copyright, the AI or me for the prompt? It's super complicated because, you know, the first debate about this, which is the great irony, right? Because
OpenAI investors and stakeholders were up in arms that DeepSeek's R1 stole from OpenAI, but you can make the argument that OpenAI has trained on the repository of the public internet and, you know, every piece of art that's ever been produced and whatnot. Now, if you were an art student and you went to the Louvre or to the Met or to MoMA and you sat there and studied it or took a picture of it, well, we learn through copying. We learn through imagery and imitation, and we remix these things.
There's this great series, what is his name, Kirby Ferguson's Everything Is a Remix. I just sent it to a friend; it was updated last year. It's so brilliant in its compilation of every facet of culture that you love, from books to your Tarantino movie to the Beatles to art to scenes in movies, like...
it was all copied from something, you know? And you're like, wait a second, that riff came from this 1940s song from this African-American blues guitarist that...
John Lennon or Paul McCartney stole. And so everything was sort of stolen from somebody. It was imitated, tweaked slightly. And by the way, that's what we are, right? I mean, you get two people who exist and then there's this genetic recombination of their source material. And every one of my kids are different, but they came from there. And so remixing is like how everything happens. And it's like Matt Ridley said, ideas having sex. And so to your core question,
Yes, I think that if I do a calculation and I'm using a calculator instead of like doing math by pencil, that calculation is still an input into my output. If I'm using AI to generate art and it's my prompt instead of
the gesture of my brush and, you know, the strokes of my hand, I think it should still be mine, even if it was trained the way a great art student was, by staring and learning and studying and then emulating. And then these things evolve. You look at Picasso through all the different phases of his style, from realism and portraiture to cubism and abstraction. These things just evolve until you find the white space that defines you. And that goes back to: if I train an AI on everything I've ever written, well, my voice, a new voice, is rare and hard to create. So I actually think we should probably worry more about how you break free from the constraints of these things than,
You know, should they be copyrightable? Well, that goes back to our earlier conversation. Do we just end up in this lane that we can't get out of? Or we don't even recognize we're in a lane, I guess. Right. In some ways, even more devastating. And the brilliance of all this, again, like I'm a big believer that we make our fictions and our fictions make us. And if you've watched Westworld, I don't know if you've... Watched an episode or two, yeah. The first episode, you know, you have a guest who comes to the park and...
And he's sort of squinty-eyed looking at the host, who's the actual robot. You learn later on, but he doesn't know at the time. And he's like looking at her and she goes, you want to ask, so ask. And she knew what he was going to ask her. And he goes, are you real? Yeah. And she goes, well, if you have to ask, does it matter? And, you know, I'm going to sort of spoiler alert on Westworld. It's all about these hosts interacting with the guests. And they're there to serve the guests. But in fact, it's the opposite.
Because every host is watching and learning every small nuance, every gesture, every inflection of your voice, every cadence of your speech. And it's learning you so that it can basically create a perfect simulacrum of you and 3D print biologically a version of you. And so it's a really profound philosophical question about how we're interacting with these things. But all these things have been trained on the sum total of all human creation.
And now they're being trained on the sum total of human creation plus artificial creation. And some of that is done with human prompts and some of it is going to be done automatically. But I just think it's going to be part of the total overture of creation. And I think it's a beautiful thing. Does anything about this scare you? About AI, like the direction we're heading? I think in the near term, the thing that scares me is
Again, scarcity and abundance: what becomes abundant is people's ability to use AI to produce content. And I don't know, if I'm getting an email from somebody, did an AI write it or optimize it, or was it really a thoughtful note from John, this young college student who's persistent? Was it really them? Can I infer something about their persistence and their style of writing, or did they put it into an AI and know, from the repository of what influences me and what I've talked about and what I care about, just what to say? You know, so many people are like, oh, I heard you on this podcast and I felt compelled to write to you because I too care deeply about family and, you know, blah, blah, blah, right? I mean, that's surface-level stuff, but if somebody is more nuanced about it, am I being manipulated by them or by the AI? And if it's them, there's a cleverness to it that I might admire. If it's just the AI, I feel suddenly more vulnerable. So what becomes abundant is
not just the information, or misinformation, but the production of it; what becomes scarce is veracity and truth. And that to me is less scary than it is something you need to be inoculated against, immunized, vaccinated; you're almost going to become a little bit more distrusting. But like your reactions right now: I might say something and you might say, "Oh." And maybe you actually thought it was profound, or maybe you're like,
This is not interesting at all. But there's something authentic, right, about this. And we are reading each other and reacting to each other. That to me is going to become ever more valuable. So our humanity, the interactions are the scarce thing, even if and as through other mediums, it's hard to tell.
I love that. I always love talking to you too. So I get so much energy and ideas out of our conversations and I'll be chewing on this for weeks. I know we always end with what is success for you? You've answered this before. I'm curious to see how it changes. It really is the eyes of my kids. It is for me, them saying my dad did that or my dad made that or my dad was present for me. And I think it's the story I tell myself about my own life and
my relationship with my father and wanting to invert that. And so for me, success is them saying, I'm proud he was my dad, he was a great father, and I'm proud that he does all these things. And when we fund a company, or some of these secrets that I talked about, I share them with my kids. So I was taking my middle daughter... my oldest plays tennis, my middle does soccer, my little guy plays basketball like 10 days a week. He's better at nine years old than I was at 19, and I was reasonably good.
And I like sharing these stories. So I'm like, you know, next week there's a story that's going to come out about this particular thing and nobody knows about it except the company and now you. And they're like, oh my God, really? And I'm like, yeah. And like, you can't tell anybody, you know? And I just, I love that feeling. That's awesome. And I do it in part, not because I want them to learn about it, but I want them to be proud of me as selfish and vainglorious as that is. And to be like, oh, my dad's cool, you know? So.
I think you're cool. You're not my dad, but man. I'll tell you, my 15-year-old daughter definitively does not think I'm cool. She says, you are so cringe. I think everybody's kids say that, right? It's the same with my kids. Instead of telling them something directly, and they might be listening to this, I'll sometimes get my friends to tell them, and then all of a sudden it holds weight. But if I tell them the same thing, they're like...
Same thing with our spouses. Thank you very much. Shane, always great to be with you. I admire what you've built and the repository and compendium of the ideas and the minds that you've assembled. It's like a great thing for the world. Thank you. Thank you.
Thank you for listening and learning with me. If you've enjoyed this episode, consider leaving a five-star rating or review. It's a small action on your part that helps us reach more curious minds. You can stay connected with Farnam Street on social media and explore more insights at fs.blog, where you'll find past episodes, our mental models, and thought-provoking articles. While you're there, check out my book, Clear Thinking.
Through engaging stories and actionable mental models, it helps you bridge the gap between intention and action. So your best decisions become your default decisions. Until next time.