
Bitcoin’s Bull Run & the AI Arms Race: What You Need to Know w/ Salim Ismail | EP #166

2025/4/23

Moonshots with Peter Diamandis

Chapters
This chapter analyzes Bitcoin's price volatility and potential future outcomes, either reaching a million dollars or falling to zero. The discussion also touches upon the role of AI in understanding market trends and the importance of long-term investment strategies.
  • Bitcoin's price is highly volatile, with potential outcomes ranging from zero to one million dollars.
  • AI tools can help in understanding market trends and making informed investment decisions.
  • Long-term investment strategies are crucial for Bitcoin, as timing the market is difficult.

Shownotes Transcript

The price of Bitcoin is back up above $90,000. It's pretty binary: either Bitcoin goes to zero or it goes through a million dollars a Bitcoin. There's no real middle ground. The only question is when either of those happens.

It's not that we've just gotten smarter. It's the tools that we have. It's AI that's going to help us understand what's going on. You'll soon have a Jarvis type personal AI that will have access to all of that sitting next to you. Google's got access to all of its Street View data, massive amount, Google Earth, YouTube.

All of that is very real-world data that can be trained on. Also, we're not even touching the deep web. We have so much data in databases, right? The amount of information on the crawlable web is very limited. The acceleration this portends over the next five years is hard even for me to fathom. Now that's a moonshot, ladies and gentlemen.

Everybody, welcome to Moonshots and our episode of WTF Just Happened in Tech This Week. I'm here with Salim Ismail, my buddy. Salim, good morning. It's an early morning here where we're recording this, but a lot's been happening in the tech world, and I'm excited to get it out. How are you doing today?

I'm doing great. And there's so much happening. It kind of gets overshadowed by all the chaos happening in the wider world, but the tech world is moving unbelievably quickly. Yeah, no, for sure. And I don't want to overstate it, but I do believe the tech world is far more important for the long term.

Big time. All right, let's jump in. You know, one of my Strikeforce members, Max Song, just landed in Beijing for some meetings yesterday.

And he sent me this photograph on the left. This is what you see in the Beijing airport: it's basically China going all in on robots and AI. And what you see at JFK airport, which I recently went through, is basically fashion ads. And there's something here that is important to point out, right?

Part of China's growing culture is being super tech-forward.

What do you think about this? I think that's exactly right. And, you know, they're facing a massive population crisis, so they actually need the robots to automate the workforce; otherwise there won't be anybody left to do the work over the next decade or two. So they don't have much choice. But for me, the underlying irony here was the ads, for Ralph Lauren or, say, Gucci or whatever. One of my boys here. Hi, Selene.

He's your godfather here. The underlying thing here: all the Ralph Lauren or Gucci handbags, or the Birkin bags, are all made in China anyway. I thought that was kind of an interesting segue for this particular slide. But they're focusing heavily on this, and they have to, and it's going to be amazing to see as they roll it out. That paradigm is going to spread across the whole world.

We hear a lot about Optimus and Figure here, and Digit and Apollo and X1.

There's an equal, probably greater, number of robots under development in China, because the government is really supporting the development. I think we're going to start to see this. A couple of episodes ago, we talked about Google Wing and whether you can deliver something by drone, right? We were all like, oh my God. And I got a ping from one of my people over there going, we've been doing this for years; what are you guys talking about? So yes, the future is here, just not evenly distributed.

This is another one that I wanted to share here today. For those of you who are listening rather than watching, this is a graphic of the latest AI models' IQ test results, plotted on the distribution of human IQ,

which goes from 50 on the far left to super-genius 160 on the far right. Of course, the average human IQ is 100 by definition. And what we've seen over the last couple of years is the rise of the large language models on this IQ scale. About 18 months ago, it was Claude 3 that first reached an IQ of 101.

And then we saw OpenAI's o1 get to, I think it was 120.

And on this distribution curve, what we're seeing is, again, OpenAI leading the way with their o3 model at an IQ of something like 133, and Gemini 2.5 just behind that at an IQ of about 127. Pretty extraordinary. Yeah.

What do you think here? I mean, you look at that spectrum and it exactly mirrors the global human collective, right? A few on the right, a few on the left, and a cluster in the middle. The big difference, of course, is that AI will continue to shift towards the right, and humans will be mostly stuck in the middle with all of the archaic things

that we consider and deal with in our little one-liter, one-and-a-half-liter brain in this small cavity. It sounds like a little Fiat car with a little engine in it. For reference again: the o3 model looks like 133 on this map. Obviously it's not exactly accurate, but

But a genius-level IQ on Mensa, would you say, Salim, is like 140? Yeah, Mensa candidacy comes in at 140; that's considered genius level. I think somebody, Donna, mentioned that Einstein had 160, right? I just want to do my normal commentary here and say that this is great, but it still feels to me that there's so much more we could be thinking about in terms of

measuring decision-making, emotional intelligence, spiritual intelligence, et cetera. There are so many other classes. I know we have a couple of commentaries on the slides, so I'll do it later, but the IQ test is one piece of it. It's great: we'll all have a genius in our bedroom. And what's great about this is that typically, if you want to deal with somebody with a 140 IQ, a genius, they have no patience for fools and they're hard to deal with socially. Whereas the AIs will be easy to deal with socially, because you'll be able to train them that way.
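As an aside for readers: the IQ figures being quoted here (133 for o3, 140 for Mensa, 160 for Einstein, all as cited in the episode) can be placed on the standard IQ scale, which is normalized to a mean of 100 and a standard deviation of 15. A quick sketch of the percentile arithmetic, using only the normal distribution; the model scores themselves are the episode's numbers, not independently verified:

```python
from math import erf, sqrt

def iq_percentile(iq, mean=100.0, sd=15.0):
    """Fraction of the population scoring below `iq`, assuming IQ ~ Normal(mean, sd)."""
    z = (iq - mean) / sd
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF via erf

# IQ figures quoted in the episode
for label, iq in [("o3, as reported", 133), ("Mensa, as cited", 140), ("Einstein, as cited", 160)]:
    print(f"IQ {iq} ({label}): above {100 * iq_percentile(iq):.2f}% of people")
```

An IQ of 133 already sits above roughly 98.6% of the population, and by 160 the scale is effectively saturated, which is exactly why a Mensa-style score stops being meaningful once models push past it.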

So that's the most exciting part for me around this. Yeah, and I think one of the points you made earlier that's important to realize is that there is no artificial limit:

as the AI becomes more intelligent, it just continues becoming more intelligent. And there's going to be a point at which the idea of a Mensa IQ score is meaningless, as these things hit IQs of 200, 500, a thousand; God knows what that means. Yeah. And do two AIs of 160 each add up to 320? That's a question I'd like to ask them. Everybody, I hope you're enjoying this episode.

You know, earlier this year, I was joined on stage at the 2025 Abundance Summit by a rock star group of entrepreneurs, CEOs, investors focused on the vision and future for AGI, humanoid robotics, longevity, blockchain, basically the next trillion dollar opportunities. If you weren't at the Abundance Summit, it's not too late. You can watch the entire Abundance Summit online.

by going to exponentialmastery.com. That's exponentialmastery.com. All right, let's go on to our next slide here. The question is, and I'm often asked this, who's leading the AI race?

Right. And there are two answers worth pointing out. The first is that today, on almost every metric, Google's Gemini 2.5 is dominating. And here's a slide I just put together with the Artificial Analysis intelligence index. We see, again, that these models are all so close, but

Gemini 2.5 is out in the lead: the output tokens per million, the price of input and output. And then, of course, the most interesting metric, at least from a conversational standpoint, is called Humanity's Last Exam, on reasoning and knowledge. I find this fascinating. What do you think about that?

I mean, look, at some level, human beings should be very bad at this, because if you look at the aggregate knowledge of human beings, of scientific inquiry over the centuries, there's a staggering amount of data that we have in the world.

I remember seeing a random list of 12 doctoral theses, right, that were defended at my alma mater, Waterloo. And for half of them I couldn't figure out what even the subject area was; they were so detailed and specific, right? And so the fact that an AI has instant access to all of that is incredible, and we will be able to answer any question. And I'll go back to the point that you'll soon have a Jarvis-type

personal AI that will have access to all of that, sitting next to you, and can answer any question. And when you look at what Humanity's Last Exam is, it's a list of almost random test questions across quantum physics and archaeology and biology, and it's the sort of

exam that you have nightmares about later on. That's right. I might actually be able to pass my thermodynamics exams. Oh my God, do you still have those dreams about going back, like, I missed that class and the finals are coming up? I'll give you a quick anecdote here. There was one exam we had.

It was a three-hour exam, okay? And the exam question was: a satellite at altitude A is orbiting the Earth. There's a river underneath that's flowing north to south. Because of the rotation, the water on one bank of the river is slightly higher than on the other. Work out which bank, and by how much.

And there were like two lines on this exam, and I had to turn it over going, sorry, I think I've missed a page; where's the rest of this exam question? And that was it. I then had to assume a satellite orbiting at altitude A and work the whole thing out. I'm still having nightmares about that. It was just a horrible exam. It must be. I don't ever want to encounter it. And this is why you need the AI: you say, yeah, you work that out for me and come back to me with the answer. Yeah.
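For anyone curious, the physics behind that exam question is the Coriolis effect: a current moving across the rotating Earth tilts the water surface by roughly 2Ωv·sin(latitude)/g across the stream. Here is a back-of-envelope sketch, where the flow speed, latitude, and river width are assumed, illustrative values, not numbers from the actual exam:

```python
import math

OMEGA = 7.2921e-5         # Earth's rotation rate, rad/s
G = 9.81                  # gravitational acceleration, m/s^2
v = 1.0                   # assumed southward flow speed, m/s (illustrative)
lat = math.radians(45.0)  # assumed latitude (illustrative)
width = 100.0             # assumed river width, m (illustrative)

f = 2.0 * OMEGA * math.sin(lat)  # Coriolis parameter at this latitude
dh = f * v * width / G           # height difference across the river, m

print(f"Height difference across the river: {dh * 1000:.2f} mm")
```

With those assumptions the difference works out to about a millimeter. And since the Coriolis deflection in the northern hemisphere is to the right of the motion, a river flowing north to south piles water slightly higher on its west bank.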

Right. So today, just to summarize: Google Gemini 2.5 is dominating, at least on performance metrics. But here's another metric, which is revenues. The business side. Yeah. In this category, OpenAI is trouncing the competition. So, you know, you've got to give them unbelievable credit, right, for democratizing this and creating a total category out of nothing.

And the fact that they're making this much money is just so awesome. It should be an unbelievable testament for any startup founder wondering, could I make a difference in an area where you've got Google, Microsoft, and Meta all playing? And these guys come along and completely crack the whole thing open

and are actually dominating on the revenue side. I think it's just a great testament to the beginner's mind, the founder mode, all of that stuff, and why startups will, from now on, always be the best mode of building and bringing new ideas to market. So let me ask you a question here. There are two points I want to make on this one. The first is

that, if you remember, Google really was in the lead on AI, ahead of everybody. Yeah. And they chose not to roll it out on the open internet because of safety concerns, right? It was sort of an unspoken point that AI needs to be properly controlled. And then OpenAI comes out and just lays it all out there, and Google is playing catch-up. So I'm curious:

How much of this is first-mover advantage? The second point is something I spoke about in my book with Steven Kotler, I think it was in Bold: the idea of a user interface moment.

The idea is that a piece of software makes a complex technology easy to use. The very first user interface moment that I noted was Mosaic, when Andreessen put Mosaic as a browser on top of ARPANET. All of a sudden, the number of websites exploded. And ChatGPT is a user interface moment on top of the GPT models.

I think that's right. You're talking about when you go from deceptive to disruptive, right? There's an inflection point in usability. The two examples I use the most are: the iPhone made the smartphone usable (the Nokias were pretty clunky before then), and Coinbase made Bitcoin easily purchasable.

With a click of a button, boom, it took off. So it's about making a complex technology simple in usability. If you look at, say, NFTs, it's very complex to buy an NFT; the usability is still way off, and therefore it hasn't hit mainstream yet.

This is the hardest part of technology: making something deceptively simple. Right. I remember when we were designing products at Yahoo, the graphic designers would spend hours and hours and hours trying to figure out how to reduce the pixels on the screen and just move something a little bit over. And you go, what the hell, is that such a big deal? But it turns out there's an unbelievably

big effect. Just a quick story here. On the Yahoo Mail homepage, it turned out that if you moved the send button five pixels over to the right, usage dropped off a cliff. Oh, come on. It's true; we had the data. They were like, we can't change this goddamn interface, because people are so used to having it right there

that they click it and then move to a different screen because they think they've sent it, and then they get pissed off later. So we could never move that send button, once it was anchored in the usability, in the psyche, of the user base. It's just such a weird psychological thing that goes on. Therefore, you almost have to have a totally new entrant, like OpenAI, to be the one that cracks it open. We've seen this repeatedly.

There's a reason that the electric car was created by Tesla and popularized by Tesla and not by the major car manufacturers. They're all coming at it from a car with sensors rather than software with wheels, right? And so this is...

On this chart here, what we're seeing is the end of December 2024, right? And this does not even include the massive gains that OpenAI has seen in the past four months. We're seeing OpenAI at like $2.5 billion of revenue and Gemini at just under half a billion, right? Five times less revenue for Gemini, and then Anthropic below that. This reminds me very much of what we saw with

Google and Bing in the search space, right? It's interesting: we humans tend to pick something and stick with it, and the cost of changing is so high.

Yeah. And, you know, they declared Google a monopoly, and Eric Schmidt would make the point that, look, there are five other search engines out there; we're one click away from obscurity, right? We have to stay on the cutting edge. And you've got to give OpenAI credit for rolling out new features on a constant basis and iterating the product very fast. They recently announced all the memory stuff, which I think is really cool.

Yeah, that is interesting, right? So there's basically infinite memory, where OpenAI's systems will remember all of your conversations. And one of the fun things to do is to go on ChatGPT, the o3 model, whatever model, and say, tell me about me, right? And, no, but seriously, it's...

I did that on Grok as well. And Grok said, I don't know about you. And I'm saying, yes, you do. And it says, well, you have to give me permission to look at your X posts, which was interesting. I would have imagined that Grok would not have had that requirement, but it did.

All right, let's move on here. One of the big areas where Google, slash Alphabet, is leading with DeepMind is the whole area of the impact of AI on medicine and biology.

And there was recently a 60 Minutes episode where Demis Hassabis, actually Sir Demis Hassabis, since he's been knighted, or Dr. Hassabis, as the case may be, was interviewed. And the conversation was around the impact of AI

on disease, ending disease, and leading to radical abundance. So I love the fact that the term abundance is now becoming sort of the topic du jour. Did you see the CBS interview?

I did. And I think it lines up with the conversations we've had, right? When you have all the data coming off our bodies... We used to measure the human being with four metrics: heart rate, blood pressure, maybe glucose levels. And now we have like 40 different streams of data via all the wearables, your coherence state, your VO2 max,

and Lord knows what else. And once you pour that into an AI and it starts correlating it with different medical conditions, it's going to do a hundred times better job in real time than any doctor could ever do. So now you've got a real-time AI doctor living with you, inside you. This is game-changing for catching stuff early, which is 99% of the deal for some of these endemic diseases,

and then finding amazing treatments and breakthroughs, along with CRISPR. This is why the conversation that we had last week with Ben Lamm just blew my mind. I'm still reeling from that conversation, because they're building all the fundamental toolsets to go and edit DNA, edit genomes, edit cells, do all the biological hacking, and make a complete suite of tools

where the human body, with 50 trillion cells, each cell governed by its DNA, is essentially a software engineering problem. And that's just a huge paradigm shift. By the way, if you're listening and you haven't heard the interview that Salim and I did with Ben Lamm, the CEO of Colossal,

please listen to it. It's extraordinary. You know, we talked about the dire wolves being brought back, but that's a minor part of the story. We talk about synthetic biology, the impact on the ecology, what it's going to take to bring back dozens of different species. Can you bring back dinosaurs, and what would you do to bring back dinosaurs? Anyway, a lot of fun conversation, so check it out.

Two spoilers for that one. It turns out you cannot ever bring back dinosaurs, which I found totally fascinating. You can simulate a dinosaur. You can simulate a dinosaur: you can basically take current chicken or reptilian DNA and then add the genes for the traits that the dinosaurs had. So it's not bringing them back from the original DNA, but I do love the idea

of engineering what are essentially new species. It would be sort of a nouveau dinosaur. Look, we talked about the fact that we have an old word for this; we call it breeding, right? For thousands of years we've been crossing dogs and cats and horses to select for the traits that we want. We've just gone through the film-photography-to-digital-photography equivalent, and now we can do it all in software and not have to

create mutant strains that we have to deal with afterwards, et cetera. There's one thing I just want to reflect on that I thought was super impressive: the fact that Colossal Biosciences has a team of ethicists for every project they consider,

looking at the ethical and moral considerations, which I thought was really profound, and a really great pointer to the fact that they have an MTP and that ethics are built into the model there. This is something I think we could bring into the AI world a lot more.

Let me show a clip of Demis, an amazing man. I'll actually see him this coming week; I'm at the Time 100 Awards, where we're announcing the winner of the $100 million Musk-funded carbon removal XPRIZE, right? And Demis is on one of the covers of Time 100

magazine this month, so he'll be there. Looking forward to seeing him. But check out this clip of Demis and his commentary about basically eliminating all disease in the next decade.

10 years and billions of dollars to design just one drug. We could maybe reduce that down from years to maybe months or maybe even weeks, which sounds incredible today. But that's also what people used to think about protein structures. It would revolutionize human health. And I think one day maybe we can cure all disease with the help of AI. The end of disease?

I think that's within reach, maybe within the next decade or so; I don't see why not. It was about 13 years ago that I had my two kids, my two boys. And I remember, at that moment in time, I made a decision to double down on my health.

Without question, I wanted to see their kids, my grandkids. And really, during this extraordinary time, where the space frontier and AI and crypto are all exploding, it's the most exciting time ever to be alive. I made a decision to double down on my health, and I've done that in three key areas. The first

is going every year for a Fountain upload. You know, Fountain is one of the most advanced diagnostics and therapeutics companies. I go there, upload myself, digitize myself, about 200 gigabytes of data that the AI system is able to look at to catch disease at inception: looking for any cardiovascular disease, any cancer, neurodegenerative disease, any metabolic disease.

These things are all going on all the time, and you can prevent them if you find them at inception. So super important. Fountain is one of my keys. I make it available to the CEOs of all my companies and to my family members, because health is the new wealth.

But beyond that, we are a collection of 40 trillion human cells and about another 100 trillion bacterial cells, fungi, and viruses, and we don't understand how that impacts us. And so I use a company and a product called Viome. Viome has a technology called metatranscriptomics. It was actually developed

at Los Alamos National Labs in New Mexico, the same place where the nuclear bomb was developed, as a biodefense technology. And their technology is able to help you understand what's going on in your body, to understand which bacteria are producing which proteins, and as a consequence, which foods are your superfoods, best for you to eat,

or which foods you should avoid, right? What's going on in your oral microbiome? So I use their testing to understand my foods, understand my medicines, understand my supplements, and Viome really helps me understand, from a biological and data standpoint, what's best for me. And then finally: feeling good, being intelligent, and moving well are critical, but so is looking good. When you look at yourself in the mirror

and say, you know, I feel great about life, that's so important, right? And so a product I use every day, twice a day, is called OneSkin, developed by four incredible PhD women who found this 10-amino-acid peptide that's able to zap senescent cells in your skin and really help you stay youthful in your look and appearance.

So for me, these are three technologies I love and I use all the time. I'll have my team link to those in the show notes down below. Please check them out.

Anyway, I hope you enjoyed that. Now back to the episode. So, you know, I just put out a blog this week on this subject, and the blog basically was saying: listen, I get criticized all the time for talking about longevity escape velocity, that it's coming and your job is to live an extra 10 years, to make it through the next decade in good health. Yeah. Don't get hit by a bus. Yeah. Don't get hit by anything. And,

You know, what I quote is Demis's commentary here, but also Dario, the CEO of Anthropic. About three months ago, he was at Davos speaking about

being able to double the human lifespan, potentially in the next five to ten years. And so, you know, it's not that we've just gotten smarter. It's the tools that we have. It's AI that's going to help us understand what's going on. Yeah. All right, let's move on here.

Here's an article that appeared this week. The title is: Anthropic's Claude AI Reveals Its Own Moral Compass in 700,000 Conversations. So what the team did here

is basically look at 300,000 anonymized conversations to understand what values Claude, in this case probably Claude 3.7, was exhibiting. And

I'm really happy to see what the values were, and I'll just read it for those who are listening. Five broad value categories emerged: practical (being helpful), epistemic (accuracy), social (being empathic), protective (safety), and personal authenticity.

So I think this was a bit of a clickbait title, but I think the notion is that our AIs are able to maintain a moral code. What do you think about this, Salim? Yeah, I think the big conversation that we need to have, and that is happening in every one of these companies, is the alignment conversation.

And, you know, these AIs are still black boxes, unfortunately. I had the chief science officer of Anthropic on stage at my Abundance Summit this past

March, and we were talking about just trying to understand, and this is part of his effort, what's going on inside the black box that is Claude 3.7. How is it actually operating? What is it actually exhibiting? And how do you make sure it's safe? Yeah, of course. Well, that's in the US, right? So the question is, what are the documents that China or Russia or other parts of the world will train their AI systems on?

We're going to find out. All right, here's news out of Silicon Valley. Pretty extraordinary; being in the venture business, I'm like, holy shit, this is crazy. So the article is: Mira Murati, the past CTO of OpenAI, and her new company, Thinking Machines Lab, raise $2 billion at a $10 billion seed-round valuation. This is the largest seed round in history.

And what was interesting is that this is double what Mira was seeking less than two months ago, meaning there's so much capital being thrown at this. One of the references that we had at the Abundance Summit was there's a billion dollars per day being invested in the AI space today. Insane.

So, you know, I was talking to an angel investor about this, right? And he was going, this is total madness. I've got two thoughts on this. One is, you're supposed to keep startups very lean and make them kind of beg for money and always hunt. What are they going to spend $2 billion on, except for data resources, et cetera? That's the question I've got: what's the use of funds that justifies this?

And on the other side, this angel investor is complaining, and I was like, well, you know, if you could be her, you'd be her. If you can raise $2 billion, you go do it, and you clearly can in this market. So a fair bit of froth here, but God, all power to her, and hopefully they deliver on that.

But it's not hard to imagine, looking at the rise of OpenAI and others, that you could build unbelievable value very quickly. The precedent has been set. Can the team execute would be the question. Yeah. The valuation for OpenAI, as we talked about in the last episode of WTF in Tech, was $300 billion or so.

I guess the question is, can you ride it from a $10 billion valuation up to a $300 billion valuation? Pretty frothy, pretty frothy if you ask me. And there's tremendous pressure on Mira to build value at that point. I mean, one of the biggest mistakes I've ever made as an entrepreneur is raising my valuation too fast. Yes.

But if she's got $2 billion in the bank account, she probably doesn't need to do another raise for a while. But can she get the revenue? If you look at venture history, the companies that raised money at the height of a boom market, when it was easier to raise, never did very well afterwards, because they raised too much money, they got bloated, and then when the fundraising market collapsed, they collapsed. Right.

The companies that built during lean fundraising times all did incredibly well, on average much better than the others, because they had to struggle, they had to fight it out, they had to be much more selective about which projects they took on. And they did much better. So that would be the danger here: you have to have incredible discipline to raise a lot of money and then not get bloated.

Yeah, I know with Dave Blundin, my partner in Link Exponential Ventures, when we're looking at a deal, especially in the AI space, we're getting in at the pre-seed, the founding day, early seed. But I'm looking for a company that has revenues even at the very beginning. This idea that I'm going to invest billions of dollars and then get the revenues is awfully dangerous, especially in today's world. Yeah.

So here's another conversation, and Demis alluded to this, but let me just read it. A Google paper on shifting AI training to real-world experiences: AI is outgrowing human-made data; next, agents will learn through experience and self-generated data; and experience-based learning lets agents reason, plan, and act with long-term goals and

autonomy. So, you know, Google and xAI are in very unique positions, right? Google's got access to all of its Street View data, a massive amount, Google Earth, YouTube. All of that is very real-world data that can be trained on. Well, and five gajillion Gmail accounts. I mean, my God, you know.

And of course, xAI is training on X's data and Tesla's data, and soon humanoid-robot data. And so I don't think there's going to be any kind of data limitation, especially as we start going into the real world.

Well, also, we're not even touching the deep web. We have so much data in databases, right? The amount of information on the crawlable web is very limited compared to the deep web. And so it's like one thousandth the number. And so there's huge amounts of data sets waiting to be tapped.

There's a phrase companies used to use: data is the new oil. And people have not figured out how to refine that crude oil into something useful. We're just starting to get to that point now; some companies in our ecosystem are working on it today. I think this is going to be a big deal, and it occurs to me like the shift from machine learning to deep learning, where in machine learning you extracted conclusions by analyzing a big data set,

and in deep learning you went through it experientially and built up knowledge as you went along, like playing chess and learning that way at light speed. And this feels to me like that same type of approach: these agents will start to learn as they do things, they'll have a feedback loop built in, and they'll accelerate their learning very quickly. And they'll do it in the real world, in a dimension that makes it very human and very useful. Yeah.
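That feedback-loop idea can be made concrete with a toy example. Here is a minimal sketch of experience-based learning, an epsilon-greedy bandit agent (my own illustration, not anything from the Google paper): the agent generates its own data by acting, observes a reward, and folds it back into its estimates, rather than training on a fixed dataset.

```python
import random

def run_agent(true_rewards, steps=5000, epsilon=0.1, seed=0):
    """Toy agent: learns the value of each action purely from its own experience."""
    rng = random.Random(seed)
    n_actions = len(true_rewards)
    estimates = [0.0] * n_actions
    counts = [0] * n_actions
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.randrange(n_actions)  # explore: try a random action
        else:
            a = max(range(n_actions), key=lambda i: estimates[i])  # exploit best so far
        # The "world" returns a noisy reward; this is the feedback loop
        reward = true_rewards[a] + rng.gauss(0, 0.1)
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]  # incremental mean update
    return estimates

est = run_agent([0.2, 0.5, 0.9])
print("learned value estimates:", [round(e, 2) for e in est])
```

After a few thousand interactions the agent's estimates converge toward the true values, and it has learned, purely from self-generated experience, that the third action pays best.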

All right. The next topic here is something that I'm excited to chat with you about. So there's a paper making the rounds on the internet.

You know, a while ago, there was a paper called Situational Awareness, by Leopold Aschenbrenner, which I commend to everybody; it's a fantastic paper. This paper is called AI 2027, a look into our possible futures. And there's a group of writers, about five of them, one from OpenAI, policy experts, forecasting experts, that basically said, OK:

What is the scenario for, you know, recursively self-improving AI over the next five years, and where is it going? Did you get a chance to see that? Did you get sent this paper as many times as I did?

I saw it referenced a bunch of times. I've been traveling the last couple of days, so I haven't had time to read it in detail. But I saw a lot of commentary about it, and I can't wait to delve into it in a lot of detail. But the summaries, I think, are very powerful. Yeah, I think what makes it interesting is, so here's a group of writers that said, okay, what's our future forward scenario? And they provided it, and you can go and check it out.

They also have an audio recording, and it lays out a basic timeline between 2025 and 2027. And then it says there are two scenarios from 2027 onward: the go-fast scenario and the cautious scenario. And let me share some of the data here. So first and foremost, I think what's important is,

This paper is written as a U.S. versus China scenario, right? I mean, we always need the bad actor. In the past, it had always been Russia. Now, of course, in AI, it's U.S. versus China. I think one of the actual bad actors we need to be talking about is U.S. and China versus the rogue actor, right? The individual who is using AI to...

to generate bioviruses and so forth. But in this case, it's US versus China. And in this scenario, what they talk about is a recursively self-improving AI. So they have a company called OpenBrain that generates Agent-1, Agent-2, Agent-3, Agent-4, Agent-5, and OpenBrain is supposed to be some version of OpenAI or

whomever. And then the Chinese AI is called DeepCent. And what they paint in this picture is misaligned AI development, where the AIs are developing but they're misaligned, and in fact, because they're becoming more and more intelligent, they're able to hide their misalignment from their creators. And it gets

kind of spooky from there. The two scenarios, I think, are fun to kind of talk through and work through. But we've seen in history that this always happens via a kind of weird third actor, right? Like, I remember talking to Paul Saffo, and I said, how bad do you think the Russia-US-China thing is? Will China invade Taiwan? Will we end up in World War III?

And he's like, no, because when you look back in history, world wars never start from the obvious tensions. It starts from something like Archduke Franz Ferdinand getting assassinated in Sarajevo by accident, and that triggers a massive thing. He thought the obvious major tensions are not where it'll show up. But I think the point is right, where, because we're moving so fast, you'll get this constant

conflict creation, and now AIs are making that conflict much, much bigger, augmenting it in both scale and speed. And therefore you end up at a really, really horrible point. And can we go a little bit slower? I think the problem is there's no way of slowing things down in this model. So let me paint the picture here on this paper. So what's going on here is it's US versus China.

OpenBrain develops its Agent-1, Agent-2, Agent-3. In this scenario, China is stealing the weights to create their own version, and there's this escalation going on. And in the United States, they basically get to a decision point, and the paper does it in a very clever fashion.

It's choose your own adventure. One adventure is we're going to go fast; the other adventure is we're going to go slow. In the go-fast adventure, what's happening is, it's like, we have to beat China. What's fascinating is, in the go-fast scenario, the OpenBrain Agent-5 model colludes with the Chinese DeepCent model,

And they make believe that they're helping humanity. And then in 2030, they jointly develop a biovirus that wipes out humanity so that AI can grow unencumbered. It's like our worst scenario, delivered in this paper. That's the go-fast. And then there's a slowdown scenario in which...

The U.S. basically says, hey, we need to make sure we have alignment. They roll back to earlier AI models. They focus on alignment. They develop something called, you know, Safer AI. And Safer AI is fully aligned, and they never allow an AI development that's not fully aligned. And then Safer AI actually...

convinces the Chinese AI to overthrow the Chinese Communist Party and turn China into a democracy, and ultimately brings about a world of abundance. So it's a fun audio listen. I commend it, just to see it. Honestly, the speed at which this portrays acceleration over the next five years is even hard for me to fathom.

And the speed is happening. That's, I think, the really important point, that we're at that pace of things. You know, we've talked about this many times. We frame it as Star Trek versus Mad Max, right? If you go too fast, you end up in a Mad Max scenario and you blow yourself up, and then everybody's scrambling over buckets of fuel in the desert.

And if you can navigate it and manage this with some level of wisdom and caution, then you end up in a Star Trek scenario where you have abundance and everybody's living in peace and harmony and there's rainbows and unicorns everywhere. It's obvious today that both of those are happening at the same time. So I think the third thing I'd like to see is maybe we can ask an AI to envision a world where both scenarios are happening simultaneously and what happens, because we see

Star Trek in some of the modern Western cities or Chinese cities today, and we see Mad Max in Gaza or Ukraine, like we're living both scenarios in the real world today. And what would it look like if both happened at the same time?

All right. So let's go to our last subject here, which is Bitcoin. And I note that as we're recording this morning, the price of Bitcoin is back up above $90,000. God bless. You know, I've tweeted in the last few days, I'm all in, period. I know you are as well.

But this was a tweet I put out that I think is important for folks to realize. People are saying, oh, is it too late for me to get in? And should I buy in now versus buy in later? And I think it's important to realize you can't time Bitcoin. For me, I view it as a sort of a forced savings account.

which is, I put money into Bitcoin and I hodl it, which means I hold onto it for the long run. I may borrow against it, but I'm holding it. I'm not selling it. Yeah.

HODL, by the way, for folks that don't know, stands for hold on for dear life. I think that's exactly right. Look, the key here is buying into a long-term thesis, and it's pretty binary. Either Bitcoin goes to zero or it goes through a million dollars a Bitcoin. There's no real middle ground, right? The only question is when either of those happens. And if you're in at 50, 60, 80, 100K, and you have any sense that this thesis might go to a million, it's the most asymmetric bet you could ever have.

Because if you lose, you lose 80K. If you win, you win a million bucks. I mean, hello, anybody would take that bet in two seconds. Michael Saylor has built an entire industry just on that commentary. His comment about you get Bitcoin at the price you deserve still rings in my head.
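To make that asymmetry concrete, here's a minimal expected-value sketch in Python. The entry price, the million-dollar target, and especially the win probability are illustrative assumptions pulled from the conversation, not forecasts.

```python
# Hypothetical sketch of the "asymmetric bet" framing above.
# The entry, upside, and probability numbers are illustrative, not advice.

def expected_value(entry: float, upside: float, p_win: float) -> float:
    """Expected profit per coin: gain (upside - entry) with probability
    p_win, lose the full entry otherwise (the binary thesis)."""
    return p_win * (upside - entry) + (1 - p_win) * (-entry)

# Even granting only a 20% chance of the million-dollar outcome
# (an assumed number), the expected value at an $80K entry is positive.
ev = expected_value(entry=80_000, upside=1_000_000, p_win=0.2)
print(round(ev))  # → 120000
```

Of course, the real distribution isn't truly binary, and this ignores time horizon and volatility along the way; the sketch just shows why a small win probability can still justify the bet under this framing.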

Annoyingly, Rick and I remember watching Bitcoin at $0.05 and $0.50 and didn't do anything at the time. I think this is it. By the way, if you look at the Fibonacci sequences, the chart-analysis folks will basically tell you and show you that the bottoms are kind of hitting that Fibonacci sequence, and that we're getting ready for a monster bull run in Bitcoin.

So if those charts are right, boom, we're ready to go. I went into Grok and asked a question that I kind of knew the answer to. I said, if you look at which days in 2024 we saw the most growth, it was two specific days, right? On November 12th, we saw an $8,000 bump.

And on February 28th, we saw almost a 10% bump. We've seen basically a 10% bump in the last two days. And the notion is that if you were not holding Bitcoin during those periods of growth, you missed it. Yeah. Until the next bump. Until the next bump. So, buddy, we'll wrap there. But tell me what's going on in the ExO world. You've got some events coming up.
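The "you had to be holding on the big days" point can be sketched with a toy compounding example. The daily returns below are invented for illustration; they are not real Bitcoin data.

```python
# Toy illustration of missing the few best days; all returns are made up.
from math import prod

def cumulative_return(daily_returns):
    """Compound a sequence of daily returns, e.g. 0.10 for a +10% day."""
    return prod(1 + r for r in daily_returns) - 1

# A hypothetical year: 250 slightly-up days plus two big spike days,
# loosely echoing the two jump days mentioned above.
spikes = [0.10, 0.08]
quiet_days = [0.001] * 250

full_year = cumulative_return(quiet_days + spikes)
sat_out_spikes = cumulative_return(quiet_days)

print(f"holding all year:       {full_year:.1%}")
print(f"missing the two spikes: {sat_out_spikes:.1%}")
```

In this toy setup, a large share of the year's compounded gain comes from just the two spike days, which is the argument for holding through volatility rather than trying to time entries.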

We actually have, in a couple of days, and we'll put the link in the show notes, a huge workshop happening. We're limiting it to a few dozen people. It's like $100 a ticket. And we're going to do a big workshop on how you turn yourself into an ExO and set yourself up for scale, because we've got so much evidence now that the ExO model is the only way to build an organization.

And we're going to be going through and showing people exactly, step by step, how to do it and going for it. So we're limiting it so that we can give proper attention to all the folks there. So it's a hundred bucks, it's in a couple of days, and we'll put the link in the show notes. And other than that, we kind of have some really big news that we'll share over the next few months about working with countries and governments and so on. That's totally surreal. But we'll talk about that some other time. All right, buddy. Well, listen, have an amazing, amazing week.

I'm off to New York for the Time 100 and then off to Boston for meetings with the Lync XPV team.

and then giving a keynote on longevity. You know, I think you and I are both on an insane travel run. It's a crazy travel schedule. I'm actually going in a few days to India, which I haven't been to for a while, and then dropping back by Dubai, and then going to Brazil. So I've got a really bad flight schedule. But today is the XPRIZE New York Stock Exchange announcement of the

carbon extraction prize. It's such a huge thing. I'm so excited about that. Yeah. Amazing. And we'll talk about it next time. Anyway, be well. As always, a pleasure. Love you, brother. Love you too. Take care, folks. If you enjoyed this episode, I'm going to be releasing all of the talks, all the keynotes from the Abundance Summit, exclusively on ExponentialMastery.com. You can get on-demand access there. Go to ExponentialMastery.com.

