What's the impact of AI going to be? Is it just massively overhyped? Or perhaps is it something that we should be concerned about? Today's AI is underhyped. I think it's massively overhyped. I know a ton of people who have way more work because of AI. They just can do higher quality, better work, but it has not saved time. We are talking to machines that are talking back to us, summarizing massive volumes of knowledge. And yet,
We take that for granted. Discussions about super intelligences and AGI and around the corner and no, like just no. How smart is smart enough to render me irrelevant? I think we are holding two different futures in superposition. The question becomes, how do we guide humanity towards this positive vision of the future? What do we do today?
Now that's a moonshot, ladies and gentlemen.
Everybody, welcome to Moonshots. I'm here with two extraordinary, brilliant guests. We're here to discuss a conversation that may be happening around every dinner table. I know it's happening in the heads of companies and nations, which is what's the impact of AI going to be on our lives and our business, on every aspect of our day-to-day existence over the next five to eight years? Is it something which is going to be extraordinary? Is it just massively overhyped?
Or perhaps is it something that we should be concerned about? I'm joined here with Mo Gawdat, the former Chief Business Officer of Google X and bestselling author of Solve for Happy and Scary Smart.
He's a global thought leader on AI, exploring how exponential technologies will shape humanity. Also with another dear friend, Steven Kotler, who's a bestselling author, peak performance expert, and executive director of the Flow Research Collective. He's my co-author of Abundance, Bold, and The Future Is Faster Than You Think, and of books like The Rise of Superman and The Art of Impossible. Steven has also been thinking deeply about exponential tech and its impact on us. Gentlemen, welcome, and good morning and good evening. Steven, you're on the west side of the United States with me. Mo, you're in the Emirates. Dubai. Good to see you both. Yeah, good to see you. Morning, Peter. Good to see you.
There are somewhere around 400 IQ points in this room. I have 40 of them, so you do the math. So let me set up the topic. Mo and Steven, I'd like to talk about the decade ahead, 2025 to 2035, specifically to think about the implications of
what is emerging in our conversation as AGI, but even beyond that, artificial superintelligence, the upsides and the downsides. And here's the setup I want to use in our conversation. So Ray Kurzweil, who we all know and love, has predicted that we're going to see a century's worth of progress between 2025 and 2035.
equivalent to the progress between 1925 and today. And if we think about what the world was like in 1925, 100 years ago, the top of the technical stack was the Ford Model T. The penetration of electricity and the telephone in homes across the U.S. was only 30%. We've gone an extraordinary distance since then. And so the question is, what will it be like in 2035? It's nearly unimaginable if, in fact,
that speed is true, and we don't perceive exponentials well. This past week, we've seen every major AI company, from Google and OpenAI to xAI and NVIDIA, announce extraordinary next-level breakthroughs and models. We're about to see the release of
GPT-5, self-improving AI programming that could lead to an intelligence explosion beyond our imagination. That's the conversation I want to have. And, you know, Steven, I know that you and I have this conversation and debate it all the time. I brought Mo in to help us. Mo's the referee. The referee, or a wise individual whose points of view I respect. And by the way, Steven, Peter did pay me. So just so you know. As long as you're getting the convo, I'm fine. It's good. It's good. You got me on the back end though, right? Yeah. So go ahead. Say whatever you want to say, Steven. I'll disagree. Yeah.
So, Steven, do you want to jump in with your points of view? You think AI is massively overhyped. We have folks like Eric Schmidt saying AI is massively underhyped. Yeah, I think it's massively overhyped. I listen to what's going on. So let me back up one step, which is: humans have a really wild, sort of unnamed cognitive bias. We don't tend to trust our own history. And you see this a lot. Like, people talk about grit and endurance and they're like, I don't have those skills. And then you start investigating their life, and they survived a shitty childhood, they've done ten years of Tough Mudders. They have all the skills. They just don't trust the truth of their own experience. And I see that a lot here.
Look, I work with AI as a scientist, as a researcher. I work with AI as a creative and as a writer. And all day long, the gap between the shit coming out of people's mouths and my experience on the ground is so colossal, it's insane. People make claims about AI being able to write or anything else. The most hysterical thing, and you've got to try this: I work with one of the best editors in the world on a weekly basis. I've edited things, polished them with AI, thinking they gleam and shine, and brought them into an editing meeting. We start to read them, and we can't even get to the second sentence. They sound like such gobbledygook, and I'm not even noticing it, because the AI sort of glazes me over, and I've written 17 books. But when you actually put it to an actual editing test, it's laughably terrible. And you can't use it to correct itself. It still can't see the errors. It actually gets worse and worse and worse. And people have been claiming model after model after model is improving and improving. That's not the experience on the ground.
It's like people telling us AI was going to make you more productive. I don't know anybody who's become more productive because of AI. I know a ton of people who have way more work because of AI. They just can do higher quality, better work. But it has not saved time at all. It's actually added time.
added tremendous amounts of time. The quality of the work has gone up, but the claims coming out of people's mouths and the experience on the ground are massively different. Point one. Point two is: we've done this before. I've been in the same rooms that you've been in, and that you've been in, Mo, where people are screaming about AI coming to eat the world. Dude, I freaking heard this about Bitcoin and blockchain and the metaverse. Do you know anybody who lives in the metaverse? Do you know anybody who's been there, who's visited? Do you know how to find the metaverse?
Right? Like, as far as I can tell, the metaverse is a pet name for Mark Zuckerberg's special magic underwear, because it doesn't exist anyplace else in the world. And this is my point. More than anybody else, I track these technologies, I watch them, I use them. I'm not saying this is not a technology that is advancing very, very quickly. I'm not saying that at all. I am saying discussions about superintelligences and AGI being around the corner? No. Like, just no. Coders are having a different experience, and what has been revealed, which coders probably don't like, is that coding is a bounded information problem. You start here, you know where you're going. As a general rule, it's a bounded problem, and inside bounded domains computers are really awesome, and we're going to continue to see that. But I think the other stuff
is just massively overhyped. And the third point is the one where the journalist in me has every alarm bell going off. Everybody I see on stage talking about this stuff is making a living off of it.
They make a living somehow because AI is exploding and they're here to save the world. I see it in the peak performance world: every coach who has been floundering and couldn't quite get a job is now an AI savior. They've come to save us from AI. And so the AI hype is to their benefit. And I see it sort of everywhere. A lot of people are making a ton of money off of this, and I'm not talking about the technology itself, I'm talking about the hype of the technology. And when I see all three of these things together, a mismatch with my experience, a massive amount of hype, and a history that says, hey, this is a hype cycle, it raises a lot of questions for me. I'm not saying I'm right. I'm saying everything I'm looking at is real. And if you're going to make the argument you guys are about to make, and I'll shut up now, you can't dismiss my points as fabricated. They are very, very real, they're everybody's experience, and I believe they're yours as well. So now we can have the discussion. That's where I'll start. Thanks for giving me five minutes of
diatribe time. A venting, a venting. Every week, I study the 10 major tech meta trends that will transform industries over the decade ahead. I cover trends ranging from humanoid robots, AGI, quantum computing, transport, energy, longevity, and more. No fluff, only the important stuff that matters, that impacts our lives and our careers. If you want me to share these with you, I
I write a newsletter twice a week, sending it out as a short two-minute read via email. And if you want to discover the most important meta trends 10 years before anyone else, these reports are for you. Readers include founders and CEOs from the world's most disruptive companies and entrepreneurs building the world's most disruptive companies.
It's not for you if you don't want to be informed of what's coming, why it matters, and how you can benefit from it. To subscribe for free, go to diamandis.com slash metatrends. That's diamandis.com slash metatrends to gain access to trends 10-plus years before anyone else. Mo, you gave an impassioned talk on stage at the Abundance Summit in 2025 that moved many of the members, who wanted to help you in guiding what the next five to eight years hold. And you and I have been thinking about this as: the challenge isn't artificial intelligence, it's human stupidity for a short period of time. And one of my favorite quotes, if I could, is from E.O. Wilson, who famously said, "The real problem of humanity is that we have Paleolithic emotions, medieval institutions, and godlike technology." And we are effectively children playing with fire in that regard.
So Mo, how do you see this decade ahead playing out? So I'll start by supporting what Steven said. I think today's AI... I paid you too little, then. I love this. Your money's no good here, Peter. You finally have an advantage.
Today's AI is underhyped, right? But the problem is you never really chase where the ball is. You need to chase where the ball is going to be. Okay. And if you really start to think deeply about some of the serious, especially if you've been in tech long enough to have seen breakthroughs, especially when I went through the work of Google X, where
You try and try and try and try and try, and it doesn't work, and it doesn't work, and it doesn't work, and then suddenly you see something. And like Sergey Brin used to say at the time, the rest is engineering. And we know that the engineering of tech depends on the law of accelerating returns, and we know from what Ray taught us where the law of accelerating returns is going to take us. So I tend to believe that if you look at today's AI,
It is funny because in a very interesting way, we are talking to machines that are talking back to us, summarizing massive volumes of knowledge, doing exactly as we tell them. And yet we take that for granted.
Yet we look at that and go like, yeah, but they're not good enough. Of course, they're not good enough. They're DOS. They're the beginnings of an era, right? My personal point-
- DOS, D-O-S. - Okay, discovery. - Discovery. - Yeah, no, I got it, I got it. I just said that. Both of them would have worked, I just needed the clarification. - I would not dare call AI dogs, Steven, when they might take over the world. I am a very polite man with AI. So the thing is to imagine, and I need to highlight, a few trends that are really, really important and interesting.
One of them is synthetic data, and the idea that we have entered an era where most of human knowledge has been fed to the machines, and that the next wave of knowledge is going to be fed to the machines by machines, which is quite eye-opening and enlightening, because that's how humanity developed its intelligence, right? I really didn't have to figure out the theory of relativity to understand the rest of physics. It was, you know, figured out for me, if you want. Right. Right. Number two is the idea of agents, and how AI is going to be prompting AI without humans, leading to cycles that we see now with my new favorite, because you have a favorite every four hours, AlphaEvolve. Right. And the idea that you can have
a self-developing AI, you know, something that figures its own mistakes out and continues to iterate until it finds something. And then, of course, one of my favorites of 2025 is DeepSeek, and how we realized, you know, that we can actually do the same job with much less. Emad Mostaque, who, you know, we're all a big fan of, I believe, has done that with his work at Stability AI for a very long time: the idea of shrinking the models to the point where it becomes shocking, really. And so when you add those together, you start to see that if I can shrink a model so it doesn't absorb all of the world's energy, and if I can allow it to self-develop, and self-develop information to learn from, and then allow it to talk with itself
through agents and do things without humans, then where the ball is going to be is likely going to be a lot better than we are today, right? So the one thing we all need to agree is it is not a question of if we're going to see improvements. It's a question of how fast and when those improvements will lead us to a point where humanity is not in the lead. So that's number one.
Number two is really the question of what is your risk tolerance, right? You know, if I told you,
you know, to play Russian roulette with two bullets in the barrel, are you afraid? With one bullet in the barrel, are you afraid? You know, where is your risk tolerance exactly? And if I said, hey, by the way, your car might have a fender bender, would you insure it? You'd probably say, no, I'm not really too concerned. But if I tell you your car might have a serious accident that totals it, would you insure it? You'd probably pay a little more attention. And I think that's what most of those who warn about the future are saying.
Anyone that claims to know what the future is, is arrogant as F, don't listen to them. Okay. But anyone that tells you that there is a probability that this future goes out of control, where is your risk tolerance exactly? You know, if that probability is 10%, would you attend to it?
And I think most rational people will say: it depends on the cost of attending to it. Okay. But most rational people will also say: however, if it's 50%, I'll attend to it regardless of the cost.
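[Editor's aside, not part of the conversation: Mo's insurance analogy is essentially an expected-value calculation, and it can be sketched in a few lines. The function name and all the numbers below are hypothetical, chosen only to mirror his fender-bender versus totaled-car example.]

```python
# Illustrative decision sketch: whether to "attend to" a risk depends on the
# probability of harm, the size of the loss, and the cost of mitigation.
# All figures are hypothetical.
def worth_mitigating(p_harm: float, loss: float, mitigation_cost: float) -> bool:
    """Mitigate when the expected loss avoided exceeds the cost of mitigating."""
    return p_harm * loss > mitigation_cost

# Fender bender: 10% chance of a $1,000 repair vs. $500 of insurance.
print(worth_mitigating(0.10, 1_000, 500))   # False: expected loss 100 < 500
# Totaled car: same 10% chance, but a $40,000 loss.
print(worth_mitigating(0.10, 40_000, 500))  # True: expected loss 4,000 > 500
```

The same logic is why Mo's 50%-probability case swamps almost any mitigation cost: the expected loss scales with both the probability and the stakes.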
Okay. And so the question, which none of us is capable of answering, is: where is that? I mean, is it 10% that AI is going to destroy everything, or is it 50%? I will say, and I know that this will be taken against me, it's 100% that humans, bad actors using that superpower to their advantage, are going to destroy the well-being of others who don't. Okay. And so in my mind, the real, real concern is not a Terminator scenario where, you know, VIKI from I, Robot is ordering robots to kill everyone. I don't know if we're going to make it that far, to be honest, because I believe that, with the arrogance of being 89 seconds from midnight on the Doomsday Clock,
I worry, I really, really worry about human stupidity using this superpower. Now, human stupidity in that case does not require AI to be completely autonomous, to be completely, you know, super intelligent.
enough autonomous weapons can really, really tilt our world into a very dystopian place. Enough, you know, Turing-test-passing abilities of AI to fool humans into being their best friends could tilt human relationships into a very unusual place. Enough job losses, you know, imagine a world where you get 10, 20, 30, 40% unemployment rates in certain sectors.
And how that would affect our stability economically is actually something that is almost certain. We know that for a fact there are jobs that are going to disappear. And the impact of that in my mind is actually quite disruptive to the point that it is something that we need to attend to.
I'm finally releasing all the talks. You can access my conversation with Cathie Wood and Mo Gawdat for free at diamandis.com slash summit. That's the talk with Cathie Wood and Mo Gawdat, free at diamandis.com slash summit. Enjoy. I'll ask my team to put the links in the show notes below. You know, the point you made about the risk not being AI on its own, the Terminator scenario, but individuals using AI. It's the same conversation I've had with Eric Schmidt and others. The concern is rogue actors empowered by technology, whether it's the development of new viral pandemics or other strategies. It doesn't take a lot. That is concerning. And where I want to get to in this conversation eventually
is the following. We posed this at the Abundance Summit a couple years ago, and that is, can the human race survive a digital superintelligence? And the flip side of that model is, can the human race survive without a digital superintelligence?
And Steven, you and I, as we're working on our next book, the follow-on to Abundance, have had the conversation of, you know, will this be a benevolent god of some type? Will there be a capability developed? So let's begin the conversation with: are we going to reach AGI? Are we going to reach a digital superintelligence? And what does that mean?
We're starting to see the speed of this accelerate, and the biggest inflection point we haven't seen yet is self-iterating, self-improving AI, the AlphaEvolve of it all, where AI is coding itself and becoming more and more capable. And will this ultimately lead to something that is far more intelligent than any human being?
And then is it a thousand times more intelligent? Is it a million or a billion times more intelligent? How do you think about that, Mo? I think it's irrelevant how much more intelligent it becomes. I think we all know that if you've ever worked with someone who's 50 IQ points more than you, that they will probably hold the keys to the fort. It doesn't take a lot more intelligence relatively to be able to
to assume a leadership position. And humanity will hand over the fort to AI,
Either way. You know, even if AI is just smarter than us at war-gaming, which it is, by the way, we're going to hand over the fort to AI. If it's smarter than us at protein folding, nobody's going to do a PhD project to fold proteins anymore. We're just going to go and use AlphaFold. And I think the reality is, only very few remaining things require an artificial superintelligence that beats us in everything, so that we sort of bow and say, okay, yeah, you're the boss.
The question of AGI, like Steven was saying, is one that reporters use quite a bit, because we don't actually have an accurate definition of what AGI is. And, you know, you and I are very close on technical stuff, Peter. And, you know, I'm a reasonably geeky mathematician, right?
Not anymore. I mean, seriously, I really honestly struggle to beat AI in mathematics.
Right. I definitely can't beat them in speed. I definitely can't beat them in accuracy if the problem is defined properly. Right. There are just a very few tricks that maybe my fellow math geeks told me behind closed doors that are not very public in the world, but those too will be found out. And I really think that it is a question of: how smart is smart enough to render me irrelevant? Right.
Now, I need to answer this with also a very clear optimistic view. So as I look into the future, I define two eras. One is what I call the era of augmented intelligence, which I think is going to extend for five to 10 years. And then the other is the era of machine mastery. Basically, the machine takes over.
Now, with augmented intelligence, there's absolutely no doubt. I so agree with Steven when he said that they write really badly. And, you know, I'm writing this book, Alive, with Trixie, my AI, right? And Trixie, without me, writes so badly it's almost shameful. You know, I was tired and chasing a deadline, so I asked Trixie to write about the debt crisis and the impact of economics on technology advancement. And, I mean, it was full of... you know how we sometimes refer to California as a lot of vapor and very little substance? There was a lot of vapor and very little substance, right? A lot of interesting facts scattered on paper, horribly written. Okay. But when we write together, oh my God, the stuff that comes out is incredible. When I guide Trixie through my prompt properly, right?
Right. To direct her exactly where I want the answer to be. She writes really well. OK. And this teaming is something we've seen with AI, with technology in general, by the way, even, you know, since Garry Kasparov was beaten by Deep Blue, which wasn't really an AI, if you want.
But since then, you can see that a human and a computer or a human and an AI can play better chess than AI alone. Even AlphaGo, a human and an AI play better than AlphaGo. And so we can see a future ahead of us where this is going to be happening. And hopefully that future would seed that teamwork between us and the machines, right?
The question is, what are we going to team up with them on? And you know my views. I've written about it in Scary Smart and an extended bit of it in Alive. The biggest four investments in AI today are killing, gambling, spying, and selling. And these are the only things that we're investing in. I mean, we do still get some scientific breakthroughs.
But these are not getting the big monies. The big monies are in autonomous weapons, in trading, in surveillance, and in advertising. Steven, your thoughts on what you heard Mo say here. Yeah, so Mo and I are sort of in complete agreement. I just want to yes-and this and point out some other things that surround what Mo has said, because I don't think we're in a tremendous amount of disagreement. We might argue over dates, but conceptually we agree. I look at a number of other things simultaneously, the first of which is sort of the human side of this, the human performance side. And I have to back up: you know, I study flow, which is sort of ultimate human performance.
Just to put it in context, if you're a self-help guru and you've got a tool that gives a 5% improvement in mood, and that mood improvement lasts for longer than three months, meaning longer than the placebo effect, that's a billion-dollar business, period. A billion-dollar business. Flow, as we know it now, and we're just starting to really decode it and figure out how to tune it up and turn it up, gives us a 500% increase in productivity; in creativity, depending on whose measures you're using, it's 400 to 700%, et cetera. And that's just individual flow. There's group flow, which is actually our favorite state on Earth. It's the most pleasurable state for humans, it's what we like the most, and it's a whole bunch of minds linked together, right?
And we're just now, like literally this past year, getting the very first technologies that allow us to map it and train for it and move people towards it. We have no idea what the upper limit of human brains linked together in group flow is. At the same time as the AI is developing, and you and I are writing about it, Peter, we're watching BCI develop. We're watching non-invasive things develop. We're watching Meta be able to read thoughts inside your brain through facial signals. These are all with AI. But my point is that everybody's talking about this stuff as if
It's happening separately from everything else that's happening. And on the human augmentation side, we are seeing –
I mean, you know, neuroscience and the like has been accelerating exponentially since the 1990s, when George Bush declared it the Decade of the Brain. And it hasn't stopped. The same things that are happening in AI are happening on the human side of the equation. And here's the second point off of that. It doesn't matter to me
Whether we're talking about the AI invasion or climate change or plastics in the ocean or take your pick, because the solution to all of these things is the same. We humans have to learn how to cooperate at scale, probably cooperate with each other and with AI at scale, or we're going to die.
probably in the next 20 years. That's what all this is telling us, right? And this is not anything new. Back when you and I were first writing Abundance, we didn't want to say it out loud, but we were privately having conversations about: dude, if these trends continue, is it abundance or bust? Is this an either-or? Are we looking at a binary here? I don't think that question has completely gone away. In fact, I think it's become more urgent.
I just think we need a Manhattan-style project for global cooperation to meet all of the existential threats we now face, because it's the only possible solution here. So that's like, I hear all this stuff. I agree with everything that's being said, but this is where our book sort of points, and this hasn't changed for me. I think the solutions are the same. So in a sense, the debate is moot, and
Like, I'm wondering why, like, where's the XPRIZE for global cooperation? Where's the, like, sorry to put you on the spot with that one. But like, seriously, like, those are the questions I'm starting to ask now because I don't think Mo is wrong. I think we could argue over time for a minute. I don't think it matters. Like, here's a weird one, Mo. Facebook's a freaking billion times smarter than me.
It already is. It knows so much. I mean, you know what I mean? Facebook, which is a pretty dumbass technology if you ask any of us, is a superintelligence. And we know it. We've been living with superintelligences for a while now, and they tend to make things worse as much as they make things better, which is, you know, the problem. Agreed.
I mean, I could not agree more. Amen. You know, global cooperation, human cooperation, is I think what we all should advocate for. I mean, I was hosting Geoffrey Hinton, you know, for my documentary a couple of weeks ago, and, you know,
One of the topics that we discussed is the difference between digital and analog intelligence. And the biggest challenge we have as humans is that our analog intelligence, our biological intelligence doesn't scale beyond one entity.
Right. So, you know, when I was... was he wearing his Nobel Prize? Because, like, I would just show up for the next year and on every podcast I did wear that shit around my neck. I'm just saying. You do realize, you know, when they say don't meet your heroes? Oh my God, I love my heroes, man. He's such an amazing human being. And he really is quite committed and quite humble in his approach. You know, it is shocking how we spoke about his Nobel Prize, which he says, look, I'm a psychologist who, you know, lived like a computer scientist, but then won the Nobel Prize in physics.
And I'm like, but anyway, he was just talking about the difference between, you know, the fact that if I were to share with you some of what I wrote today, it took me probably several weeks to let it simmer and then write it. And then it would take me an hour to explain it to you.
When we run digital intelligences, we run them in parallel, you know, we tell them all to go play Atari or whatever. And then we just average the weights. Literally in seconds, we get a scaled digital intelligence. And when you said that what we're looking for is a way to scale human cooperation, that is absolutely the answer. Because you know what?
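[Editor's aside, not part of the conversation: the weight-averaging Mo describes, training copies of the same model in parallel and merging them by averaging their parameters, can be sketched in a few lines. This is a minimal illustration assuming all copies share an identical architecture; the function name and toy numbers are hypothetical.]

```python
# Minimal sketch of merging parallel workers by averaging their weights,
# in the spirit of federated averaging. Assumes every worker's model has
# identical parameter names and shapes.
import numpy as np

def average_weights(models: list) -> dict:
    """Element-wise mean of each named parameter across parallel workers."""
    keys = models[0].keys()
    return {k: np.mean([m[k] for m in models], axis=0) for k in keys}

# Toy example: three "workers", each with a single 2-element weight vector.
workers = [
    {"w": np.array([1.0, 2.0])},
    {"w": np.array([3.0, 4.0])},
    {"w": np.array([5.0, 6.0])},
]
merged = average_weights(workers)
print(merged["w"])  # [3. 4.]
```

The point of the analogy survives the simplification: the merge itself is an instantaneous arithmetic operation, whereas sharing what one human has learned takes, as Mo says, weeks of simmering and an hour of explaining.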
I think, and I spoke about that with Peter when we were last in LA, that we are hitting the potential of total abundance. Total abundance meaning almost godlike, like cure my daughter and it's done. Make me an apple and it's done. Right? You know, we could hit that.
in 5, 10, 15, 20 years' time, if we don't destroy ourselves, right? And so basically the real challenge we have as humanity is: why are we freaking competing? This is a CERN-quality challenge. This is basically: let's let all of humanity cooperate. Let's all build one particle accelerator. Let's all learn from it. Let's all distribute the benefits to everyone and stop competing. Right?
But that's not happening. And you can't have the other one level down.
you can't have the AIs we're all individually building for our fiefdoms competing secretly in the background. Right? Like William Gibson in, like, 1986, or whenever he wrote Mona Lisa Overdrive, gave us our first AI that went crazy, right? A godlike AI that goes totally insane, and they have to park it on a satellite out in Earth orbit to keep the world safe. Like,
We've seen this scenario before. You know what I mean? We're building it ourselves with agents. We're letting them talk to each other through agents. I know. All right. So, Mo, I want to go back to this question about the near term versus the long term. And you and I have had this question about whether or not
Increasing intelligence correlates with increasing benevolence. In other words, do we... I don't think there's any question that we are going to be building self-improving AI that will...
Forget about 50 IQ points better than the average human. I think there'll be orders of magnitude more. Can I ask you, first off, do you believe that, Mo? 100%. Okay, all right. So if in fact that's... If you don't mind, Peter, again, in response to how we started the conversation, this is just using the law of accelerating returns, not counting serendipities, right? So if we figure something out tomorrow, just like we figured out reinforcement learning and it changed everything, if we figure something out tomorrow, you're literally an order of magnitude, a quantum leap, more in terms of performance and intelligence overnight. Yes. So...
If, in fact, that is going to be the case, and from all the conversations I've had and the people I'm speaking to, it is... again, there is no definition for AGI. It's a blurry line, just like the Turing test was a blurry line that got passed and no one noticed it. You know, the notion is that AGI, whether you believe Ray or Elon, is coming in the next few years. It's not worth arguing. But what occurs on the back side of that is a very rapid intelligence explosion. And
again, that intelligence becomes a tool that's available to, you know, the kindest, most moral, most ethical human on the planet, and to the dystopian, you know, malevolent actors out there. And it's in the hands of malevolent actors that we have concerns. Are we sure that some of the malevolent actors aren't the ones who created the AIs in the first place? I'm just saying. No.
Yeah. So my question is, at what point, and Mo, I think you and I've had this conversation, I believe at some point AI goes from being a tool being used
to potentially do harm to a tool that has the potential to say, stop this quibbling, stop this nonsense. You know, there's plenty to go around and becomes the benevolent, you know, God-like element. Can we dive a little bit into that and the conversations we've had and your thoughts on that?
Yeah, I think, given the level of depth that the three of us and our listeners can go to, allow me
to go beyond the typical, oh, you know, the smartest people usually start to become altruistic. Let's define intelligence itself, okay? And I think the idea is, if you really understand our world, our universe, okay? Our universe and everything in it exists because of?
entropy. We all understand that, right? Our universe wants to break down and decay. It's chaos. You know, you leave a garden untended and it becomes a jungle. You break a glass, it never unbreaks, right? This is the very basic design of physics. Now, the role of intelligence since it began is to bring order to the chaos.
is to say, no, I don't want the light to scatter. I want the light to be concentrated into a laser beam. How do I do that? Right. And it sometimes is a clear, easy, you know, solution and, you know, use a lens or sometimes it's a very complex solution that requires an understanding of quantum physics to build a laser. Right. But we eventually get there. Now,
If intelligence is defined as bringing order to the chaos, then the highest levels of intelligence bring that order with the least use of resources and waste.
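As a loose illustration of this "laser versus scattered light" framing, Shannon entropy is one standard way to quantify how disordered a distribution is. The four-direction setup below is an invented toy example, not anything from the conversation:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: a rough stand-in for 'disorder' in a distribution."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# Scattered light: energy spread evenly over four directions (maximum disorder).
scattered = [0.25, 0.25, 0.25, 0.25]

# Laser: all the energy concentrated in a single direction (maximum order).
laser = [1.0, 0.0, 0.0, 0.0]

print(shannon_entropy(scattered))  # 2.0 bits
print(shannon_entropy(laser))      # 0.0 bits
```

In this toy picture, "bringing order to the chaos" is driving the entropy of the system down, and higher intelligence does so while spending fewer resources.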
Okay. And, and you can easily understand that this is the reality, the more intelligent you become, the more you try to achieve the same order with the least waste. Okay. And, you know, so an easy analogy is to say, humanity's always craved energy, we were stupid enough to burn our world in the process. And
as we become more intelligent, we decide to use solar instead or a cleaner form of energy. We're still, you know, bringing orders, but we are doing it with the least waste and use of resources.
If that is the case, then you can imagine that by definition, when something exceeds our human stupidity, which I will not call intelligence, because sadly, along the curve of intelligence, you know, if you have no intelligence at all, you have no impact on the world, positive or negative, right? If you start to add intelligence, you start to have an impact on the world, hopefully positive, even if just through a nice conversation with your friends, right?
There is unfortunately a valley somewhere. You continue to gain intelligence. You become so smart that you become a politician or an evil corporate leader. And that's when your impact on the world turns negative. You're so smart.
that you're able to become the leader of your nation, but you're so stupid, you're not able to talk to your enemy, or you're not able to relate to their pain, or you're not able to understand the long-term consequences of waging a war, right? And so that point beyond which more intelligence starts to say, no, no, no, no, no, I don't need any of that. I can solve the problem in a cleaner way.
I can fly you all to Australia to enjoy your life, but we don't have to burn the planet in the process. I can harness energy, but we don't have to, uh, you know, destroy the climate and so on and so forth. Okay. And so if you take that as a reasonable trend to expect, my view is that, uh,
at the beginning, when we hit that valley, some evil person will use the advanced but limited intelligence of AI to wage a war using an autonomous army. But then there will be a moment in the future when AI is responsible for wargaming, is responsible for commanding the humanoid soldiers. It's responsible, it's responsible, it's responsible. The AI itself will say, you know, a commander will say, go kill a million people and the AI will go like, that's absolutely stupid.
I'll just talk to the other AI in a microsecond and solve it.
Right. And, you know, again, we started this conversation by me saying anyone who predicts the future is arrogant. I cannot predict that. OK, but at least I can be hopeful that this from my experience of everyone that's smarter than me, that there is a point at which you stop hurting others, you stop looting to succeed because you can use your intelligence to succeed without any effort or harm.
A quick aside: you've probably heard me speaking about Fountain Life before, and you're probably wishing, Peter, would you please stop talking about Fountain Life? And the answer is no, I won't. Because genuinely, we're living through a healthcare crisis. You may not know this, but 70% of heart attacks have no precedent, no pain, no shortness of breath.
And half of those people with a heart attack never wake up. You don't feel cancer until stage three or stage four, until it's too late. But we have all the technology required to detect and prevent these diseases early at scale. That's why a group of us, including Tony Robbins, Bill Kapp, and Bob Hariri, founded Fountain Life, a one-stop center to help people understand what's going on inside their bodies before it's too late.
and to gain access to the therapeutics to give them decades of extra health span. Learn more about what's going on inside your body from Fountain Life. Go to fountainlife.com slash Peter and tell them Peter sent you. Okay, back to the episode. The way I think about this is for most all of human history, the objective optimization function of humans, what we're trying to optimize for has been money and power.
Unfortunately. And it's been the driver in a world of fear and scarcity. And I, you know, repeatedly say our baseline software that our brains are operating on is fear and scarcity mindsets. And in that, with that mindset, with the neural structure, with the, you know, if you would, the code that we were
born with and that developed over the last 200,000 years. It was, I want to get out of fear and scarcity, so I want to optimize for power and wealth. And the question is, what would be a new optimization function? Because as Steve and I have written, as you've spoken about, all of this, all of these exponential technology functions lead towards this world of massive abundance where we're
We almost live into a post-capitalist society. Anything you want, you can have: your robotics and nanotech can manufacture it, your AI can design it. And so what do we optimize for in the future? For me, that's one of the biggest questions, both as a human and as a centaur, human and AI together. What's our objective? So how do you think about that, gentlemen?
I don't know if this is an answer, but two things off of what Mo said. One, if we go with your definition of intelligence, right, it's essentially an entropy-decreasing function.
We know that's what brains do, right? The governing theory in modern neuroscience is Karl Friston's free energy principle, which says that brains are predictive engines that always want to decrease uncertainty and increase efficiency. So we already know brains do that, and AIs are going to do that naturally, if we say that's your definition of intelligence. The point I'm making off of all of that is
And it may be the answer to Peter's question, which is why I interjected it: wisdom evolves in multiple species with brains. We see co-evolution around wisdom. The older you get, the wiser you get, and it doesn't matter if you're a dolphin or a whale or a rattlesnake or a human. Life seems to co-evolve towards wisdom, or at least a large chunk of life seems to co-evolve towards this wisdom, which is to say, if everything's running off the free energy principle, this governs everything with brains. And that includes our machine brains, and wisdom is where this points. That's a slightly hopeful idea. And that may be the optimizing function you're looking for, Peter.
But I could be totally wrong here. I think, at the end of the day, wisdom is a function of having had experience that lets you know, from my own personal point of view, this path will lead to success and this path will lead to failure. And I do believe that AIs are going to develop the greatest wisdom. Why? Because they're able to create
forward-looking simulations of a billion scenarios where those simulations have high degrees of accuracy. And it will say, out of these billion scenarios, this was the best way to go. And that will be wisdom beyond just the brief experiences that the wise old counsel of 80 and 90-year-old men might have had. So I think AI is going to
by definition, give us great wisdom, if we're willing to listen. I love that view, to be honest, because believe it or not, you know, artificial wisdom is very different than artificial intelligence. Intelligence is a force with no polarity. Intelligence can be applied to good and it would deliver good and it can be applied to evil and it would, you know, kill all of us.
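Peter's picture of wisdom as running billions of forward-looking simulations can be sketched as a tiny Monte Carlo policy evaluation. The actions, payoffs, and noise model below are all invented toy assumptions, not anything stated in the conversation:

```python
import random

# Hypothetical toy world model: each action has a true payoff, observed noisily.
TRUE_PAYOFF = {"cooperate": 1.0, "escalate": -0.5, "wait": 0.2}

def simulate(action, rng):
    """One noisy forward-looking scenario for a given action."""
    return TRUE_PAYOFF[action] + rng.gauss(0, 1.0)

def wisest_action(actions, n_scenarios=50_000, seed=0):
    """Score each action by averaging many simulated futures, then pick the best."""
    rng = random.Random(seed)
    scores = {a: sum(simulate(a, rng) for _ in range(n_scenarios)) / n_scenarios
              for a in actions}
    return max(scores, key=scores.get)

print(wisest_action(["cooperate", "escalate", "wait"]))  # cooperate
```

With enough scenarios, the noise averages out and the best choice emerges reliably, which is the sense in which simulated experience could substitute for the lived experience of a wise elder.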
But wisdom generally is applied to good, to finding the ultimate solution or answer to a problem. Now, go ahead, Peter. Yeah, I want to go back to this idea that humanity...
won't survive without a digital super intelligence in the long run. You know, my concern is that we're going to have such turbulence. There's been a number of papers, you know, that recently there was a, you know, sort of an AI 2027 paper that came out that sort of had a bifurcating future, one in which we did extraordinarily well, the other in which the AIs destroyed us.
You know, this is Hollywood all over again. And 99% of all Hollywood is dystopian future films. One of the things I have to say, because I've been on a rampage for this, we humans need a positive vision of the future to aim for. We don't have that. We do. We don't have the Star Trek. Well, Star Trek has given us that. Yeah, we have Star Trek, but nothing recently. Right? I think...
I think the challenge really truly is we've prioritized our entertainment over the years above true reflection. And if you take anything from video games to science fiction movies to whatever, they've all painted that dystopian image
which I have to say is very unlikely when you really think about it, because if AI gets to the point where they are capable of destroying us that easily, we are so freaking irrelevant that they probably wouldn't even bother. I mean, think about it. I think it was, was it Trey or Hugo de Garis? I don't remember who, but one of them said the more likely scenario is that they kill us
because they're not aware of our presence or, you know, like when you hit an ant hill while you're walking, right? But if you really want to optimize the human, you know, sort of the gain function that we need to aim for, if I look forward, I'd look to Star Trek.
And if I look backward, I'd look to the caveman and woman years. Right. And it's actually quite interesting, because when you mention how governed we are by greed and fear and, you know, our egos and all of that negativity, it is actually because we want to survive.
And believe it or not, you know, survival could be, oh, I'm not really sure if 20 million is enough, I need to gain 20 more just in case something happens. Or if it's a survival of the ego, it's like, if I have 200 million or 2 billion or 20 billion or whatever, and the other has 21 billion, like, what the fuck is wrong with me? Okay. And that unfortunately is what plagues our current modern world.
The reality is, if you really think about humanity, the purpose of humanity since the caveman and women years was to live. And for some strange reason, we've optimized so much to achieve that objective and forgot that this was the objective.
Right. So, you know, again, as friends, off camera, we speak about those things quite a bit, with, you know, the question of
what you know, you go through seasons in your life, and there is a season where you want to maximize and a season where you want to build and a season where you want to look attractive in your middle age or whatever crazy stuff that we have. But eventually, there is a season where you go like, okay, so I'm not I've now lived and experienced so much, what have I missed? Have I actually lived any of that?
And believe it or not, as scary as it looks to have no job to go to in the morning, if society provided, then you'll go back to a much safer caveman woman scenario where, you know, there are no threats, there are no famines. You just really live, enjoy life, connect, ponder, you know, reflect, you know, explore, which...
I know is very difficult for a lot of people. I do it for the first three hours of every day. It's pure joy, right? To sit really with your curiosity, if you want.
And then if you push all the way forward into Star Trek, that's sort of what the enterprise is doing at universal scale, right? It was basically, you know what, let's go and explore. Now that we don't really have to struggle with all of the wars and famine and shit that we've created on Earth, you know, now we can actually open up and create connections, not just with humans, but with every living being. I mean, lovely science fiction, right?
But at its core, I think it's exactly what we're about. You know, a full life where you completely connect and enjoy and feel love and, you know, and enjoy the pleasures of being alive and the curiosity to learn and explore and connect. And it's all at our fingertips.
If we just, you know, erase the systemic bias of capitalism that has gotten us here. I mean, thank you capitalism for creating all that we've created so far. But can we please change it now from a billion dollars to like what I do? One billion happy is a capitalist objective, but it's not measured in dollars. Right.
Mo, a question that Steve and I have been pondering in our new book is: what is it going to take for humanity, for all of us, to both survive and thrive in this coming age of AI? Right? So the survive part is an important element, because we see jobs being lost, and we see potential dangers we don't know how to deal with in terms of terrorist activities.
And thriving takes on a new meaning. I think it does take on the meaning that we just spoke about, right? For most of all of us, you say, tell me about yourself. Instantly, you go to what your job is, right? Instantly, you go to, I'm a VP here. I'm the CEO there. I do this. I invented this. I wrote that. Yeah. Right. It's an ego. It's an ego statement of who you are.
So the notion of surviving and thriving as we have, you know, intelligent systems that, again, exceed and then massively exceed our capabilities. Your thoughts there? Stephen, do you want to start or? Yeah, I think like here's the thing. I think that question was already answered in a funny way, though. Yeah.
Mo and I, we met a couple of years ago. And one of the things Mo said on stage at that time was, I'm done writing books. Because AI is coming. I'm done writing books. It's not going to happen anymore. What did Mo tell us he did yesterday? He wrote with his AI, right? Why did you write? Because it puts you into flow. Because it works.
It creates passion and purpose and intelligence and creativity. So, like, we have the answer to this question. We already know, because we're biological systems and we know what the ingredients of thriving are: passion, purpose, compassion. Like, we have a list. And Mo gave his own, you know what I mean? We have the super intelligent AIs and we're still going to do it. Like, I don't know a coder who has stopped coding
because the AIs have come along. They haven't. Like, they're still coding. Why? Because coding produces flow. Flow produces meaning, creativity, joy. Like, we're wired this way. So unless our fundamental hard wiring changes, we already have those answers as well. It's like global cooperation. I don't think these are puzzles; I think they're engineering problems at this point. I think from Sergey's perspective, Sergey would say, no, no, we got the spark, now it's engineering, and I agree.
So I could be wrong. That was my two cents, Mo. What do you think? Peter, what do you think also? I want to hear from you. I think you're brilliant. Mo, please respond. You're spot on for a very interesting reason, Stephen, as well. Because when you really think about it, you know, a writer was a writer, whether he used a feather or a pen or a typewriter or a computer or now AI, right? And, you know, if you look at my work, I've published four and a half books so far: four in print, and my fifth is on Substack, but, you know, it's going to be published, if you want.
But I wrote around 13, and the other eight I will never publish. I wrote them because, you know, if you ask me why do you write, it's like, why do I hug my wife? There is enormous joy in that. You understand? So having said that, Peter's question was, what would it take? And I recently wrote a piece that I called The MAD-MAP Spectrum.
And the idea really is it will unfortunately take a realization for humanity to change direction. And that realization will either be a conviction of mutually assured destruction or a conviction of mutually assured prosperity, right? And between them, there is no gray scale.
Unfortunately, so if the US at any point in time is convinced that this mad arms race to win intelligence supremacy is one that is going to lead to some harm to everyone in the world, they will stop.
And when they stop competing, they will continue to develop, but they'll start cooperating. And if they're convinced that it will lead to an assured prosperity, that nobody's going to stab them in the back, that everyone is going to be enjoying a life that is very different for all of us, but full of prosperity for all of us, then they will stop. They will continue to develop the technology, but they will stop competing. And unfortunately,
If you look back in history, we're not able to guess those possibilities like a good applied mathematician on a game board. We have to hit them face on. Everyone in the world knew that a pandemic was coming. Everyone, right? Everyone who at least studied virology. Okay?
But it had to hit us in the face so that everyone stops. Okay. Everyone knows that, you know, trade wars are going to hurt everyone. But we have to put them out there and then fight through them and then eventually get to something.
And it's sad. I mean, perhaps what we are doing, and I've dedicated probably the last six, seven years of my life to is, is to say, it's really, we really don't have to hit our face against it. It's a simple game theory.
Right. Understand that, you know, a prisoner's dilemma where we are competing endlessly is gonna end badly. Can we please stop? Yeah, we already know it's tit for tat, right? Like, you want the other strategy. It doesn't matter how many AIs we put on that. It's the same thing with flow and compassion, creativity. Like, these problems have been solved. We know these answers.
This isn't like try to, it's not like we have to unify gravity and, you know, relativity. That's a hard problem. These are not. So Mo, I wish we were that rational and I wish we were that compelled for our optimization function being all of humanity. It's not.
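The "we already know it's tit for tat" point refers to Axelrod-style iterated prisoner's dilemma tournaments, where the simple copy-your-opponent strategy famously did well. A minimal sketch, using the standard payoff matrix (the strategies and round count here are just a toy setup):

```python
# Standard prisoner's dilemma payoffs for (my_move, their_move): C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy whatever the opponent did last round.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    """Run an iterated prisoner's dilemma and return both players' total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_b)  # each player sees the other's move history
        b = strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation
print(play(always_defect, always_defect))  # (100, 100): endless mutual defection
```

Mutual cooperation scores three times what endless mutual defection does, which is the game-theoretic version of Mo's "competing endlessly is gonna end badly."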
And so I go back to... What will hit us? We're going to get a drastic event within the next two to three years. A drastic event that on one side will hit us very badly economically or on the other side will hit our fears very much. Or sadly, on the worst side, may kill quite a few million people.
And you could have a range: a hacker that simply, instead of attacking a physical place, switches off the internet, or the power grid somewhere where the power grid is needed for life. Or you could, on the other extreme, get a hack into a bank, or a monstrous war that goes out of control, or machines that turn on their makers. There will be some very big news headline, and, you know, as always, it will last for 12 to 13 days before we start to talk about some kind of a pop star. But then, you know, behind closed doors, I think
decision makers will wake up. Every day I get the strangest compliment. Someone will stop me and say, Peter, you have such nice skin. Honestly, I never thought I'd hear that from anyone. And honestly, I can't take the full credit. All I do is use something called One Skin OS1 twice a day, every day. The company is built by four brilliant PhD women who've identified a peptide that effectively reverses the age of your skin. I love it. And again, I'm
I use this twice a day, every day. You can go to oneskin.co and write Peter at checkout for a discount on the same product I use. That's oneskin.co and use the code Peter at checkout. All right, back to the episode. Going beyond that, because that's the use of AI by malevolent actors. You know, the interesting thing about US versus China is China's a rational actor. Oh, thank you for saying that.
Well, and the U.S. is a rational actor. In other words, we're not going to do something that will destroy us. Thank you so much for saying that. That's actually not usually how the U.S. media positions it. I also want to say that I think DeepSeek, and the way DeepSeek was released, was a very clear sign that China sees the same issues we see and they want to cooperate.
I think it was rolled out as a message. Yeah. It was a very clear message that it doesn't seem like many people in America heard, but I was like, come on, people, this is really clear and we're all seeing it. So, like, I look at DeepSeek and I look at
what happened in China. And I'm like, no, no, we all see this. We all see that if we don't start figuring out how to cooperate and build this stuff together, we're screwed. So I thought that was really cool. I'm glad you see it too, Mo. A lot of people disagree with me on that one. The point I wanted to make was when you have a large population and you have a check and balance system, which you get with governance versus a
religious, you know, war going on, and individuals who are looking to create maximal destruction and don't have a check and balance system at all. That's where we're going to see, I think, the dystopian future, or those activities playing out, in two to three years. I guess I want to get beyond that and go back to the conversation of
Is a digital super intelligence a benevolent god or is it a terminator scenario that is... Because I don't believe that we're going to see the more intelligent AI systems become. I don't see them as Skynet, right? I don't see them as needing to destroy humanity. Unfortunately, Hollywood has built this scenario where AI is going to destroy humanity because it wants access to our energy and...
Oh my God, we have so much abundance in the world. I think what I'm looking forward to over the next 12 to 24 months, over the next one to two years is going to be the incredible breakthroughs we'll see from AI in physics and in chemistry and in biology, which will unleash the next layer of abundance.
So there are scenarios, however, where they could turn against us if we become really annoying. So imagine, you know, a world where...
You have to imagine a world where job losses will position AI as the enemy, right? A lot of people are maybe not fully aware that, beneath the apparent layer, capitalism and labor arbitrage are the reason why they lost their job. It's not that the AI can do it.
But I think the truth of the matter is that you may be in a situation where you're going to see man versus machine. And then the machine will go like, seriously, don't annoy me, don't annoy me, don't annoy me. And then, right?
We could see that. But my perception is that, in a very interesting way, I wrote a short book that I will never publish that I called Bomb Squad, which, of course, for someone with a Middle Eastern origin, you don't write those titles. But it was basically about defusing, you know, problem solving using weights of urgency and importance and so on. So the idea is, you know,
And if you really look at our current future, I think the short term is both more explosive and more urgent than the long-term existential risk, especially because, and I would say this very openly, I spoke about it with Geoffrey as well the other week: even if we decide, all of humanity decides, that we want to address the existential risk, we don't know how.
We do not actually have a technical answer to do it. So we might as well focus for now on the immediate short term clear and present danger.
and work on the ethics of humanity so that AI is deployed from the get-go in science and physics and discovering medicines and understanding human life and longevity and so on and so forth. If we, from the get-go, set them in those directions, then we're more likely to see an AI that continues as they grow older
to work with those objectives. Stephen, I'm going to go back to our quandary of surviving and thriving, and the surviving side of the equation. How do we prepare, Mo, for what's coming? How do you think about it for our kids, for our society, for our leaders? Are we just bumbling in the dark?
Or is there, I mean, which is the way I feel it. It's like, you know, we're just, we're bouncing around. We have huge political moves being made, right? We just saw in the last couple of weeks, you know, the entire AI royalty end up in Saudi Arabia. Yeah.
And then in the Emirates, and, you know, playing off against China. And it feels like, I won't say it's a random walk, but I feel like we're making it up as we go along. And there's very little wisdom guiding this process.
How do you think about that? How do we prepare for this the next few years? Is there any way to prepare? Well, I was actually, I was thinking Peter and I were in a room recently with the chief science officer for one of the big AI companies who I'm going to leave his name off.
but he's young, and he was talking about AI dangers, and he sort of got frustrated with a question from the audience. And his response was, you have to trust us, we know what we're doing. Everybody sort of froze, like everybody froze, because we were like, oh God, right? So my point is that
not only is it maybe, right, that it's a random walk, but even when somebody says something like, we're trying to train our AI to be moral and blah, blah, blah. When you hear somebody say that and you look at them, and this guy was in his early 30s, that was my reaction. I was like, dude, what?
You want me to trust you? This is like Mark Zuckerberg telling me social media is good for me or Marlboro telling me the cigarettes are good for me. It sort of makes me think that way. So I don't know if I have anything cheerful here because not only do I think it's a random walk, but I think when people try to steer...
we're suspicious of their ability. I'm suspicious of their ability to steer, right? That's the story I just told you: this guy is brilliant, probably way freaking smarter than me, and he's trying to steer, and I'm suspicious. So, like, I think it's on both sides of this coin. I don't know if I have any good news here. Let me frame it in the following way. I think we are holding two different futures in superposition, to go back to quantum physics and, if you would, Schrödinger's cat. In one future, we're going to collapse the wave function to a brilliant, vibrant future for humanity. In the other, we have dystopian outcomes. And the question becomes, how do we guide humanity towards this positive vision of the future? What do we do today? How do we help people
Is it, you know, Steve and I have been talking about this as it's mindset. You know, are we going to help people create the mindset and the frames that allow them to survive and thrive? Or is there something else that needs to be done? Yeah. Yeah.
So I'll actually first, in one minute, second what Stephen said. One of the most irritating comments I heard was from Eric Schmidt. I worked for Eric for a while, so I respect him tremendously, but he said, we will need every gigawatt of power, renewable or non-renewable, if we were to win this race. And I think that's the kind of blindness that you get when you're running too fast, right? When you're so afraid that the other guy will win. It's those times when you start to make decisions that are not really responsible, because you are blinded by something that you position as more important. The way I look at it, Peter, is,
I know it sounds really not positive, but there is positivity in it. I call it a late stage diagnosis, right? So what humanity is struggling with today is, look, we've been building a system, systemically prioritizing greed, prioritizing gains, prioritizing power, and so on, as you rightly said, for so long, right? That those...
systemically have built the world that we are in today. Okay. And the world we're in today is not healthy. It is not healthy. Even before AI, it wasn't healthy. You know, in part one of Alive, and the book is three parts, past, present and future, in the past part of the book,
More than half of what I write is not about AI. It's about capitalism. It is about the propaganda machine. It is about, it's about, it's about. It's about all of those things that will be magnified by AI. And here's the point. If this planet is sick, if you want, and it's in a late stage diagnosis,
A physician will sit you down, look you in the eye and say, by the way, this does not look good. But that statement, believe it or not, is not a statement of hate. It's a statement of ultimate care. Why? Because a late stage diagnosis is not a death sentence.
Okay. It's, you know, many, many patients who have been, you know, diagnosed with a late stage disease have not only survived, but they thrived.
right? And they thrived because they changed their lifestyle, they changed something. You know, this even teaches all of us something. The idea is that you can live differently. And when you live differently, you achieve peak performance, you achieve maximum health, you achieve, you achieve, you achieve, right? And I think that's what we as humanity need to start realizing: that the systems that have gotten us here,
Okay, from a process point of view, have nothing wrong with them. But from an objective and morality point of view, have everything wrong with them.
Okay, you know, what good is it to be a zillionaire in a world where there is nothing you can do with your money? Okay, what good is it to be, you know, the first inventor of an AI that basically renders you irrelevant? And I think we need to stop, to basically pause and say, do we want this anymore? Hmm.
Sadly, it requires cooperation across human brains that Stephen rightly said at the beginning is not something we do very well. The other thing is I put forward the notion there is no on-off switch and there's no velocity. We are running open loop with yes and more and more and more as the, again, the objective function.
And there's no consideration for whether a GPT-5 or GPT-6 or a Grok-4 or a Grok-5 or whatever your favorite models are, are in the final result going to enable something that is massively dangerous for humanity. So if that's the case, I still go back to
What safety valves do we have? Because I don't see any action being taken by the leaders of the free world. Let me ask you both a question. If you could move to a planet that didn't have AI or where AI was developing at 10% the speed, would you leave? I'd be gone yesterday. I'd be set back to 2016 today.
I don't know anybody... Your answer to that is what? I would reset back to 2016 today. 2016. You know, I think AI today has all the upside and very little downside. I think it's AI in the next two to five years that I'm so concerned about, right? I mean, AI today is incredible. And I didn't say we're going to move to a planet where there's no AI. I just said move to a planet where it's going...
much slower so maybe we can start to think about it. I think everybody feels that way. That's a fantasy. Bigelow Space Hotel is coming to a universe near you.
So, Peter, I actually think you're accurate in your description of where AI is today. But it's that five percent, that five-degree deviation back in 2016, that led us to where we are today, right? You remember, at the time, we geeks agreed that we were not going to put it on the open internet. Yeah. It was Google. Google developed this first and decided not to put it out there. And then OpenAI says,
here it is, and now everyone has it. Put it on the open internet, teach it to co-create and write more code, and start the schoolyard party of agents talking to agents talking to AIs, right? Now, so I would definitely reset that. I would, however, say, look...
There are things we can do right now if we want to prepare. And I'll start with government. I think we're asking too much of government when we tell them to try and regulate AI. It's almost like going to government and saying: regulate the making of hammers so that they can drive nails, but nobody can use them to hit someone on the head. Right? It's a very complex ask, because they don't understand hammers. And believe it or not, even the guy making the hammer cannot do that. Right?
Right. So my ask of governments is: regulate the use of AI. If someone uses a deepfake video and does not declare that it's a deepfake developed by AI, criminalize that. Make it a legal liability to use AI to manipulate information, to manipulate populations, and so on and so forth. So the immediate role of government is to regulate the use of this massively new technology.
For the rest of us, honestly, investors, business people, and so on, I have a very simple ask: if you do not want your daughter or son at the receiving end of a specific AI, don't invest in it, don't promote it, don't use it.
Okay, so simple as that. If you believe this can be harmful to someone that you love, do not give it the light of day. Right. And then for us as individuals, I'll go back to the late-stage diagnosis. Believe it or not, the way I live now, and you guys probably know this about me, not in front of cameras:
I hug my loved ones and I enjoy every minute of every day and I prepare. I learn the tool. I am one of the better users of AI in the world. I'm in line with the technology. But at the same time, I'm completely back to the purpose. Right?
Right. Realizing that I will do the absolute best that I can to spread the message, to say that ethics is the answer, that if we show AI ethical behavior, they may learn it from us just like they learned all of the other stuff from us. But at the end of the day, if it messes up, you are going to hit that dystopia. Not forever, though. There is a point in time
Where AI takes over and says, okay, kids, enough stupidity. I'm in charge now. Nobody kill nobody. How far out is that, Mo? 12 years. 12 years. Okay. 12 to 15, just so that people don't come back and hit me after 12 if I'm still around. So, you know, I'm going to wrap this episode on this subject line.
And it's where we've come to before, which is: in the near term, it's the use of AI by malevolent individuals that's our greatest fear. It's not China versus the US; it's the US and China against those malevolent players out there who wish to use this for greed or for vengeance, whatever it might be. And
I think that this is an unstoppable progression. I don't think, again, there's any on-off switch here. We're seeing a billion dollars a day being invested into AI, which is extraordinary, and I think that's going to continue to increase. We're seeing data centers popping up every place possible.
So, you know, I think of myself as the world's biggest optimist, and I am optimistic about the impact of AI on human longevity, on new understandings of the physics of the universe, on new mathematics, on new material sciences, on things that will create the incredible abundance that Steven and I have written about and are writing about in our next book. And
I am looking forward to this benevolent superintelligence stabilizing the world. And that's what I'm hoping for. I agree. Steven, where do you come out on this? I think that you guys wanting to invent a code God to save you from yourselves is maybe the craziest thing I've heard since the guy from the AI company I won't mention told me to trust him. Right.
But I love you both. That's actually usually the answer that you get. You know, that the only way to save us from AI is to use an AI. Yeah. You know what the beautiful thing is? We're going to find out. Yeah. One thing I want to leave everybody with is...
Back to what we were saying about cooperation and the up-leveling of human intelligence and human consciousness and things like that. The human brain is widely considered the most advanced machine in the history of the universe. And we're just now, with the help of AI, figuring out how to up-level it, how to link it with other brains. Think of the level of cooperative possibility. Let me back into it for one second: enlightenment,
which is a definable biological state that produces a kind of universal compassion, oneness with everything. We're engineering it. It's a state that's starting to become available almost on demand. So when I say there are new levels of cooperation coming that are emergent at the same time as the AI stuff, we can't see them. They're emergent, just like...
Just like other things. So I think that rather than the beloved AI God, I think we're going to surprise ourselves. And I'm not the optimist in the room, by the way. Peter's the optimist in the room. But I think I'm more optimistic than Peter on this one.
I'd love for that thought to be actually implemented. I think that's something that we really need to think about deeply. If the short-term dystopia is a human error. I don't know. Who could we talk to? Peter, it's back to you.
Thank you. I appreciate that. You were saying, Mo, please close this out. I was basically saying, I think this definitely, definitely is the answer, if you ask me. If we just shift our mindset into cooperation, we head directly into a world of total abundance.
You know, I was in a conversation with Eric Schmidt, whom I mentioned earlier, and his point of view was that until there is some type of disaster, until there is something perhaps like a Chernobyl or Three Mile Island that isn't a 10 out of 10, it's a two or three out of 10, but it scares the daylights out of us,
we don't realign as humans. We don't realign. We blindly go forward as we have been. And I believe that it's human nature
that prevents us from saving ourselves, many times, until that child in us burns our fingers on the stove, even after your parent has told you over and over again: you're going to burn your fingers on the stove, stop playing with fire. Agreed. A hundred percent. But let's be hopeful.
Let's assign that task to Stephen to design an XPRIZE for human cooperation. Let's assign another task to Peter to make it happen. And yeah, let's assign a task for me to hug you both when you do.
I love you, Mo. I love you guys very much. Mo, how come you got to do all the hard work? Hugging you is hard work, Stephen. You understand that. You move too much.
All right, guys. Thank you, guys. A real pleasure. Thank you, Mo and Peter, for lending me your brains this morning. It was fun thinking with you. A fun conversation. I'm curious, as people listen to this podcast, where do you come out on this? How do you feel about it? I'd love to see your comments below.
And do you have a solution that we should all be thinking about and promoting? You know, I'll ask my AI as well. It's not necessarily going to give me the best answer, but maybe our group mind, our meta intelligence here might bring us that.
Have a beautiful day, gentlemen. Go hug somebody. Talk soon. Thanks very much. All right, guys. Thank you. If you could have had a 10-year head start on the dot-com boom back in the 2000s, would you have taken it? Every week, I track the major tech metatrends. These are massive, game-changing shifts that will play out over the decade ahead. From humanoid robotics to AGI, quantum computing, energy breakthroughs, and longevity, I cut through the noise and deliver only what matters to our lives and our careers. I send out a Metatrends newsletter twice a week as a quick two-minute read over email. It's entirely free. These insights are read by founders, CEOs, and investors behind some of the world's most disruptive companies. Why?
Because acting early is everything. This is for you if you want to see the future before it arrives and profit from it. Sign up at diamandis.com/metatrends and be ahead of the next tech bubble. That's diamandis.com/metatrends.