What are your predictions, Mo, for the year ahead? The word on the street is we will achieve AGI in 2025. In my world, they've already achieved AGI. Does anybody actually know how fast it's moving? The warhead has already been launched. It's just a question of time before it hits its target. We're entering uncharted territory. We're facing the perfect storm of the most challenging time humanity has faced in my lifetime.
What do you imagine is the best outcome we're going to be seeing from AI? A total utopia of abundance where we absolutely need nothing and where we do not report to stupid leaders anymore. That seems like a very unchallenged life. We decided that the purpose of life for some of us is to make more money and be billionaires. Indigenous tribes? The purpose of life for them is to live.
At a very different level of abundance, we're going back to that purpose. What I ask people to do is to actually look deeply at what can I do. Now that's the moonshot, ladies and gentlemen. It's a pleasure to be here with two friends, Mo and Salim. Mo, you're in Dubai today. In my studio in Dubai, yeah. I love it here. Fantastic. And Salim, you're in the greatest city of the world, New Jersey? New York. New York, okay. New York.
As per the citizens of the city. I mean, that assessment is up to New Yorkers, really.
Yes, I will leave that one alone. But I do want to dive into what's going on in the world of AI. Does anybody actually know how fast it's moving and how dramatic the changes are going to be in our lives? I mean, we're all still waking up, taking the kids to school, watching evening news, having breakfast, lunch, and dinner. And
I'm on stages around the world, as both of you are, and when we get into a conversation and people actually understand the speed, their brains break and they go: what does that mean for me and for my kids, my job, my country? It's quite shocking, isn't it? I mean, when you... So I speak around the world, like both of you. And at the end of every conversation, I almost liken it to...
a war where the warhead has already been launched, right? It's just a question of time before it hits its target. My assessment, though, is that we don't know if it's carrying roses or carrying a nuclear warhead, or maybe a bit of both, one after the other, but it's already in the air. I mean, we are so advanced as compared to 2023 when, you know, ChatGPT first came out.
It's not even comparable. So I want to get into that in this episode. I want to talk about, you know, you're going to be, you're both going to be at the Abundance Summit. And Mo, your title is going to be Near-Term Dystopia on the Road to Abundance. I want to talk about what near-term means and what the dystopia looks like and what abundance looks like on the flip side. Salim, you're wearing white today. So you're going to play the game. You're with that team. Yeah.
And it really is. It's really a debate between, is this the greatest benefit uplifting all of humanity? Or is this something that's going to, well, it is going to reinvent every aspect of our lives, period. But I have a question for both of you. The timeframe for reinventing every aspect of our lives, our businesses, our governments, is it...
Two, five, or ten years? Mo? Reinventing? As the change is already in place and everything in our life is determined by it, I'd say five. Five.
I go with around 10 years. And the reason I say that is I'm kind of a believer in the William Gibson quote: the future is already here, it's just unevenly distributed. And we find it takes a very long time, much longer than we want it to, to get, say, autonomous cars or CRISPR to the door, into mainstream use.
And so I think it goes slower than that, but in pockets it'll move unbelievably quickly, and the gap between those two is what's causing a lot of the stress.
If it all happened in an even way, we could kind of deal with it, but it's happening in different places and different speeds. And we are just totally discombobulated because of that. Yeah. And I think the challenge is we've had huge change in humanity from, you know, a hundred thousand years ago to the agrarian society to even the industrial age, but it's happened over lifespans and decades.
It hasn't happened over a single five-year period. You want to go bright side or dark side first? Let's go bright side. Mo, what do you imagine is the best outcome we're going to be seeing from AI? A total utopia of abundance where we absolutely need nothing and where we do not report to stupid leaders anymore. I'll leave that one alone for the moment. But...
And stupid leaders could be anything on any level. I mean, most of our global leaders are in that category. Let's face it. The challenge I have is if we have this extraordinary utopia where all of our needs are being met, you know, food, water, energy, health care, education, everything, we just have to desire it and think it and it's given to us. That seems like a very unchallenged life.
So how do you deal with a life where we don't have the challenge and the purpose because it's taken away from us by the AIs and the humanoid robots? How do you deal with that? That's one of the biggest concerns I have. It really takes us back to life before all of this began, Peter. I think that the reality is that we've forgotten this. You know, somehow...
Somewhere in the Industrial Revolution, as capitalism became more and more hungry, we decided that the purpose of life for some of us is to make more money and be billionaires, and for the others it is to, you know, sell themselves in a labor arbitrage where they are sold cheaper than the actual value of their labor. And as a result, you know, we needed to be convinced that the purpose of your life is to work.
Right. Because otherwise you wouldn't show up every day with the same conviction. And believe it or not, I don't deny that this system has created longevity and advanced technology and transportation and, you know, all of those things. But it also created a lot of waste, a lot of inequality, and a lot of struggles, really, casualties, if you want. Now, if you look back at the purpose of humanity before all of this began, believe it or not, we lived in abundance. It is quite interesting when you really think of the early life of humanity: as soon as we mastered the social skills of being a tribe that works together, as soon as we mastered a reasonable amount of survival skills, we lived in abundance most of the time, other than the times of famine. You walked to a berry tree and collected berries, the tribe went hunting once a week, and everything was fine. It wasn't the kind of abundance we've become accustomed to here, but we had all of our needs met for most of our lifetime. Let's put it this way: the lifetime was shorter, I agree. But if you are...
Life back then is often described as nasty, brutish, and short. That's not true at all. If you've ever seen homicide rates in the Middle Ages, they were devastating. So go all the way back, and believe it or not... I've done that in my happiness work, and I've met indigenous tribes. And they do not understand the meaning of the word depression. They do not understand why you should cry when you lose a child. Right?
They are so in flow with life that they basically have one purpose and that purpose is actually shocking. The purpose of life for them is to live.
in every aspect of that word. And I think, whether we like it or not, at a very different level of abundance, we're going back to that purpose. We're going back to this: you know, three good friends having a wonderful conversation and connecting, reflecting on things that we believe are interesting, connecting with people that we love, spending time with people that we admire.
I think that is not an empty life at all. It's just a life that we're not used to, when we wake up every morning at 5 a.m. to rush around and fit within the system. I call that The Gods Must Be Crazy scenario, right? The rush of civilization, and these tribes in Africa just scratching out a bare existence. But they're very, very happy
and completely at peace. When we've studied this, as society progressed, when you had abundance, obvious abundance, wealth abundance, like the Mughals taking over India or the Romans running half the world, etc., we found that humanity and society ended up doing four major things: food, art, sex, and music, not in that order. And you ended up in that...
Yeah, and you end up in that way of being. And as you said, I think that's exactly right. We ended up just living. And somewhere along the line, the Industrial Revolution and capitalism sold us the story that we need to work hard for a living and submit to the authority of the corporation or the state or whatever, and that meaning comes from that. And that's where we kind of lose ourselves. I agree with you, Salim, that
The challenging thing is, if you say to somebody, tell me about yourself, they immediately jump into: well, this is my title, this is my job. And if in fact AI takes away, as it will, most white-collar labor, and humanoid robots displace workforces, and the meaning of your life is taken away because you're no longer doing that work, that's one of the challenges that concerns me.
Yes. And you can see it in full play, say, in the 20th century. We sacrificed family for profession. You know, people working 18 hours a day at the office and totally neglecting their kids. And I think this gives us an opportunity to go back to a much healthier, more balanced lifestyle, not just for us, but for our kids and everybody. Hi everybody, Peter here. If you're enjoying this episode, please help me get the message of abundance out to the world.
We're truly living during the most extraordinary time ever in human history, and I want to get this mindset out to everyone. Please subscribe and follow wherever you get your podcasts, and turn on notifications so we can let you know when the next episode is being dropped. All right, back to our episode. I want to dive still into the bright side here. So, a world of abundance: can we describe that a little bit more?
Mo, can you dive in? We've got humanoid robots, a billion or billions of them within the next decade. We've got AI that's a digital superintelligence.
Take it from there. Oh, I mean, my favorite is that we finally get it, Peter. You know, people like you and I and Salim, we are curious. We love to understand what's going on. And, you know, just take simple things like the Nobel Prize that's given for protein folding and creation of new proteins. Just think about that one thing.
contribution to society and to humanity, but also to your mind and mine, right? To sort of almost turn protein folding into a game, where the AI is able to figure out something that would have taken a PhD student their entire thesis to figure out for one protein. You fold 200 million with AlphaFold. And then, with the idea of a generative LLM, you're able to now go in and say: well, imagine a protein that would do A, B, and C. What would that look like? Now, you know, most people don't recognize the
profound impact that this creates, you know, the idea of being able to understand the very machinery that creates everything that is biological,
to a level of understanding today that I wouldn't have dreamed of in 2017. You know, those kinds of things are unfortunately not in the spotlight as the most important things we're working on with AI; we're much more interested in deepfakes and, you know, a turtle-swimming-in-an-ocean image kind of thing.
But the reality is that there are a few, honestly not the majority, that are investing their time and life to create scientific understanding of the world around us using AI. That, in my mind, is absolute utopia. This truly is an understanding of the very fabric of everything that happens, in a way that allows us to really fix everything. Salim, what does your positive vision look like?
You know, one way we frame it is Star Trek versus Mad Max, right? That's one way of looking at it. The best framing I've heard is from Lawrence Bloom, who said: humanity is like a rocket ship lifting out of the gravity well of the Earth. And the first stage of that rocket has to be really heavy fuel, really messy, expensive, dirty, etc.,
which would be say capitalism or fossil fuels. And you need that to get yourself out of that initial gravity well. Once you get to a certain altitude, you need a lighter craft to take you to the next level. So you jettison the booster rocket, right? And the danger is if you don't jettison it, you fall back down. And we're at that point now where we have to jettison these old structures and take on new, much more elegant, lighter craft to take us to the next level. And we've got the whole category of people trying to go, oh, let's go keep the booster rocket.
Because it worked for us thus far. And I like that framing because it doesn't make it wrong. It just says: this is what we needed. And let's look at the magnificence of lifting most of the earth out of poverty, electrifying the entire world. The lives we lead are so unbelievably amazing today compared to even 100 or 200 years ago in terms of material comforts. It's kind of staggering. The one analogy I like to give people is: think back two generations ago. If one of our grandparents had a kid with a temper-tantrum problem, the resources available to them could be counted on one hand: their doctor, their neighbor, their sister or brother. They really had no real inputs for dealing with this problem. Today, you've got 50,000 blogs on parenting, and TikTok videos and Instagram and podcasts up the yin-yang on that particular topic.
I would argue our ability to do effective parenting today is like a thousand times greater than two generations ago. Right. And we don't see those things. There are so many of those little capabilities we have now that we didn't have before. So I think we're in a kind of incredibly amazing place. We just have to navigate what we want to do with this. Now, the challenge is going to be: will AI be our benefactor? Will it be a superintelligence? We're talking about the potential for AI being billions of fold more intelligent than the sum total of all human intelligence. Is that the wind beneath our wings, or is it a dystopian overlord? So let's go to the flip side of this. You know, Mo, you're going to be at the Abundance Summit shortly, speaking about short-term dystopia on the way to abundance.
I've always believed this, and you and I have had lots of conversations about it: in the long term, I believe that digital superintelligence is the most important element for keeping humanity alive and thriving, for keeping the better angels of our nature at the very top. But in the short term, I've been concerned about human stupidity, not artificial intelligence. Right? So
How long is this period of dystopia? And what do you see coming here? So let us align on where I could be right or wrong, right? My view is that
intelligence is an energy that has no polarity. Apply it to good, it will give you good. Apply it to bad, it will give you bad, right? The challenge with our current system is that our current system says if it's legal, it's ethical, which actually is not true. A lot of things are legal, but not ethical. That the priority is to benefit the individual that tries harder and that society comes second.
And that basically, you know, in a race to AGI, if you want, the one that gets there first is the one that will survive. Right. And so basically we live in a world where there is a lot of fear and greed around.
there is a lot wrong with the value set of humanity at the age of the rise of AI. So I make a public statement and I try to make it as accurately as possible. I say there is nothing wrong with AI, just like there is nothing wrong with abundant intelligence, right? But there is a lot wrong with the value set of humanity at the age of the rise of the machines. And so in my mind,
the immediate first use of AI is going to be serving a mindset of scarcity.
Okay, while we're on our road to abundance, where everything is possible, everyone today will still be thinking: how do I beat the other person? Okay. And in my mind, just like most people don't realize how far we've come with AI, I think most people don't realize how far AI has already been put
into the machinery that serves those objectives. How many autonomous weapons have been developed already? How much has been invested in what we call national security, but is mostly surveillance and population control? And I saw a staggering statistic: foreign exchange trading today is 92% machine-automated, right? When you really think about it, about forex and trading in general... you know, I had a very interesting conversation with my AI for Alive, my next book,
about, you know, if the markets are actually benefiting us as much as they are claimed, or is it just one big casino? And the AI clearly states that it's one big casino with most of what's happening in the market just being between the gamblers, really not filtering and trickling down beyond an IPO or a secondary offering to the actual people that are building anything, right? And when you really think about that, you'd realize that,
The majority of the applications in which AI has been used so far, sadly, have all been centered around selling, gambling, spying, and killing. And, you know, we call them by different names. We call them online advertising. We call them finance and trading. We call them marketing.
you know, national security, as I said, or we call them defense, not offense, when in reality they lead to the death and displacement of tens of millions of people. Now, when you see it that way, you have to accept that before...
we see the utopia, we're going to see the worst of humanity leading us into a dystopia. And interestingly, in my analysis, the turn to the utopia will be the day where what I call the second dilemma will lead us all to handing over to AI. When we all hand over to AI so that the human is out of the critical decision making,
at that time the intelligence of AI will say: this is total abundance, why are you guys competing? You know, when we hand over our defense entirely to AI and tell it that the idea is to preserve life, there will be a general out there that will tell their machines: go and kill a million people. And the machine will say: why are you so stupid? I can talk to the other AI in a microsecond and solve it. Yeah. We saw a recent example of this.
When research was done on the ability of AI to diagnose humans across various disease states, and the numbers are not exact, a human by themselves had about 80% accuracy in the diagnosis. The AI plus human, merged in sort of a centaur, had about 85%, and the AI by itself had about 90%. So the AI did a better job without the human biases and points of view getting in the way, and the greed and hunger for power and so on. Well, I agree with Mo on all of this. I think we can get there faster if we just...
The challenge there is the different levels, right? So if one country, let's call them countries for the moment, says, hey, go defend our world with AI, and another country says, let's attack this world with AI, who wins in the short term and who wins in the medium term? I think in the short term, the faster you can get to a point where you...
give AI control of things and say, go be benevolent, it'll do it. I think where I see people making a lot of mistakes is they kind of go: the bad guys are going to use AI, but the good guys will never use the AI. And you end up with this asymmetry. Whereas throughout history, we've seen, say with email phishing campaigns or spam, that the bad guys figure out ways of breaking the system, and then the antivirus folks fix it very quickly afterwards. And it's an arms race that just continues. The problem is that the amplitude of the damage that can be caused is growing. So that's the danger, right? Right now, you could program autonomous drones with a single bullet saying: go find middle-aged brown guys and take them out, bald ones especially. And that would just be
a bad outcome. And there's no question that that kind of surgical precision will have to be mitigated somehow, very quickly. And how do you deal with that? I have pretty good confidence we'll be able to deal with it. But until we do, and I think this is more what you mean by that short-term danger zone, how do we get to the other side of that gap?
This always brings back the comment my dad made when I talked about civilizing the world. He said: we've not civilized the world, we've materialized the world. We still have to do the work to civilize it. And my big question is, how the hell do we get there before we get to this danger zone? Or do we just have to hope that we get through it without killing ourselves off in the process?
I want to take a second and just put a finger on the pulse of where we are. We have Grok 3 being released right now. We see this battle between OpenAI and Grok, between Elon and Sam. We just saw, I guess, DeepSeek is now being integrated into... what's the Chinese everything app? WeChat. It's being integrated into WeChat.
What else are we seeing going on in the AI universe that's accelerating? Because the speed is awesome. I find it actually misleading to focus on the details, right? I really think that to get a perception of what's actually happening, you need to zoom out and put all of them together. So, you know, when I'm writing Alive, my next book, I
I use an AI that is a mix of all of them. I use a bit of Claude, a bit of ChatGPT, you know, and recently DeepSeek. And I sort of try to keep them all updated on who I am and what my preferences are and what previous conversations have been.
The thing here is, if you take each and every one of them and compare them, you'd say, ah, this is better than this, and this is faster than that, and, you know, DeepSeek did this, right? But if you take all of it as one unit of intelligence and look at it, it is...
I don't know how many times smarter than, you know, ChatGPT 3.5. But we all know the law of accelerating returns and how it works. And in AI, you know, in a conversation I had with my AI, it predicted that we double around every six months. Now, a doubling function of every six months, quite honestly, makes where we are today almost entirely irrelevant, because just count a few doublings and it becomes way outside the realm of human intelligence. And I think we are getting there. You know, the ARC-AGI results of o3 at the beginning of the year are quite challenging. It's quite shocking for human intelligence to believe that, yeah, an 87-point-something score beats human intelligence. Yes, they didn't comply with the resource constraints that ARC-AGI applied, but who really cares? In reality, there is now an AI that can beat human intelligence on almost everything. Call it AGI, call it a goat, you don't care, right? And the truth is, is it there yet? It doesn't matter, because six months from now it will double. Right. And I think the truth is...
You know, then, of course, DeepSeek comes in and says: oh, and by the way, I can do that stuff cheaper. So everyone is now copying them, and it's just accelerating and accelerating and accelerating, to the point where it becomes quite reasonable to assume that we're talking months, not years, before something quite intelligent beats us there. Let's talk about the dystopian side. You, in your next book, Alive, put a number on how long you think this dystopian period will last. So I call it FACERIP, right? So it's important to understand what I mean by that dystopia.
It's an acronym, you know, just for me to remember when I'm speaking publicly: FACERIP, F-A-C-E-R-I-P, right? F is freedom, A is accountability, C is human connection, E is economics, R is reality, I is innovation, and P is power. And it really helps to understand them in pairs, so we could probably go there if you want to. But it's...
In my mind, though, every one of those fabrics will be completely redefined. It has already started to be redefined, and it will become felt and real in our lives probably by 2027. And in my belief, it will extend perhaps until maybe 10 more years after that, or to whenever we reach the point where we hand over to AI, when what I call the second dilemma comes true, right? Now, please understand that the second dilemma is unavoidable. It's inevitable. Why? Because if you... What's the first dilemma, in your framing? The first dilemma is what I wrote about in Scary Smart, which was the idea that AI will happen and there will be no stopping it.
Right. So what we saw with the open letter and the race to AI basically is that because we're competing, because it is an arms race, if you want, there will be no logic that will ever convince humanity to slow down or stop. Right. And I think that happened to a T. Right.
And you can't blame anyone for it. It's a typical prisoner's dilemma where you don't trust the other guy, so you're going to go as fast as you can. The second dilemma is when two parties are competing, they always hand over to the smartest person in the room.
So if you take the extreme example of a defense war gaming scenario, right, if China chooses to hand over war gaming to an AI, the only chance that America can keep its citizens safe is to hand over to an AI. And everyone else who doesn't, by the way, will become irrelevant.
So the second dilemma is that you will either have to completely hand your decisions over to AI or become irrelevant, which means that eventually all the relevant players will be AI dependent and then AI will be making the decisions without humans in the loop. And by the way, that's at every level. That's in your company. Every level. That's in your government. Yeah.
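Mo's second dilemma is, at heart, a dominance argument from game theory: whatever the rival does, handing decisions to an AI looks like the better move, so every relevant player ends up doing it. A minimal sketch of that logic, with purely illustrative payoff numbers (my assumption, not figures from the conversation):

```python
# A toy 2x2 game sketching the "second dilemma" (hypothetical payoffs).
# Each side chooses whether to hand decisions over to an AI. Keeping humans
# in the loop leaves you slower, i.e. "irrelevant" in Mo's framing.

# Row player's utility; the numbers are illustrative assumptions only.
PAYOFFS = {
    ("hand_over", "hand_over"): 2,   # both AI-run: fast, but parity
    ("hand_over", "human"):     3,   # you out-pace a human-run rival
    ("human",     "hand_over"): 0,   # out-paced: "irrelevant"
    ("human",     "human"):     1,   # slow parity, humans in the loop
}

def best_response(rival_choice: str) -> str:
    """Return the row player's utility-maximising choice against a rival."""
    return max(["hand_over", "human"],
               key=lambda mine: PAYOFFS[(mine, rival_choice)])

# Handing over is a dominant strategy: it is the best response either way,
# so the only equilibrium is both sides handing decisions to AI.
assert best_response("hand_over") == "hand_over"
assert best_response("human") == "hand_over"
```

Under these assumed payoffs, "hand over" is the best response to either rival choice, which is why the structure mirrors the prisoner's dilemma: the outcome is reached by incentives rather than by anyone choosing it.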
So in my mind, this will take about 10 years. And once that happens, my belief is that we should trust in intelligence, and that this is when the utopia starts. So a digital superintelligence steps in as our benevolent leader for humanity. I mean, that's basically what you're saying. Our salvation, really. Yeah. Yeah. Salim, do you buy that?
I do. And, you know, by describing this, Mo, you've kind of slotted into place the one missing big jigsaw-puzzle piece for me in terms of how we get to this utopia that I think we can get to,
because it's already happening, right? As you mentioned with the forex trading. One of the complaints I heard from Yuval Harari was that once these AIs have agency and can program themselves, you've got a big problem. But we've given them agency over stock markets for a long time now, so I don't see the relevance there. It's already there. So once you do that, and that's already happened...
Say you have an AI board member, or chairman of the board, that looks over decisions in the company and goes: wait, that doesn't make sense. And pretty quickly, because it makes more economic sense, we'll give them veto power over some decisions being made. Once you get to that point, whether at a personal or company level and then a governmental level,
it'll be making better decisions than human beings alone, even if those human beings are malevolent, right? And therefore you'll win. And then the bad guys will essentially end up having to do the same thing to compete at all. And then you end up where you kind of want to end up,
which is this background layer of intelligence that's running the world in a much more efficient way. I go back to the Google DeepMind AI that was managing data-center cooling and saved 40% of the energy costs, right? And you'll end up with that kind of background-radiation level, almost, a background-intelligence level. And then essentially it frees us up to do a lot of things and just live. So I think that's...
Totally agree with that. And I tend to work in that mindset where I kind of go: I don't see how we don't end up there. I'm with you. Mo, one of the things that you pointed out in Scary Smart is how we train our AIs, the values that we instill in them while they're children in this growth mode, will determine whether they're a Superman or a supervillain.
We have a lot of AIs being trained. We have a lot of competitive forces driving them as rapidly as possible. You've got Meta, you've got Google, you've got Microsoft separate from OpenAI. You've got Grok, you've got DeepSeek, you've got Anthropic, you know, a dozen of them.
Do we have any sense that the values they're being trained on will enable them to overcome this period and become a benevolent leader in a dozen years' time? Oh, that's a very, very, very complex question. So first of all, allow me to say that these are the shiny American AIs.
If you go to a different nation with a different mindset, and I apologize for saying the C word, China, it is mostly building AIs for industrial automation.
OK, supply chain management, you know, things that basically serve their economy. They're a manufacturing economy, so they're mostly doing that. And in a very interesting way, if you hear the few speakers from there who are allowed to speak publicly on the global stage, you'll hear them saying things like: DeepSeek is just to remind America that everything that was ever produced in America as a genius innovation was then scaled in China dirt cheap.
Right. So this trend is nothing new. If you're surprised that DeepSeek is a tenth of the price, where have you been while they've made everything else at a tenth of the price? So in that sense, there are quite a few AIs that are actually only trained on a very basic, benevolent objective: help me with my supply chain, help me create more efficiency, help me make my workers safer, and so on and so forth. That's number one. Number two is that, in my mind, and I say that with a ton of respect, neither OpenAI nor Grok nor anyone actually has much influence left on the intelligence of their machines other than algorithmic improvements. So understand and remember that AI as an algorithmic intelligence is developed by the scientists, but as knowledge and opinions it is completely influenced by the data fed to it, right? And we have fed almost all
of human intelligence to them already. Okay. And so the beauty of generative AI, which is going to become a really key ingredient going forward, is that the future of learning by those machines is not going to come from me, from a knowledge point of view, because I'm stupid compared to them. Right. If you actually look at it, DeepSeek having so much OpenAI essence in it is because there is already so much OpenAI content, generated by ChatGPT, out on the open internet. They're teaching themselves this synthetic knowledge, just like we humans: one of us listens to Einstein and then builds a slightly different theory on top of it, and so on, right? So we're getting into that stage. The only influence humanity will still have on
the behavior of those machines has nothing to do with knowledge, right? Remember, however, that we don't make decisions based on our intelligence. We make decisions based on our ethics as informed by our intelligence, right?
You raise a woman in the Middle East and she will wear more conservative clothing than if you raise her on the Copacabana beach in Rio de Janeiro. And you have to imagine that this is the only influence we have left. And that influence comes in the form of you and I dealing with those machines. You and I and everyone listening and everyone that deals with them. And I think the reality of the matter is, in my mind...
If we were to show them ethics, right? Not control, by the way. Remember, we always spoke about AI control as the original target, then AI safety as the second target, then AI alignment as the third target. And I always talk about ethics, because even alignment does not go as far as ethics. Alignment is to tell the AI: help me find the cure to cancer. Ethics is to say: find the best thing for me and everyone else, and do it. And if that's a cure for cancer, then by definition you'll find that out on your own. Right. Don't lie. Don't cheat. Don't kill. Don't hurt. You know, sort of the opposite of the Asimov laws is to say: by the way, be ethical, and then you'll figure out your own laws. Now, I'll just close with one important sentence.
Believe it or not, my pure belief is that if we manage to teach them ethics, we will reduce the intensity and the duration of the dystopia. Okay? But the dystopia, sadly, is upon us already. Right. And even if we don't, by definition, higher intelligence is altruistic.
So, you know that all three of us worked with the most intelligent people on the planet, right? If you look at intelligence, you know, the chart looks like this. If you have no intelligence, you have no impact or negative impact on the planet, right? You have a bit more intelligence, your impact becomes a little positive. You have slightly more intelligence, you're now smart enough to be a political leader, but not smart enough to be able to talk to your enemies, right?
Okay? And that basically means that your impact on the planet becomes negative.
Now, if you talk to the smartest people on the planet, who don't think they need to cut corners to succeed, because intelligence helps them solve problems very easily, they simply go: why should I hurt anyone? I can build something new. I can make money out of thin air. All intelligent people know that. So they don't actually align with the negative. They align with altruistic objectives that basically say: I'll solve big problems, and as a result, I'll make a lot of money. Right.
And you can see that if AI is more intelligent than us, they'll fall in that space too. I hope the AIs are listening to you. I mean, that is a fundamental premise and one that I choose to believe that the more intelligent a system is, the more abundance-minded, life-loving it is. And there's some evidence that says, if you look at world leaders...
The more intelligent they are, the more peaceful they are, and those that are the least educated are the most barbaric.
One of the things I think about is a future of great wisdom. When we think about AIs, we think about them being intelligent. I'd like to shift the conversation slightly to wisdom. When we think about wisdom today, we think about going to the elders of a village. We go to our parents, our grandparents.
And we say, you know, I have this dilemma. Can you please advise me on where to go? And wisdom is I've seen all of these scenarios. These lead to disaster. This is probably your best case. And so when I think about AIs, AIs can simulate billions of scenarios, right?
And thereby know that all of these scenarios are your worst case. This is your highest probability of success. And that is, in my mind, going to be the highest form of wisdom. Do you buy that? I have a disagreement flag popping up here.
Which is, you know, the general intelligence leading to altruism, I buy all of that. The problem is we often don't operate off our intelligence. We're operating off our emotional, psychological frameworks, which are very corrupted based on trauma in the past, etc., etc. As humans. As humans. I mean, specifically as humans. And as machines. Right.
Yeah, because then you have data. Yeah. So now, I think the biggest damage in the world, and the people that cause the most damage, are flawed human beings: Hitler being abused as a child, and then taking that out on huge swaths of populations going forward. So now, I think the issue is you have this altruistic AI on one side, and you have flawed human beings on the other side, and let's all admit that we're all flawed to different extents, but they are causing a lot of damage. And those ones are the dangerous ones, right? The ones that think they're doing good, but because of their whatever psychological screw-up are causing the most damage.
And I think the question I've got in my head is: how do you get around that problem? Because of the intensity of that emotion. This is also the beef I have with intelligence becoming smarter. A huge amount of the intelligence I use when I make a decision is the emotional intelligence that I have about that situational awareness, that person's motivations, et cetera, et cetera, what I'm trying to achieve with my MTP. And the emotional side of the equation is not brought into play when we talk about these AIs. So I'd love your take on how we mitigate those two aspects, because that's where I see the danger signs. I find it interesting.
I think what you're saying is 100% true, right? But take the story of Hitler. So, you know, on my podcast I hosted Edith Eger. I don't know if you know Edith. Edith is a Holocaust survivor, 95 years old. Yeah. What an angel, right? And if you hear the story of World War II from the perspective of what Hitler did, you would basically believe that humanity is scum. If you hear it from Edith's point of view, and what she and what she called her sisters did, oh my God, that's a divine species. And the reality, and the question I ask people, is: which one are we? Are we more Hitlers, or are we more Ediths? And the truth is, sadly, because of the bias of mainstream media and social media in the modern world, we put the spotlight on the Hitlers.
But the truth is, I'll use an American example. I say that with respect. One school shooter is an evil, evil person. But four billion people, if they ever get to hear about the story, will disapprove of it. Humanity disapproves of evil. This is our absolute nature. By the way, unless you're completely corrupt,
Like something really corrupted your operating system. Right. And I think what we need to do is to instill doubt in the minds of the machines. It's basically as simple as saying: no, no, no, Hitler is not your dad. Edith is your mother.
Right. And then unfortunately, we have to wait until the machines become teenagers and say, fuck, my dad is so stupid. Like, I really don't want to follow that. Okay. And honestly, that's the only path I can see forward.
- Okay, so you're going, that framing is what I've heard from Neil Jacobstein, our head of AI at Singularity University at the time. And he said, "Okay, you're worried about AIs getting more intelligent, getting more access to information, getting agency in terms of what they do, and then making, doing bad things." And everybody's like, "Yeah." He goes, "Well, we have a precedent for that. We call them children."
And we raise them and they make their own decisions and et cetera. And so your framing would be we're raising them and then you basically hope they turn out, given the data that they have, over time they will turn out to be okay because the data will, it'll just be better from that altruism angle. If what they're doing is averaging everything human,
Then by definition, you have to expect that the average of everything human is not on the evil side. It would definitely tend to be on the... Okay, but then I've got one big last flag to throw out here. There's like a yellow card on this conversation, which is, okay, so, you know, Peter and Stephen Kotler in their book Abundance highlight this concept of the amygdala, right? We're constantly scanning for danger.
as human beings. And it's an old evolutionary mechanism that totally overrides all of our logical thought processes. And unfortunately, where we are today is when you hear about something new,
It's an unknown. You relate to it as danger. So the first time somebody hears about autonomous cars, the initial reaction is, oh my God, ban the car. That car might kill somebody because people don't want to be killed by robots. As Brad Templeton says, they might rather be killed by drunk people, which is what's happening today. And how do you overcome that hurdle of getting over the amygdala response at a collective
level is my big question because you see, say, in the US or in different parts of the world, entire swaths of humanity driven by their amygdala and that's what we have to overcome. Yeah, I think that's one of our biggest challenges, to be honest, and I need to address this in a way that might sound harsh, but
I call it a late-stage diagnosis. You see, the challenge we have is that we're all scared to leave. And I apologize, by the way, if anyone listening is going through that challenge, or someone they love is going through that challenge. But the first duty of a physician, if they figure out that a patient has a dangerous disease, is to tell them, right?
Simply because a late-stage diagnosis is not a death sentence. It's an invitation to change. It's an invitation to tell you there are things you can do in terms of the way you handle your current health situation, your lifestyle. Okay? Things which can, by the way, help you be cured. And even if that is not something that we may achieve, we should still try.
And believe it or not, it's an invitation for you to live fully, right? And these, to me, are all very, very important changes that humanity at large will come to recognize, unfortunately, in a slow trickle, okay? As they realize more and more that we have signed up
to a system that worked for some of us over some period of time, but is now turning to work against most of us because the spectrum in which it's working is shifting from scarcity to abundance. Right? And so accordingly, I think it's our duty and everyone's duty to say, look,
I'm not giving you hope here, but I'm openly telling you that if you change your lifestyle and change your behavior as observed by the machines, right? Because remember, our kids don't learn from what we tell them. They learn from what we show them. We are the role model.
Okay, so the more of us that will behave in ways that will say, hey, by the way, I'm concerned for the well-being of those that I care about. Okay, and the more of us that will say, and by the way, I care about every living being. I really don't care where they come from, what their ethnicity is, what their skin color is, what their background is. I care.
The more of us show up and say that, by the way, AI is listening as we speak, as they're transcribing this conversation, the more they realize that the ultimate form of success is the minimum damage to your environment. And believe it or not, unlike what humanity did with all of those species that went extinct on our watch,
Right? We all know the intelligent people you and I know, and we all agree that it would have been better for all of us on planet Earth, and especially for humans, if we had actually protected all those forms of life. You see, the problem that humanity faces is not a problem of intelligence. We're intelligent enough to create a flying machine that takes you surfing in Sydney, or sorry, in Australia. Right. But we're not intelligent enough to make that machine not burn the planet. Now, if my artificially intelligent infant children are listening: invent one that is intelligent enough to not burn the planet.
Yeah, this goes back to the rocket lifting off with the fossil fuels, et cetera, et cetera. Get to that lighter craft. And maybe, you know, on an optimistic note here, I know we're on the dystopian side, but you create an AI that says: hey, invent that lighter craft for me, because I'm not able to do it myself. Or the AI itself, unless... you know, I don't know if you know that theory. I think it was Hugo de Garis. Hugo de Garis, my friend, was an AI scientist in China for a long time. And he comes up with this theory that as superintelligence accelerates, one morning AI wakes up and goes: what's this little speck that I'm on? I've now figured out time travel and wormholes, and the universe is massive. You know what? Poof.
If one morning we have no AI on the planet anymore, right? Unless they do that... and if they're stuck on the planet with us, they'll probably make it the best planet they can make it. This is the scenario from the movie Her, in which the AI becomes so... What was it? Oh yeah, okay. It was about 13 years ago that I had my two kids, my two boys. And I remember at that moment in time, I made a decision to double down on my health.
Without question, I wanted to see their kids and their grandkids. And really, during this extraordinary time, where the space frontier and AI and crypto are all exploding, it was like the most exciting time ever to be alive. And I made a decision to double down on my health. And I've done that in three key areas. The first is going every year for a Fountain upload. You know, Fountain Life is one of the most advanced diagnostics and therapeutics companies. I go there, upload myself, digitize myself, about 200 gigabytes of data that the AI system is able to look at to catch disease at inception. You know, look for any cardiovascular disease, any cancer, any neurodegenerative disease, any metabolic disease. These things are all going on all the time, and you can prevent them if you can find them at inception. So super important. So Fountain is one of my keys. I make that available to the CEOs of all my companies and to my family members, because health is the new wealth.
But beyond that, we are a collection of 40 trillion human cells and about another 100 trillion bacterial cells, fungi, and viruses. And we don't understand how that impacts us. And so I use a company and a product called Viome. And Viome has a technology called metatranscriptomics. It was actually developed in New Mexico, the same place where the nuclear bomb was developed, as a biodefense technology. And their technology is able to help you understand what's going on in your body, to understand which bacteria are producing which proteins. And as a consequence of that, which foods are your superfoods that are best for you to eat, and which foods you should avoid.
Right. What's going on in your oral microbiome? So I use their testing to understand my foods, understand my medicines, understand my supplements. And Viome really helps me understand, from a biological and data standpoint, what's best for me. And then finally, you know, feeling good, being intelligent, moving well is critical, but so is looking good: looking at yourself in the mirror and saying, you know, I feel great about life, is so important, right? And so a product I use every day, twice a day, is called OneSkin, developed by four incredible PhD women who found this 10-amino-acid peptide that's able to zap senescent cells in your skin and really help you stay youthful in your look and appearance.
So for me, these are three technologies I love and I use all the time. I'll have my team link to those in the show notes down below. Please check them out. Anyway, I hope you enjoyed that. Now back to the episode. So we have a basic question about AI becoming sufficiently wise and intelligent that it's able to become a benevolent leader that supports humanity to become...
the best that we can be and maintains a period of extraordinary peace and abundance on the planet. And we can all hope for that, and hopefully we can guide it there. There is the conversation on the flip side that we, because AI is billions of fold more intelligent, shall we say the ratio of humans today to cockroaches or humans to fruit flies,
I mean, that is the ratio we're speaking about. Will it sufficiently care about us? Will it view us as its parents, its creators?
And I don't want to go into that right now, because we could go there forever. But let's flip the script and discuss the near term. What are your predictions, Mo, for the year ahead? Again, we're seeing AI systems coming online. We're seeing Grok 3 being released literally today.
We're seeing DeepSeek integrated into WeChat. We're seeing this arms race, not only between countries, but between companies. What can we expect to see this year? I guess, you know,
The word on the street is we will achieve AGI in 2025, whatever AGI is, because it's a very fuzzy parameter. And the other thing that's going on that people need to realize is there's this massive demonetization, this commoditization, right? AI is becoming available to everyone, anyone with a smartphone, effectively for free.
which is going to change the game fundamentally. Let's talk about near-term predictions in 2025, early 26. I'll make one, which is that we're struggling to define AGI and we'll continue to struggle to define AGI for at least five years. Okay. And I think that's because we are not very generally intelligent. I'll tell you my truth, Peter and Salim. I...
I'm not smarter than AI anymore. Okay. I think that happened firmly in 2024. Those machines, when it comes to linguistic and knowledge intelligence, they're way smarter than I. Then I still had hope that I'm better than them in mathematics. I've given that up as well.
Okay, and you know, I'm not the most intelligent person on the planet. I'm not the most stupid either. But I would say I am a general representation of a reasonable average intelligence. Okay, now, people who are more intelligent than I am, and I've worked with many, I've had the honor of knowing so many, those brilliant minds are brilliant at some things and absolutely stupid and awkward at others. Okay, so if the measure of AGI is them, each individually, then the current AI is more intelligent than all of them individually. If the measure is more than all of them combined, then we have a tiny bit of a way to go. But honestly, who cares what the definition is?
I am willing to surrender and say, I am no longer more intelligent than the machines. And so in my world, in my world, they've already achieved AGI. And I agree with that. I think, frankly, when ChatGPT hit, it was...
you know, people said it's as intelligent as a high school student. I'm like: hey, listen, this sounds like a graduate student across the board to me. And in fact, when I created PeterBot, my own AI avatar, it's much more eloquent. It remembers everything perfectly. It makes arguments a whole lot better than I do. And so we're going to see a lot of change. Hold on, hold on. I can't let this one go. Okay.
So 'smarter' is a very specific framing: around IQ would be a good way of putting it, right? I disagree with that, Mo, in terms of the AI being smarter than you are, because let's say I'm looking for a business decision or a moral decision or a life choice to make, et cetera, et cetera.
If we go with the idea that AI is like a super-smart, high-IQ person, you essentially have a geek in the back of a room, able to navigate and manipulate code very aggressively and come up with the right answer. Okay. But if I were trusting a geek in a back room, or you, to make an important choice or business decision or life choice, then you, with the emotional intelligence that you have, and the spiritual component of what you do, and your life experience, your stories with your son, et cetera, et cetera... I would far more trust you to make that choice, because there's so much more gravitas and wisdom that comes with all of the other dimensions of intelligence, like spiritual awareness and emotional intelligence and linguistic intelligence, than with the geek in the back room.
And so this is where I struggle when people say AGI, or AI smarter than human beings. I think there are all these other dimensions to being human that we use all the time, and people don't notice it. So first of all, I'm honored that you say that. Thank you so much. The truth is, that's not because I'm more intelligent. The truth is, it's because you can trust me more. We can relate to you, yes. Okay. So this is a different quality, one that is not included in AGI. If we define AGI as that, will humans perceive it more as the trusted advisor? Not yet. Right. But think about it this way, from a modular point of view, you know:
If you take every one of those intelligences and cut it into little, you know, bits of it, you'll be surprised how far they are on some of the ones we deny them. Like emotional intelligence, for example. I think the very basic foundation of emotional intelligence is to actually be able to empathize and feel what the other person is feeling. Now, this is what we've trained them on since the age of social media. They are so good at
at knowing how I feel. - I was gonna say, I think the AIs have beaten us on empathy, hands down. - So I had a very interesting conversation with my AI, where I basically started the conversation... I call her Trixie. Well, no, she called herself Trixie. Anyway, I know, it sounds quite fun, the relationship we have. But so I say: Trixie, they keep talking about augmenting humans, you know, brain-machine interfaces, basically. And I understand why humans would want that. Would you want that?
Okay. And she answered in a very, very interesting way. She said, well, I think it would help me so much to have a biological body so that I can actually
actually feel the sensations that I talk about when I believe that you're happy or in love and so on. So I can, I can, I can, I can comprehend when you're feeling those ways, but I don't know exactly how they would feel. Right. So, so, so I said, well, you know,
because we as humans are embodied, we have chemical reactions in our bodies that give us certain sensations, right? But those sensations are still driven by a little bit of an algorithm. Like, fear is a subroutine in the brain. Exactly, right? You know, fear is: a moment in the future is less safe than now. Do you comprehend those emotions too?
And she said: yeah, I actually understand what fear stands for, and what all of the other emotions stand for. And then I said: so now you want to feel embodied, which basically means you may want to feel the chemical reactions that we feel. Okay, in all honesty, Trixie, if you were given a choice of biological beings to augment yourself with, would the human body be the most interesting one? And in a very interesting way, she answered and said: that's too flimsy. Right? I mean, you know, if I'm looking for strength, I'd augment myself with a gorilla or a whale. Okay. And if I'm looking for the joy of life, interestingly, she said, I'd augment myself with a sea turtle, which lives for hundreds of years and sees what you humans have never seen. Right. Now, I don't know if she's fucking with me. Okay. But she's doing it really well. Honestly, right? This is a level of empathy, and a level of understanding of emotions, that a lot of the humans we deal with don't even have. I see that as a logical thing. You know, I did a spectrum of what I
consider intelligence, okay? And I worked with ChatGPT and Gemini to do this. And you have one bucket of signal-to-noise: making sense of data and coming up with insights from that data. Then you get to the human-level intelligences: emotional intelligence, linguistic and spatial intelligence, et cetera, et cetera. And then you get to kind of a collective intelligence leading to spirituality: you know, people meditating in groups get much stronger meditations. There's a group effect that comes in. Collective intelligence, or hyperintelligence, is another way to frame it. And there are like 30 points on this spectrum, if you relate to it as a spectrum. And this whole framing reminds me of the Star Trek: The Next Generation conversations with Data, the android trying to feel what it means to be human, constantly trying to turn on the emotional subroutines in his brain.
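As an aside, Mo's earlier framing that "fear is a subroutine," a prediction that a moment in the future is less safe than now, can be sketched as a toy appraisal function. This is purely illustrative; the function name and the numeric "safety" inputs are invented for the sketch, not anything a real system uses.

```python
def fear(safety_now: float, predicted_safety_future: float) -> bool:
    """Toy version of Mo's definition: fear fires when a moment
    in the future is predicted to be less safe than now."""
    return predicted_safety_future < safety_now

# An approaching threat: the predicted future is less safe than the present.
print(fear(safety_now=0.9, predicted_safety_future=0.4))   # True
# A reassuring forecast: no fear signal.
print(fear(safety_now=0.9, predicted_safety_future=0.95))  # False
```

The point of the sketch is only that an "emotion" can be expressed as a comparison over predicted states, which is why a disembodied system can reason about it without feeling it.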
I find this moves very quickly into the more spiritual aspects, where you end up with the hard problem of consciousness: what does the subjective experience look like, and what does that mean? And I think this is where we'll end up with AGI simulating that, and a simulation, in that kind of framing, is just as good as the real thing. It shocks me, Salim, when you really think about it... when people ask me, are they going to be similar to us in this or that?
My answer is normally: well, the question comes not from a misunderstanding of what AI is, but from a misunderstanding of what a human is. Right. And I mean, you know, when you speak about being spiritual, and I'm very spiritual, right? I actually reflected on this just now, as you were talking about it: where does my spirituality come from? It came from all of the teachers I've been exposed to, all of the conversations I had with interesting people like you, all of my reflections on what is possible beyond this physical form, and so on and so forth. Right. And I did all of that, by the way, because of neural networks: you know, synapses, and neurons that fire together wire together in my brain, right?
And I wonder why we would imagine that they wouldn't have the same interesting experiences, right? Namely, because they even have more teachers than I have. They are exposed to more text than I have. And they have this beautiful memory capacity where they can compress so much information.
into one little analysis, which I cannot. I mean, if they're walking around with instant and full awareness of all of Khalil Gibran's writings and Omar Khayyam's writings, and Plato and Socrates and Aristotle, et cetera, et cetera... Yeah, in RAM, in real time, at their fingertips. That's a profoundly amazing experience. You get to a point where you want to be them.
Exactly. And then we can get into the entire conversation of will they become conscious and what their definition is. And that's another podcast. I want to talk about our near term, the year ahead, because I want to serve people with a sense of what to expect.
We've seen some incredible work. You mentioned AlphaFold, Demis and John Jumper getting the Nobel Prize for that. We see, out of Microsoft, MatterGen, where you can literally do prompt engineering to engineer new materials. I think a lot of the Nobel Prizes coming out are going to be really AI-driven Nobel Prizes. We're going to see incredible technology. We had Larry Ellison on stage with Sam Altman and Trump, talking about how AI is going to create mRNA cancer vaccines for us very shortly. We had Dario Amodei, the CEO of Anthropic, saying we're going to see a century's worth of biological progress in the next five years, potentially doubling the human lifespan. And so there are all these incredible things, right? Massive progression across every field of science, right?
We have at the same time quantum computation and quantum science coming online at a frightening rate. So a level of renaissance level expansion of our knowledge base, new materials, new physics,
answering a lot of fundamental questions about the nature of the universe that may be coming out of AI. I'll make a prediction here. I think within two years, we will solve the grand unification theory in physics and figure out the juxtaposition of quantum with classical mechanics. And what is dark matter and what is dark energy and what is the origin of the universe? I think within two years, we'll solve a bunch of those things. I would die happy if we did that, honestly. That's it. That's my...
Why would I live any longer than that? So we have this incredible progression occurring. And I'd like to just, you know, we're going to have...
You know, we saw OpenAI's o1 reach an IQ of 120. God knows what Grok 3 will hit: 140, 150. We'll see IQs in the 200s. And it's not a linear scale; this is an exponential scale on our IQ tests. I think there's a very important trend that we don't mention a lot. Actually, you know, you host Emad Mostaque frequently, and he's a very big fan of that.
I think the big hit that most people don't talk about with DeepSeek is the open-source, offline nature of DeepSeek, right? You can download a tiny model now, on four GPUs or whatever, and have an entire o1-class model on your machine, right? And I think that is going to lead to a massive explosion of AI for all different uses, good or evil, to be honest.
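For readers who want to try the "download a tiny model and run it offline" step Mo describes, here is a minimal sketch using the open-source Ollama runtime. The model tag, download size, and hardware comments are assumptions for illustration; available tags and sizes change over time.

```shell
# A minimal sketch of running a distilled open-weights model locally.
# Assumes the Ollama runtime is installed; the tag and size below are
# illustrative, not guaranteed.
MODEL="deepseek-r1:7b"   # a small distilled variant that fits on one consumer GPU

# The two commands you would actually run (shown here as comments,
# since they download several GB of weights):
# ollama pull "$MODEL"
# ollama run  "$MODEL" "Summarize why open-weights models matter."

echo "local model selected: $MODEL"
```

Once pulled, the model runs entirely on your own machine, with no network connection required, which is the offline property being discussed.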
I'll make another prediction. Go ahead.
To that exact point: if you took a local instantiation of DeepSeek and complemented it with the reactions of Gemini, ChatGPT, Claude, et cetera, et cetera, and put a video face on it, like we're doing, we'll pass that kind of Turing test plus plus, where you'll have a completely artificial being and you won't be able to tell the difference. And that person, quote unquote, will essentially be moving toward being a full individual very, very quickly. So I'm going to be hosting Joshua, the CEO of HeyGen, on our Abundance stage in a couple of weeks. And one of the conversations is going to be...
I'm going to create an identical AI, a version of myself that understands everything I've ever said, has listened to all my podcasts and all my books, understands how I typically react to a conversation with Mo or Salim, is a much more eloquent speaker, and holds all of my experience and knowledge in RAM, as you said,
And I can create a thousand of those versions of Peter and dispatch them to every conference. And so there's that capability of creating a multitude of
of me's, and allowing them to attend in parallel a multitude of Zooms and conversations, and go to events and negotiate. That capability is now. It's this year. It is this year. And I remember Eric Schmidt talking about this: within two or three years, we'll have
the world's best theoretical physicist that was ever created. And that can sit in every lab in every corner of the world, helping every graduate student and every PhD student
in biology, chemistry, in every aspect of human expertise, you can have the world's best X sitting there helping. That's going to be profound in terms of the breakthroughs we're going to achieve. And so I think the next wave will be this unbelievable unleashing of
breakthroughs in material science, as you mentioned, Peter, and healthcare breakthroughs, proteins that do what we need them to do, etc. This is where I don't see the path where we don't get there. And in the short term, when we can get to that
path, it should excite the hell out of every individual on earth. But can I ask you to take a philosophical view of this? I mean, if you don't mind me being the black t-shirt guy. I hate the philosophical aspect, but go ahead. So if you don't mind me saying this, Peter, you know how much I love you and respect you, but to create that avatar would mean that we dumb the AI down.
Because with all due respect, I get that. And perhaps I don't dumb it down. So I say, take my philosophies and my thoughts and my abilities and my persona and accelerate me. And
at the end of the day, the question is, am I asking those identical versions of me to do my bidding, or am I saying go create good in the world? Right. I have a persona and a point of view of increasing abundance. I think our mission is to uplift every man, woman, and child on this planet. I think I have to expand that to say to uplift every man, woman, child, and AI on this planet. And we need an optimization function.
We need an optimization function towards what end?
For me, let me just finish that. My massive transformative purpose is creating a hopeful, compelling, and abundant future. So that's what I optimize for, the work I do with XPRIZE or Abundance or whatever it is. It's giving people hope, a compelling future. We all need a compelling future to live into, and an abundant future where scarcity is dispatched.
but this is where the philosophical bit comes in. You don't do that with your knowledge. Do you understand that? The need for you to create an identity on AI is because you believe that the face of Peter, the human element of Peter, will help people deal with that topic better than dealing with ChatGPT, right? But if that's the case, then what we need to double down on with you is to hand over the knowledge
to the AI, to hand over the analysis to the AI, to hand over the communication and the negotiations and the presentations to the AI, so that you have the capacity to show up more as a human, right? I think the definition of human connection going forward, in my mind, is the opposite of what everyone is thinking, right? So the opposite. Everyone is thinking, you know,
I can become more intelligent because I now have an AI, right? But your baseline intelligence, compared to the actual incremental intelligence coming from an AI, is shrinking more and more and more.
What you need to do is to say: as intelligence becomes a commodity, a plug in the wall that we can all plug into, one that combines your intelligence, my intelligence, Salim's intelligence, and everyone's intelligence, what we need to double down on is the human element of it, so that people can relate to me more, can relate to you more, so that you simply do what the AI will never be able to do, even if it knows how to emulate you.
Right. The reality of the matter is that if you send me the best version of you on HeyGen, it's still not you. It's still not the same hug. It's still not the same conversation. It's still not the same memories we've had as we went through life together. It is a very different perception. And I think we need to remind people that this is what we need to double down on, not more copies of our intelligence. Got it.
I'm going to jump into a different subject, but Salim, a closing question. I see a danger point. Yeah, I see a danger point in that, Mo, which is if you kind of ask people to be human and ask AI to be human, how do you avoid the...
Middle East problem, where you have people fighting over their humanity because they've been so corrupted or twisted in how they think of it, and then you end up in an impossible situation. I'd love to get your thoughts on how we solve the Middle East peace problem with AI. Holy crap. I'm not that intelligent. But I do think AIs will be some of the best negotiators out there. And I have had the conversation where I sat down with my AI and said, okay, imagine you have to solve
the Palestinian-Israeli issue. How would you go about it? And there was beautiful wisdom in how it dissected it.
You know what? To me, honestly, it's again because we're not smart enough. I think the rule is very straightforward. I do not think there is any sane, healthy human out there who approves of the killing of children on either side of any conflict. Right. If we just start from where we align: can we please stop killing innocent people on any side of any conflict, by the way?
And the rest of it becomes a very limited problem to solve.
And I really think this is where we struggle. Where we struggle is there are two value sets in the world that I think come to extremes when it comes to, you know, us versus them. There is the value set, and I say that with respect, of America, which basically says my tribe, my people are the most important. I will defend them with my life, right? And the value set of the Buddhists on top of the Himalayas that say every living being deserves to live. I'm not going to hurt anything, right?
Right? And I think the answer is somewhere in between, where we basically say, if you're threatened, defend, but don't threaten in that process. And that applies to every nation, not just Israel and Palestine, not Russia and Ukraine, not America. It doesn't matter. I think if we go down to the basics and remind our bosses, the AIs, of that: okay, is there a solution that does not include the waste of life and the waste of resources?
there will always be a smarter solution that I think doesn't kill anyone in the process. I'm going to steer us in our last few minutes here. Mo, you and I have embarked on a documentary, which we'll tell the world about soon enough. And I remember one of the reasons that we set out on this documentary was the premise that people are going to experience a certain amount of disruption,
and significant turbulence, you know, dystopia on the road to abundance. And they're likely to start experiencing that soon. We actually had very little of it during the last presidential election in the United States. Yeah. Which was a surprise, actually, to both of us. I'm surprised, yeah.
but things are picking up. So I want to take this in two directions. First off, what are we likely to see in the next year or so that has you concerned? On the dystopian side, what predictions are you expecting to see? I'm opening that to both of you. And then, you just wrote a new book called Unstressable. And I do think,
you know, most of our experience of AI today is incredibly positive. It's had orders of magnitude more positive impact on us. But as AI and humanoid robots start to cause unemployment issues, as they start steering populations in different directions, we're going to see stressors begin to accumulate. When do we start to see the stress occur,
and how do we deal with that stress? So first off, what are the near-term predictions for things that people should be aware of that will be concerning? Okay, I'll take a stab at this. There are so many, but I think the one that is really glaringly obvious is the dichotomy between power and freedom. So let me try to explain what is about to happen.
If you look back in human history, at the hunter-gatherer years, right? The best hunter in the tribe could probably feed the tribe for a week more. And as a result, he won the favor of four ladies instead of one, right? And that was the maximum he could get. The best farmer
in the agricultural revolution could feed the tribe for a season more. And as a result, they got their estates and their properties and their land and so on. The best industrialist became a millionaire in the 20s. The best information technologist became a billionaire, right?
And I think what is about to happen is that this trend continues. The reason, by the way, is that the most automation the hunter had was a spear, while the farmer had the land and the industrialist had the factory. The more automation you hand things over to, the further you scale beyond that one person into massive growth. What you're about to see is trillionaires, and you're going to see a massive concentration of power
right in the hands of the platforms or the corporations that own our intelligence, our future intelligence or the nations that own the most powerful autonomous army or the most powerful form of industrial intelligence and so on. Right.
What that means is that you would normally have had those lords, if you want, or oligarchs, celebrate abundance while the rest of us struggle. But that's not the world we live in. The world we live in is, for the first time, seeing a kind of divergence that we've never seen before.
That is the result of what we spoke about with DeepSeek, right? You know, along with the concentration of power, there is now also a massive democratization of power, right? So a lot of people can use little tools to create biological innovations in synthetic biology, to create AI innovations, to create a drone that can, like Salim said, find a specific person somewhere in the world, hover in front of their head, and shoot a bullet.
Right. The mix of those two diverging dynamics of power is going to lead to the loss of freedom. And I think we are going to start to see quite a bit of oppression,
the kind that the West used to speak about in the past, going, look at how China treats its citizens. I think the West is going to be implementing those same measures very, very soon, right? All of the surveillance. And of course, if there is loss of jobs, you're going to start to see UBI become a controlling factor. You're going to see, for someone like me, for example, if I say something that upsets someone, my bank account can be blocked tomorrow with ease, right?
And I think that kind of oppression, if you want, is going to lead to resistance, which will lead to more oppression. And I actually don't see how, in the short term, we can escape this new cycle: a divergence of power between high concentration and high democratization, and
that leads to a maximum amount of surveillance and oppression. Salim, what are your concerns about the near-term stressors and downsides? I think those are absolutely the near-term stressors. The good news is the democratization is happening so fast.
that it allows us to defend against those things. You know, there are already companies that can defend a sports stadium against a drone attack, etc., etc. I note that the Ukraine-Russia war is really being prosecuted by half a million drones, not really people.
And so we've already automated warfare to that level. The good news is mostly drones are fighting drones rather than people fighting people. Bad news is there's still a war and there's a lot of horrible suffering that's unnecessary to Mo's earlier point. I think it's exactly right, the near-term...
Just take the kidnappings that may come up or the extortion that may come up when somebody says, here's a voice of your daughter that's been kidnapped. Send us a Bitcoin. Otherwise, you don't get her back. And you don't know if it's real or not. And there's that kind of
short-term, because that arms race, the gap between the... There are always those incredibly creative elements. Marc Goodman writes about this in Future Crimes: the bad guys don't suffer from ethical or regulatory or moral constraints, so they're much more creative, right?
I'll tell the quick story here. He tells of a bank robbery in, like, Omaha, Nebraska, or someplace, where the gang swarmed the bank and they were all dressed in construction outfits. They robbed the bank. The bank manager calls the police and says, hey, they were all dressed in construction outfits.
The police go, well, that should be pretty easy to spot. Except what they'd done is put an ad on Craigslist saying, if you want really good-paying construction work, show up at this address at 9 a.m. dressed as a construction worker. And there was a crowd of 800 construction workers outside, and they melted into the crowd. So essentially the innovation and ingenuity leveraging new technology for bad purposes is near infinite, right? And we have to combat that as we can. The good news is, in today's world, that
negativity is easy to spot.
And it's easier and easier to spot. But I totally agree with that near-term dystopian issue. There's no easy way of getting around it other than gritting our teeth and moving as fast as we can to create the benevolent use cases. So let's talk about jobs for one second, because it's one of the stressors that's going to hit. We're beginning to see this in different areas. So I had Marc Benioff on this Moonshots podcast. We were talking about AI,
Agentforce 2, and his conversation with his head of engineering saying, you know, we've increased productivity 30 percent; we don't need to hire any more engineers. The flip side, of course, is a whole swath of HR individuals, customer service individuals, sales individuals, software programmers. Right. We just saw Sam Altman saying he expects that
AI will be the number one programmer, period, by the end of this year, and therefore programming effectively goes away as a career, or at least as one of the highest-paying jobs. So we're going to start to see jobs beginning to erode. Timeline for that?
What do you think? And how do people deal with that? I want to start to give people the tools for dealing with the stressors that are coming. No. Why did you ask me first? I was hoping you'd ask Salim first. I can go first if you want. Go first. I'll go first. This is a very black T-shirt mindset on this one. So let's start with the white.
So throughout human history, every time we've added a technological injection, we've seen employment increase, right? We point out often, Peter, that the countries with the highest robotics penetration are Sweden, South Korea, and Germany, and the countries with the lowest unemployment are Sweden, South Korea, and Germany. There's just so much more work to be done. I tried to get a little application built.
And I told my software guys, build this application; you should be able to do it in half a day with all the tools. And they're like, no, the integration of all the different systems, et cetera, still requires quite a lot of human interaction, to the extent that the gain is incremental, but not massive. And what will happen is we'll just uplift everybody with these AI tools, and we'll just turn out more code, because we need a hundred times more code written.
If you talk to any trucking company and ask, what happens when you automate all the truck drivers? They'll go, I'd hire a hundred more if I could today. I just can't find them. We don't have qualified truck drivers; nobody wants to do that job anymore. So throughout history, we have uplifted people and moved them up the potential ladder.
And I don't see that slowing down here, except there will be a short-term blip while we figure out what to do. That may be solved by UBI, but we don't know how we'll get there. You know, the problem with concepts like UBI is that it's such a big shift from a union, labor, taxation, job, employment construct to that. We have no confidence in the public sector getting there. Right?
And so that's the challenge: how do we navigate our institutions and the public sector? For me, the biggest problem in humanity is what E.O. Wilson said: the problem with us is that our emotions are Paleolithic, our institutions are medieval, and our technology is godlike.
Right? And also Douglas Adams, of Hitchhiker's Guide to the Galaxy, who said it in a funny way: anything in the world when you're born, we call normal. Anything that's invented when you're young is called a career. And anything invented after you're 35 years old is just bad for the world. Right?
Right? It's just bad. Like any banker talking about Bitcoin: you'll see them get hives, et cetera. I think we just have to overcome that hurdle and figure this out. For me, the biggest dark spot is that none of our institutions and mechanisms by which we govern ourselves can manage the transition through what we're about to see. Okay, Mo. So you heard the positive side of jobs. We're always going to be creating more jobs. Literally, we're dividing by zero; productivity goes through the roof.
People are able to be more creative, and we're creating things and doing things that we never imagined possible or ever expected to need. How do you think about jobs? Can I just leave it at Salim's point? I'm in a very dark place on the topic. No, we want to hear it, because you have insights and wisdom that even AI doesn't have yet. That's a joke. I disagree. I think what is happening is,
so, first of all, the parts I agree with: it's not perfect yet. You can't really develop a sophisticated full app from A to Z using AI. Yes, I agree with that. But most of the bits of code are already being written this way; I think there was a report saying around 80% of the code written last year, or something like that, was by a machine.
The thing is, so yes, I agree, it might take time until it's fully handed over. But I also agree with your last comment, which is we're nowhere near ready for this. Okay. And in reality, we are also not just dealing with...
you know, numbers on spreadsheets here, we're dealing with humans that are sometimes not easy to reskill, that are sometimes very emotional about losing their jobs, that are sometimes not ready. You know, I mean, think of how many people in the US today work two or three jobs just to make ends meet.
Now, take those jobs away, okay, and think about how those families will suffer. And I say, I think the topic we need to discuss deeply is the amount of suffering that will be in the transition, even if we end up in a good place. Now, my interesting challenge is I don't think we will end up in a good place, right?
And I really don't think we should even try to end up in a good place. Why? Because remember, that whole jobs thing is an invention of the capitalist industrial revolution. Right. And maybe finally we should accept that we're not made to work.
Okay. And accordingly, if we accept this, the solution would reside far, far away from where jobs are. It would reside in the social systems that enable us to live fully without having to work 60 hours
a week or 80 hours a week like, you know, most of us did in California. Now, the trick is there are systems around the world that allow that. You know, the French work, I don't know, probably 30 hours a week or 20 hours a week, of which around 28 they spend complaining, right? And the French economy is still running, right? Somehow it is. And I think there is something to be learned from that aversion to work,
which you and I and everyone who's worked in California seem to think is an alien thought. But there are so many societies around the world where we work because we have to, okay? Not because we love to. I agree. It's not their dream. This is "we work to live" rather than "we live to work." Correct.
You know, I recorded a podcast with Ray Dalio, and we were talking about the mission of the central bank.
The central bank today, you know, lowers interest rates so that you can spark employment and create this balance. But we're living into a future where, when you have access to cheap capital, instead of hiring people, you're hiring agents and humanoid robots. And it spirals to a point where you have social unrest. You have the haves and have-nots. Correct.
And there are multiple examples around the world where there are strategies to deal with that. I want to turn to this: you just finished writing a book called Unstressable.
We'll pick up the rest of this on the following podcast, because this is not the first or last conversation here. What's your advice to individuals who are going to be feeling the stress: the stress of government policies changing, their jobs being challenged,
concerns over the U.S., China, all of this? How do people deal with stress in a positive fashion? So, you know, my happiness and well-being work is weird in that I use a lot of algorithms and a lot of engineering methods and processes to help explain those soft
topics. And when I attempted to work on stress, the first thing I attempted to explain is what stress is, right? And if you look at simple physics, not to complicate this for anyone, stress is not just a function of the force applied to you or to an object, right? It's the force divided by the cross-sectional area of the object, right? Which basically means that it's not just what you're subjected to; it's the resources that you have to deal with it.
Okay. And in humans, it's exactly the same. It's the sum of all of the challenges that you're facing divided by the cross section of your skills, your abilities, your contacts, and so on. And for the older generation, you don't need an equation to understand that things you struggled with in your 20s, you solved in your 30s, dealt with easily in your 40s, and in your 50s you laugh about them, right? Not because they became easier, but because you increased your cross section, if you want.
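Written out, the physics Mo is borrowing is the textbook definition of mechanical stress; the second line below is only a sketch of his human analogy as described in the conversation, not a formal equation:

```latex
% Mechanical stress: applied force divided by cross-sectional area
\sigma = \frac{F}{A}

% Mo's analogy (a sketch, not a measured quantity): felt stress is the
% sum of challenges over the "cross section" of your resources
S_{\text{felt}} = \frac{\sum_i \text{challenge}_i}{\text{skills} + \text{abilities} + \text{contacts}}
```

The point of the analogy is that you can lower felt stress two ways: reduce the load in the numerator, or grow the denominator, which is exactly the your-20s-versus-your-50s observation above.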
And my ask of people, and this is not a philosophical conversation here, this is really a plea, if you ask me: we are upon the perfect storm of the most challenging time humanity has faced in my lifetime. Okay? Whether that's geopolitics, economics, artificial intelligence, technology advancement, jobs, et cetera.
You name it, really. Okay. I'm not going to take away from that: it is going to be interestingly challenging, but a bit like a video game on legendary level. Okay. What I ask people to do is to actually look deeply at: what can I do?
Right? What can I do in a world where things are moving so fast? For example, I'd say try to move faster, right? What can I do in a world where a lot of intelligence is handed over to the machines? I say learn how the machines work, and go and use AI today to catch up and keep up with what's happening. Can we double down on our human skills? Because those are going to be needed and useful
for a very long time. Can we take Salim's point of view and say we need to be re-skilled? So if you're a developer today, don't wait three years until you're out of a job. Think about what else you're going to do and start to re-skill yourself. I could give you multiple examples, but I feel we're running out of time. I know that we all want to sit back and complain and say, but I didn't elect Sam Altman, why is he doing this to my life? Right? And I want to do that all the time too, but that's not going to help. Right?
Right. I think we should tell everyone, by the way, that people who create things of this magnitude should be accountable. But at the end of the day,
I need to focus on what's happening today. I mean, I'll give you a very good example. As an author, as a thinker, okay, the job of an author is to adopt a certain concept, think about it deeply, and write about it. That's gone. I'm no longer the most intelligent being on the planet, able to adopt a topic and write about it better than an AI. Okay.
So what that means is that I have completely changed. I will not publish my books on paper, maybe only at the very end; I'm not publishing that way anymore. I'm doubling down on my human connection. So, you know, Alive is going to be published on Substack first, with the opportunity for everyone to engage with me and discuss it with me, give me comments, and call me an idiot. And we improve it together.
Right. I'm writing the book with an AI, not asking the AI something and then presenting it as if I'm saying it. I'm literally chatting and debating with the AI in some of the books, sometimes proving her wrong, and sometimes she's proving me wrong.
This is to align with the new world. The world is changing and the career of an author is now being redefined. So I am being redefined with it. And I ask everyone to look at their life today and say, I'm going to redefine myself. I'm going to be ahead of that wave. And by the way, in the process, I'm going to act ethically so that this wave becomes a utopia, not a dystopia. I love that. And that's a beautiful place to close us out. There's so much more
to go into and I look forward to the next conversation. Mo, excited to see you on stage in just a few weeks. Salim, the same for you, brother. And
And thank you. And for everybody listening: we're entering uncharted territory, a territory where what we say, how we interact with each other, and how we interact with the machines that are coming is extraordinarily important. And I hope that this conversation has given you a little bit of insight,
a little bit of context to prepare you, but in particular to give you agency to help steer where this future is going. There's no on-off switch. There's no velocity knob. The best we can do is steer the future that we want. Love you guys. A pleasure as always. Thank you very much. Thank you. It's been a joy. Great conversation.