
What AI Can’t Replace - and Why That Matters with WSJ Bestselling Author, Faisal Hoque

2025/4/9

Smart People Podcast

People
Chris Stemp
Faisal Hoque
Topics
Faisal Hoque: The pace of AI's development is staggering. It consolidates thousands of years of knowledge from millions of people, and its intelligence already far exceeds that of any individual. While AI brings many opportunities, such as greater productivity and better healthcare, it also carries enormous risks. Over-reliance on AI erodes our capacity for critical thinking, reduces us to slaves of the machine, and could ultimately lead to humanity's demise. We need to use AI while preserving our independence and our critical thinking, and avoid being controlled by it. The values AI is oriented toward also deserve attention: if its training data is biased, it may make unethical decisions. Throughout AI's development, therefore, we must hold to human-centered principles and put guardrails in place to prevent abuse. I have personally experienced both success and failure as an entrepreneur, and the pain of a loved one's illness. Those experiences taught me that the meaning of life lies not only in pursuing efficiency and success, but in tasting life's highs and lows and growing through struggle and hardship. So I am not willing to sacrifice human independence and critical thinking for the sake of convenience. Chris Stemp: I worry that overusing AI will damage the younger generation's ability to think critically, because they are used to getting answers directly rather than thinking independently. And if AI outperforms humans in every respect, does human critical thinking even need to exist? That makes me anxious about humanity's future. I have felt the same tension myself: on one hand I enjoy the convenience AI brings; on the other, I worry that over-reliance will cost us the ability to think for ourselves. We need to find a balance that lets us use AI to be more efficient while retaining our critical thinking and avoiding its control.


This is Smart People Podcast. A podcast for smart people, where we talk to smart people, but not necessarily done by smart people.

Hello and welcome to Smart People Podcast, conversations that satisfy your curious mind. Chris Stemp here. Thanks for tuning in. The question of today is, is AI going to save humanity or replace it? I know that's a question that's been on my mind. If you use the pro version of ChatGPT for any period of time, it is shocking and scary actually.

So that's the question at the center of today's conversation with technologist, entrepreneur, and author Faisal Hoque. Faisal's newest book, Transcend: Unlocking Humanity in the Age of AI, doesn't just explore the mechanics of artificial intelligence. It takes on something deeper: what does technology mean for the future of our minds, our values, our society, and even our relationships?

In this episode, I'm not trying to hype AI or fear monger or clickbait. I really want to cut through some of the confusion and just ask, what kinds of humans do we want to be in a world where machines most likely can now outthink us? You'll hear us wrestle with some hard but necessary questions. Are we outsourcing our freedom in the name of convenience? Can we coexist with AI without losing what makes us human?

How do we stay grounded, ethical, and perhaps most importantly, compassionate in a world where technology never sleeps? Faisal blends his experience as a successful tech entrepreneur with the wisdom of Eastern philosophy and a clear-eyed view of what's coming next. If you have felt both awe and anxiety about AI or just want a thoughtful, human-centered take on it all, this is your episode.

And a bonus: all proceeds from Faisal's book go toward cancer research, making this conversation about more than just AI and machines. It's about meaning, impact, and legacy. Let's get into it. Our conversation with Faisal Hoque about his brand new book, Transcend: Unlocking Humanity in the Age of AI. Enjoy.

So Faisal, I'm coming right out of the gate: should I be terrified of this stuff? Yes, you should be. I think collectively we should be terrified. Here's a simple analogy, right? You and I are individuals, okay? Now imagine not just thousands of us but millions of us, and thousands, even hundreds of thousands, of years of knowledge, right,

enabling this engine in an interconnected and networked way. So even though it doesn't have that level of cognition yet, which is about to come, it's already a lot smarter than you and I are individually.

So the quality of stuff that it's beginning to produce on the creative side, whether that's writing, making a movie, whatever. You mentioned coming up with a new recipe, or looking at a picture. You know, I love to cook, so you were talking about the chef. The other day, I went not to ChatGPT but to Claude. I loaded up a picture of my dish, and I said, describe this.

And I was blown away by the level of accuracy. It says, looks like you used lemongrass. You wouldn't know that just looking at the picture. So, yeah, I mean, I think that's... But there are also opportunities. I don't want to... As you saw in the book, I'm not completely pessimistic or completely optimistic. I kind of take this middle ground.

And what will happen depends on what we do in the next three-, five-, ten-year timeframe. There you go. Do you have a preferred favorite at the moment? And what does the technology world say is the best, if there is one? If you're talking about Claude and ChatGPT, we're primarily talking about generative applications

on the AI side, in the sense of a next-generation search and construction engine: it constructs whatever you're asking it to do.

That's primarily from a consumer point of view. If you look at it from an enterprise point of view, there's a whole slew of technology that has been around forever with AI, like predictive modeling and workflow management.

And we're now getting into agentic AI, which is, let's say you want to make a reservation, it will find you the right thing, or you want to track something. So I'm sure you saw those ads. I just saw one, actually. We can talk about that. Yeah. So I don't have a favorite per se, but there's a hyper-focus on all this innovation. So all vendors are kind of pushing

the limits of where it can go. And a lot of this depends on processing power, because there's a lot of churning. So now we have this processing power from companies like NVIDIA, and there's a huge amount of data centers being built where you can crunch this information, and the data is getting better and better. You know, it's funny, you mentioned the data centers. I'm not sure where you are in the world, but I live...

in Loudoun County, Virginia, kind of near this place called Ashburn. And guess where all the data centers are? I know it very well because, you know, I do a lot of government work. I live in Connecticut, but I go to Virginia and D.C., and I know that area very well. They started popping up when people started talking about cloud computing. Yep, that's exactly it. And that's when it started building up.

I know exactly where you are. Yeah, I've been here almost my whole life. And actually, a good friend of mine worked for AWS and now works for Google, and he was the one, like, yep, pointing it out, because back then people didn't really know. And for the rest of the world, who probably doesn't see these things, there are just these massive concrete buildings with, like, nobody working inside for the most part, you know? So, you know, that's my point. As you saw

in Transcend, we have this kind of historical perspective on the evolution of AI. And the reality is that

consumers really got into it in the last couple of years, right? And it's getting hyper-focused. But behind the scenes, the infrastructure and the models and, you know, the theoretical computing and the applied computing have been going on for 50, 60 years, even longer than that. Because you can think about it: we talk about AI as more of this human

thrust to come up with a philosopher's stone that allows you to learn and transform anything and everything. I mean, you can go back as far as the 10th century, when people started talking about mechanical things that would assist human beings. They were not thinking maybe in the context of AI as we know it today, but this notion of

something else that is not natural, something synthetic or mechanical, that has been around for decades.

Thousands of years. And just recently, it fully struck me how much smarter this stuff is than I am. And look, that sounds easy and obvious. But when you really think about this being more intelligent, in many, many ways, than I am as a person, it starts to change your perspective of who you are on this planet.

Absolutely. Absolutely. And you will never be smarter than this stuff, because, you know, the simple fact is it's not one-on-one. Like when you and I are talking, it's just one person's knowledge against another's. So you may be smarter than me because maybe you've read a lot of stuff, but in terms of cognition level and computing level, most people are

pretty much at the same level. Obviously, there are some people who are a lot smarter than other people, and they're well-read and educated. But when you put together thousands of these brains, thousands of pieces of this world's knowledge, that AI can tap into,

the fact is, the reason it's terrifying is because it's democratized and it's super powerful. So the collective impact of its usage has a significance for human society. You know, it's not like

pharmaceutical technology or nuclear technology, where access is very tightly controlled. I mean, you can't just go and access nuclear energy or nuclear technology just because you felt like it.

AI is a totally different story. So there's a good side to it being democratized: you can use it, you can do a lot of things, you can be more productive, companies can do more. But there's also a whole host of things, anything from terror to lots of unemployed people to isolation. There's a whole slew of things that's reshaping what humanity is.

I have this working theory: if you took whatever we deem technology, looked at every innovation, and asked, what was its impact on humanity? I have this general theory that, like, 5% is beneficial and 95% isn't. And what I mean by that is not, does it drive us forward. I mean, is that drive, and is that forward, better for us as the animals that we are?

When you look at things like medicine, yeah, no doubt, right? When you look at agriculture, okay, yes. When you look at electricity, yes. But I think a lot of that is this kind of health and living.

Everything beyond that, genuinely, I feel like the smartphone or whatever doesn't make our lives better. Maybe more efficient, but not better. Given your interaction with technology over your entire career, how do you feel about that? And then two, should we leverage that knowledge going forward with all technology? People often find my comments

kind of counterproductive or paradoxical, because even though I'm a technologist, I build technology products, I use technology products, I've also talked about how there's a point of diminishing returns, right? So the big tech companies keep pushing these functionalities.

And how much of that is really improving things, or is it really reverse improvement? Bill Maher has this recurring skit where he points out this thing he coined called reverse improvement. Because, you know, it's like, do you really need to

use all that functionality they just keep pushing onto your smartphone? Do you really need to get engaged in every single conversation that's happening on social media? Do I really need to know that the next show I should be seeing is this, because I saw something else the night before? No, you don't. I mean, it kind of takes away the mystery of living. And, you know...

One of the things, as I've grown older, and this started maybe 10, 15 years ago: I grew up in Bangladesh, and I was kind of a literature and philosophy buff, an Eastern philosophy buff. And I read a lot of Sanskrit philosophical books and whatnot. And, you know, there's a middle way. I mean, you don't have to go from one extreme to another.

Just because you can doesn't mean you have to, right? And not everything you do makes your life easier; it can actually make life complicated. And we've kind of seen this decay, if I can use that term, right? I mean, look at all the division we have in our country and across the planet. There was always division, but it was not as

up-front and exponential. It's up-front and exponential now because we've gotten very good at using algorithmic manipulation to shape people's minds, how you think and what you do. And we're losing the critical thinking process, right? That's my biggest worry. And it's very easy, like you've talked about: you ask, okay, what should I cook today? And, you know, there is the answer.

Right. You're actually, like, outsourcing your freedom, right? So when you do that, does that make humanity better or worse? Because what is humanity? It's our freedom. So if you outsource your freedom, you're kind of losing what humanity is, right? And you're going to become a slave of the machine. That's all you're getting.


You know what? You just made me realize, because I have done that recently with cooking and my wife takes the stance you had, which is like, come on, just think for yourself.

But what I've convinced myself is, again, I am preserving energy expenditure so that it can be used at a later moment on something more important, let's say. But if you carry that logic forward, it's just a constant wheel of doing that. And we've all seen that. But this makes it even more... Yes, even worse, even more prevalent. Yeah. You know, so the question is,

is faster always better, right? And, you know, when you talk about exploration and imagination, there's something to be said for imagining and exploring by stumbling through things. You don't know it yet. So, you know, you used to go to a library; like, I have a library right behind me.

I would go and say, okay, let me pull this Stoic philosophy thing, and let me pull this Eastern philosophy thing, and let me look at what somebody said, flipping through the pages. I don't need to... You can just go on and give me the answer. But see, that's not... The retention

from that instant process versus the retention from exploration, which forces you to be a critical thinker, are completely different things. And this is my biggest fear, that we're going to lose it, especially the next generation, because of

what's happening. I mean, I had a conversation the other day. Somebody said, look, Gen Z is using this stuff a lot more extensively than the older generation, because technology is like their extension, right? They'd rather text than call; it's that kind of mentality. So they're using this generation of AI, and they find it very productive. But then my question is,

if I sat down with you and asked, okay, how did you come up with that article, or how did you come up with that thesis? I don't know whether they would be able to answer. Right. And there's a problem with that, right? Because you're not thinking for yourself. Well, and this is a perfect transition. I want to talk about the future of it and what you talk about in your book, because,

man, if I just follow this logic forward, I'd say, but does any of that matter? Because I have a nine-year-old right now who's using an app that is heavily driven by AI. Just for those listening, it's fascinating, right? You lay all your Legos on the floor, and then you scan it with AI, and it tells you what to build. And, like,

I don't like that concept, but he does. So that's not the point, right? But still, you get what I'm saying. The challenge is, I could say, but you need that critical thinking. Why? If AI is just going to be better at all of this stuff, which it will be, why do we need it? And then it

becomes the ultimate endpoint, which is, when do we go extinct? I mean, genuinely, I'm asking you to tell me that there's a better outcome than this. That's what I'm doing. All you can do is decide how you make these choices. And, you know, scanning a bunch of Legos and building something, that's a harmless task, right?

Let me give you a much more dramatic example. You're saying, okay, in this population, based on behavioral patterns, tell me which people are going to be criminals, and let's lock them up.

AKA that movie, Minority Report, right? So is that ethical? I mean, just because certain people have certain tendencies based on some predictive model, you're saying they're all going to be terrorists or mass murderers or whatever, right? I mean, that's not it. That can't be ethical, right? So

If you're outsourcing this ability to think, you're also going to give away your ability to ethically, logically argue with yourself what's right, what's wrong, what's real, what's not real.

Should we do it? Should we not do it? I mean, what is humanity at the end of the day? It's the ability to make decisions for yourself, right? That's one. And value judgment is based on your perception of value.

So why that matters is, if you have an engine that's telling you, this is value. Okay, so what if I train that large language model on, I don't know, some radical concept of value? Take any radical idea, whether that's

radical Islamism or the radical right or whatever the case may be. And that becomes what value means.

Well, I mean, and then you see this. I'm sure you saw that somebody was doing a demonstration, I think on a CNBC show or something, asking DeepSeek: so what do you think about the Chinese government? No, I don't know that. Tell us about what happened, you know,

during the Chinese Revolution and Tiananmen Square and all that. And it just didn't give any answer. So it's deciding

what it should tell you and what it shouldn't tell you. So you don't really know what's real and what's not real. Take that one step further. There's already a lot of deepfake stuff cropping up, right? It's getting harder and harder to decide what's real and what's not real. So if you just take it at face value, this is all real.

I mean, we're going to get into a matrix. It's going to be like the movie The Matrix. And we're not going to have any ability to think. Yeah. Right. But I do. So, you know, my son is 22, and he's a history buff, and he's taking a class on ethics.

And so his first class was actually... One of the examples we use in the book is from the Bhagavad Gita, the 2,000-year-old Hindu religious text, which talks about ethics, and a story about how you make decisions. And the lesson is that you make decisions by detaching yourself

from emotion, and you make decisions from a middle way, where you try to devote yourself to what's right and what's wrong and detach yourself from emotion. And that's the way you make decisions. So if you're being manipulated by algorithms,

and you don't have that critical thinking capability anymore, then it's very hard to do that, right? So this is where it becomes highly philosophical. And so when you say, well, do I really want to use my brain cycles on what do I want to play, what do I want to see,

what do I want to cook? Well, that's daily mundane practice. You've seen the book; we talk a lot about this Eastern philosophical thing, the Buddhist philosophy of being very mindful, and the fact that not doing everything allows you to get better at a thing, being focused on one thing. But if you're being

told what to do, how to do it, where to look, then you are losing your humanity. Then you're a drone. The drone becomes humanity, or a synthetic being, and you become the drone. This is actually very poignant. And so I'd like to try to repeat it the way I'm understanding it and see how accurate it is.

It's almost like that slippery slope, where what we're saying is: right now, for the average person, we can view

outsourcing some things to AI as helpful, as productive, as efficient. However, each time we do that, we're making a decision to turn over the primary thing that is human, which is to think critically, to make decisions based on our own understanding of the world, right? That's what it is, to be able to think, to use that prefrontal cortex.

So every time we do that, we are both reducing our ability to think critically and analyze, while also increasing our habit of relying on this technology. And as we do that over time, we turn AI into the terrifying future that we are currently talking about and fearing. The flip side is that if we maintain

this humanity, what it is, the critical thinking, the analysis, then that allows us to potentially shape AI in the way that is helpful where it's helpful, but doesn't steal who we are. 100%. It's a collective behavior, right? So it's like, do you want to have a conversation with your wife about what you want to have for dinner? Or do you want to have a conversation with

some synthetic thing that's telling you, you ate chicken last night, you should have beef today because you need to maintain a healthy,

balanced diet and you need more iron, right? Well, pretty soon you're not going to have a conversation with your wife anymore. You're going to have a conversation with this thing, because it's easier. It doesn't talk back. It doesn't argue. It just tells you whatever you want to hear, right? So at some point, you know, when it develops more... I mean, now it's generative, right? So when it develops the next level of cognition, you can start arguing with it, right?

But most people are not confrontational. And if you're not confrontational, there's a comfort in talking to something that just agrees with you.

But if you agree with everything and everything agrees with you, then you're not really allowing yourself to flourish and live up to the potential you were given when you were born.

And if you start slowly surrendering... I mean, this is why we were already talking about social isolation in a highly connected world because of social media. And we started to create these personas on social media that are not real, because, you know, it's the best food, the best travel, the best clothes,

et cetera, et cetera, the smartest conversation. But that's not how anybody's life is. You get sick. You don't look good one day. You don't feel good one day. You have a disaster in your career. So we've already seen that element of it.

And it's just going to get hyper-exponential, because now you don't even have to have your own thing. You could have a persona that mimics you, right? So what happens? It's not real, right? So you're just living in an alternative, augmented reality, and that cannot be good for humanity, no matter how you look at it.

I just had this meta moment here, where I realized, okay, the equivalent is: why do I want to have conversations anymore? Why do I thoroughly enjoy these types of conversations when really I could probably just plug your book and everything you've ever done into the AI? But what it doesn't have:

it only knows what we have put into it. It doesn't know the inner workings. So when I have an opportunity to ask a question and maybe meld two concepts that you haven't written about per se, I get a unique answer from your perspective, right? So that's one element. But I want to focus on this, which is

this idea of what is humanity. You know, I've often thought that one of the things humans are incredible at, and that can never change, is our drive to accomplish, to shape the world, to shape everything about the way we live. I think that drive is why we're potentially at the top of the food chain. Okay. But what this conversation is making me realize is that

maybe it's a fundamental shift in what we as humans should deem the most important aspect of ourselves. And I want to ask you, because of your Eastern philosophy: it sounds like what we might be saying is, look, efficiency is gone, or productivity is gone. We no longer own that. Not that we have for a long time, but it's becoming very obvious, right?

Maybe instead of thinking about how we get more done, we have to think really deeply about a new driver that we should focus on.

Sure. How does that align with both your Eastern and Western philosophies? Take a very well-known Western thought process, Maslow's hierarchy, right? So you have this drive. The reason I bring that up is the drive. It's the drive to survive.

And then once you've survived, it becomes the drive to thrive, in the sense that you want conveniences, or you want to fulfill creativity and other aspirations that are intellectually driven, because you're not worried about where the food is going to come from, that sort of thing. So there's that tenet. But, you know...

What happens for some people is that when your basic needs are fulfilled, you get very creative, and your drive comes from the creative element of your personality, of your life. So that's kind of the Western philosophy, in many ways, at a very fundamental level: know thyself.

And if you know thyself and you fulfill your basic needs, then you can move up to your next level of potential. Eastern philosophy, I mean, especially if you look at Hindu philosophy or Buddhist philosophy, which I kind of...

Because, you know, human life is full of suffering. And there's a beauty and joy in suffering. And it makes you a better person. So if you don't have any more suffering,

and everything is taken care of, then what happens? I mean, really, look: why do the best songs, the best literature, the best stories come from journeys of struggle and suffering?

Because it's beautiful, right? That's where it comes from. We tend to hail heroes who come from nothing and make something of themselves. We tend to hail a cancer survivor because they have struggled. We tend to hail an athlete who had years and years and years of disastrous situations and finally makes it, maybe to a

Super Bowl or whatever, because we love that story of suffering, right? And there's a beauty and joy element to it. And I think that gives people a drive and inspiration to take themselves to the next level.

If that's all gone, right, if you're 100% enabled by conveniences and you have no suffering or whatever, that cannot be a good thing. And maybe we'll get into this: I had huge highs and huge lows throughout my life, just like anybody else, right?

And I wouldn't trade that for anything, because it made me who I am. Why would I want to trade that? I learned as much from my failures and my struggles as I have from my success. And I couldn't write the way I write or think the way I think if I hadn't gone through that unique path.

So do I want to outsource that path for the sake of convenience? Absolutely not. I do not want to lose my humanity to it.

And to make the tie to AI, I mean, to your point:

the world, let's call it 10 years from now, where you outsource that decision-making and it's doing it off predictive models of success, which is not our brains and is not our life. All of a sudden you minimize those negative outcomes, and therefore truly minimize those opportunities for change and growth. I mean, we do not develop in the best of times. We enjoy them, but we do not develop. Absolutely. Yeah. Wow. That's a...

That's such a great concept. And that's why I said, the reason we call the book Transcend is that we're not talking about transcending the suffering and transcending the journey. We're talking about:

can you take advantage of the good part of this technology to live up to your full potential? Which is different from saying humanity as it stands needs to be wholesale outsourced and transcended by technology.

It's not that AI transcends humanity as a whole; it's that humanity transcends itself with AI. It's not the other way around, right? So that's the fork in the road, right?

And it's not one person, by the way. It's what we do individually; it's how legislation gets established to put guardrails in place around what, let's say, the platform vendors and the technology vendors and businesses collectively do or don't do; it's what government does and doesn't do. All of that is going to

play a role in where it goes. There have already been a lot of warning signs from people who have worked on this stuff for years and years. And if we're not careful, we're going to become vegetables. I don't know how else to put it. No, The Matrix is terrifyingly prescient, I will say.

This is where a lot of my knowledge or even opinion on AI ends, which is kind of where we've just gotten to, which is we are at a turning point. We get to decide. Yes. Here's where then I kind of go next. Do we actually get to decide? Because there's two things. One, as a global population, I feel like we are pretty freaking terrible at making decisions that are best for the entirety. That's number one. And then number two is,

Once the genie's out of the bottle, you can't put it back in. So I'd like to tackle those. How do you think about the idea that this will require global agreements, which, take climate change, take war, are the kinds of things we aren't good at? I don't know where we'll end up. Nobody knows where we'll end up. But all I know is that, you know, you're right that collectively we're bad

at making decisions. Human history is full of calamities, very dark history. We've done terrible things to each other and continue to do terrible things to each other. So that's humanity. But in the midst of all that, we've managed to do course correction

many times over. You can look at First World War, Second World War. You can even go back all the way back to like, you know, the Ottoman Empire and Roman Empire. I mean, if you look at the whole human history, we've done a lot of course correction along the way.

Right. And just when we thought that we had arrived at some sort of normalcy, if I can use that word, it all of a sudden gets flipped on its head and the unthinkable happens. Right. So that's human history. But what is different about this is that it's not just about what humans do,

what humans do to humans, because we are still, or maybe it's debatable, at the top of the food chain. Now, if something else takes over the top of the food chain by our own doing, then when it gets really crazy, all I can say is that you've got to do your best not to give up your individual humanity.

And if we all did that, then maybe humanity will survive. Or if most of us do that, then maybe we'll become the dominant voice.

But if we don't, then it's just like anything else. People are getting tortured, you don't speak up, right? So what happens? Pretty soon it becomes a normal thing, right? You can look at any trend, right? It becomes normal because part of the population decides that's what normal is. So if the

bulk of the population decides that, hey, we don't need our brains, we can have the machine decide everything for us, then that's what it's going to be, right? And the people who don't go along, they will become a minority, right? And- It's almost like saying-

we get to choose, are we going to use the thing that makes us human to determine if we remain human? Absolutely. You know what I mean? That's a pretty crazy test that is being bestowed upon us. 100%. And, you know, it is unlike anything else because, as we said where we started our conversation, it's a thousand times faster,

it's going to be a thousand times smarter. So now it's going to be top of the food chain. You are no longer top of the food chain. Once that happens, well, that has never happened in human history. It maybe happened when we were cavemen, because there were these bigger animals and they would eat us up, but it hasn't happened in

thousands of years, right? So we're about to change that. So once that changes, what happens? Right, what happens? And that's what you're writing about. I have to ask you, especially from your perspective, it's really interesting, right? Very successful entrepreneur. I would imagine, without going into detail, you have all of your,

let's call it, basic needs met financially, you know, many times over. And that has been a function of the way a capitalist society works. However, I think a lot of this AI concern is actually going to be driven by

those same principles of capitalism, which is we have to do it faster, first, et cetera. So as somebody who has benefited from the system, how do you think it is going to inform the future of AI? And do you think that's a good thing? It's a complicated question. I mean, you know, different entrepreneurs have different mindsets and different motivations, right? My entrepreneurship, yeah.

You know, initially, when I started my entrepreneurial journey, I grew up in the era of the browser and the Internet. And I was in my 20s. I was very much like any of those Silicon Valley type entrepreneurs:

raise a lot of money, try to build a big company, blah, blah, blah, all that stuff. But as time went by, I'm just talking purely from a personal point of view, my interest changed. It wasn't the chase of, can I build a bigger company? Can I create more profit? It moved more and more towards impact.

Whatever I do, does it have any consequences? You know, most of us are completely insignificant in the context of where the world is, where the universe is. But my philosophy has always been that whatever I do, can I make a positive impact, right? So, you know, you talked about Loudoun County and all those things.

You know, the data centers. Part of the reason I got involved with our government some seven, eight years ago is that I had learned a lot from the private sector and I wanted to see whether it could help the government. It had nothing to do with whether it was a good business model or not, because by that time I had the luxury of doing that. It wasn't a profit-motivated thing.

So it depends on the individual entrepreneur's motivation, what they want to do with the technology and why they're building it. And it's not just with technology, any kind of business. You can look at a grocery store like Whole Foods. The whole motivation was, can we make better food

quality and make it sustainable, and that was kind of the motivation of the Whole Foods founder, right? So there are a lot of different motivations. Or you can look at the Patagonia founder, you know, who made the whole business as a kind of nonprofit, for-profit mix. And you don't have to go to Patagonia. I mean, look at Hershey's, you know. This is a

very old story, where the Hershey's chocolate company is completely for nonprofit, right? The whole Hershey's town and everything is for nonprofit. It's not a profit-driven company. So it all depends, what do you want to do? Capitalism is the best system

for innovation, for driving impact, all that stuff. But there is a different definition of capitalism. What do you want to do with the capital that you're gaining? I mean, you know, since we're talking about books,

I've been writing for, I don't know, many years now. I mean, Transcend would be my 10th book. And, you know, right around the pandemic, maybe four years ago, my son, my only son, got diagnosed with a rare blood cancer called multiple myeloma.

And I didn't write for a while. My previous book was Everything Connects. That was very deep, philosophical, and I got involved with the government, so I kind of stopped writing. But I looked at that incident, and I said, okay, what can I do aside from just taking care of my son? So I've been using the proceeds of the book for cancer research. I donated them.

So it's a motivation, because I said, okay, it's like Maslow's hierarchy. I am fortunate that I can put food on the table and I'm not going to starve if I don't get my next paycheck. So what do you do with yourself? Do you say, okay, you need more money? Or do you say, okay, well, I've got something, so can I share?

And the sharing takes many forms. You can share your knowledge. You can share your finances. So there are many different kinds of entrepreneurship. Now, the big tech companies, the so-called Big Tech, if they say, okay, we're going to keep pushing it and pushing it and pushing it, and we have no moral obligation, then that's how

the future will be driven. And you said, do we really? You asked a very important question. Do we really have the choice?

Well, to a certain extent, we have individual choice, but to a certain extent, we don't, because a lot of these big tech and big government players really manipulate or direct where collective society goes or doesn't. So maybe if we all survive, meaning that humanity survives, maybe 100 years from now, we're going to look back

And we're going to say, well, was that a good thing or was that a bad thing? How do you think we, if you were the ruler of earth, let's just say, right? How do you think we should be in relation to this technology? Like if you got to determine the future of it and the way we use it, interact with it and build it, how should we do it?

So, I mean, look, I'll draw upon a couple of philosophical tenets. It's not just one, it's not just Buddhism or Hinduism or Sufism; almost every one of these philosophical belief systems talks about the fact that you need to be devoted to what's good for humanity, which is driven by compassion, right? Compassion for each other.

You know, that's where the devotion comes from. And you need to be detached from what does harm to humanity. So if you look at it from that lens, it's a very simple lens, right? I want to be devoted to humanity, which comes from a place of compassion, compassionate to others, not just to me and myself, right? And I want to be detached from doing things that do any harm to

another person or another community or humanity as a whole, then that could become your moral guidelines, right? And, you know, just because you can doesn't mean you should.

Well, how do you decide you shouldn't? Well, you can use that philosophy of devotion and detachment that allows you to think through that process. But that's very individual. So if I'm, I don't know, if I'm running Microsoft or if I'm running Google, if I'm running Tesla, all of those leaders have this exact thing they have to think through. Just because you can, should you or should you not? Right?

You know, and we saw this with social media. Should we really be pushing a particular algorithm that manipulates a particular belief? You know, that's not being genuine, right? So that's very individual. But the reality is, I looked up your background. I mean, you've done some leadership and

coaching and stuff. I mean, it's a very fundamental leadership question, right? I mean, how does a leader look at their true north and get driven by a moral compass that

upholds the fundamentals of humanity with compassion, versus doing stuff for the sake of doing stuff because you have to generate more profit? There are all kinds of leaders, known and unknown, that do

either, you know. And I think the path is the middle way. It can't be just profit, profit, profit and efficiency, efficiency, efficiency. Then you're innovating without any kind of ethics or morality, right? Not all innovation is good. And we've seen this in various parts

of our evolution. You saw it with Oppenheimer, right? Or even look at Henderson, you know, the Nobel winner who now, quote unquote, they dub him as the godfather of AI, and he's trying to speak up just like Oppenheimer spoke up. Oh wait, I'm unaware of this. Can you tell us about him?

Yeah, I mean, he's, quote unquote, credited with coming up with a lot of the basic, fundamental theories behind AI; he's a physicist. And, you know, there's a whole 60 Minutes interview you can watch, and he talks about how, within the next 30 years, humanity will be extinct the way we are going.

What's his name? Jeffrey Henderson. But his point is that we are at a fork. And it's not just him. I mean, philosophers, physicists, computer scientists, all of them... And Transcend actually starts with one of his quotes; if you look at the first page,

you'll see that that's where I start. And it's very fundamental. It is very philosophical. I mean, that's why we take in this philosophy. So if you look at Transcend, we looked at it from four angles: philosophy, humanity, business, and technology. But ethics, and what we decide and what we don't decide,

what we do and what we don't, I mean, that is very philosophical. Eastern, Western, it doesn't matter. Take any moral ground and it'll be kind of like that. Well, and as you mentioned, you've got the four components in this book and you try to weave them all together. What is the primary thing you would like somebody to take away? So there's huge opportunity, just like you talked about, you know, medicine, agriculture, climate,

you know, better quality of life. For example, my mother recently passed, and she passed from suffering from dementia; she had a sort of Lewy body disease. At the tail end of her life, just last year, I was thinking, wouldn't it be nice if she had an

AI assistant that would translate what she was saying to her aides? Because even though she spoke English, she lived here and was very fluent in English, educated in English as much as in Bengali, which is her mother tongue, she wouldn't recognize the fact that her audience didn't understand Bengali, right? So you can imagine an AI assistant that just translates

simultaneously. So you can see all sorts of these opportunities, right? I'm very hopeful that one day AI will be able to assist with better drug discovery and better patient care for a cancer survivor like my son, right? So,

So those are all optimistic things. And even if you're a creative person, like, I use AI now to do a lot of research. I've loaded up all my manuscripts. I search my own books. It's very convenient. But if I didn't know anything about Eastern philosophy, if I didn't know anything about technology, if I just asked ChatGPT, write me an article about blah, blah, blah, it will create that for you, but you have no ownership of it,

because you cannot even relate to it, because you haven't really learned that topic. That's where, you know... So in the book, we said, okay, you can be

very much an optimist, but you also have to be very much a pessimist. And so we introduced these two frameworks, called OPEN and CARE, that basically guide you in how you take advantage of this technology optimistically as an individual, as an organization, and as a government agency, but also how you protect yourself and how you put guardrails in place. A lot of the things that we talked about are from an individual perspective, but there's a

greater responsibility on organizations and government because of the fact that

they have more power. I mean, they have a more collective role in terms of how they manipulate or add value to human society. Faisal, this is so interesting. I appreciate your perspective on it, trying to weave the global narrative as well and leverage your different experiences. Again, what is currently the most human thing to do is to try to pull in

the differences in our experiences and explain what we've learned from them. So the book is Transcend: Unlocking Humanity in the Age of AI. As you mentioned to me, all proceeds for this are donated. So tell us a little bit about that. What are they donated to? And then where else are you? Guide us. - So first, one correction: it's actually Jeffrey Hinton,

not Henderson. So Jeffrey Hinton is last year's Nobel winner for physics, quote-unquote dubbed the godfather of AI. So I encourage anybody to check out his conversation. So, you know, as I mentioned, my son is a cancer survivor, so I'm trying to use my platform as much as I can

to create awareness for cancer, but more particularly for blood cancer, because multiple myeloma is a blood cancer. And it's very unusual for somebody of his age to get that kind of thing. The rise in blood cancer is kind of frightening over the last 5, 10, 15 years.

And so I'm trying to use my platform to create awareness, but also, whatever the book generates goes to the Yale and the Boston laboratories,

at Harvard University's cancer research centers. So it's a small contribution in the realm of how much it takes to do cancer research, but that's my little contribution. So that's where it goes. Awesome. And do you write anywhere else we can find you? You mentioned you have a Fast Company article.

I mean, I post something every day on LinkedIn on all these kinds of mixed topics, you know, technology, AI, philosophy, and humanity. I write for Fast Company. I write for other outlets as well. And then, you know, every year I've been kind of cranking out a new book.

So that's that, and the best place to find me is either LinkedIn or my website, you know. Or we can just, you know, search for you on ChatGPT and really make this thing a whole roundabout. Well, Faisal, thank you so much. Thanks for having me. This was a great conversation. This week's guest was Faisal Hoque. The episode was hosted as always by Chris Stemp and produced by yours truly, John Rojas.

And now for the quick housekeeping items. If you'd ever like to reach out to the show, you can email us at smartpeoplepodcast at gmail.com or message us on Twitter at smartpeoplepod. And of course, if you want to stay up to date with all things Smart People Podcast, head over to the website smartpeoplepodcast.com and sign up for the newsletter. All right, that's it for us this week. Make sure you stay tuned because we've got a lot of great interviews coming up and we'll see you all next episode.