
#142. The Artificial Condition

2024/1/16

THD美籍华人英语访谈秀

People
Eric
Through four years of aggressive saving and investing, achieved early retirement at 50 and reached "fat FI" status.
Justin
No specific information available about Justin.
Topics
Eric is optimistic about the development of AI. He believes a world dominated by AI will bring positive change, such as solving various social problems and giving people more leisure time. He thinks people's fear of AI stems from fear of the unknown and a lack of understanding. He also argues that current applications of AI are too short-sighted and fail to make full use of its potential, and that people should actively explore its possibilities. Justin takes a more cautious view. He believes AI could lead to large-scale unemployment and a loss of social purpose, and that people will need to find new ways of living to adapt to the changes it brings. He also thinks AI could have serious negative effects, such as worsening social inequality and undermining environmental sustainability, and that people should pay attention to these downsides and take steps to mitigate them. Justin sees the fear of AI as an expression of the fear humans commonly feel toward anything new, and points out that the speed of AI development may outpace humanity's ability to adapt. He believes AI should be applied with human well-being as the starting point, calls for regulation of AI development to prevent misuse, and argues that news coverage of AI is biased and tends to exaggerate its negative effects, so people should stay rational and objective. Eric counters that although AI may have negative effects, people should focus on its positive applications and actively explore how AI can help solve major problems facing humanity, such as climate change, poverty, and inequality. He believes people should take a long-term view, watch AI's long-term development trends, and actively adapt to the changes it brings.


Chapters
The hosts express their enthusiasm for the potential of AI, questioning why people are scared of its advancement.

Transcript


I'm going to just say something. I am looking forward to a world where AI dominates. You're looking forward to a world? I'm totally looking forward to it. Like, why not? Like, I want AI to just advance, exponentially grow, like get to the point where it's like literally AGI. What is it called? AGI and what's the next level? And then it's like super intelligence. Yeah, super intelligence. Artificial super intelligence. I'm going to bring it. Like, why are people so scared? Bring it on.

Bring it fucking on. I can't tell if you're joking or not right now. Are you being sarcastic? No, because I don't know if any of us really have a strong point of view on this, but I'm in my mind making the argument, just bring it on. Why not? It could be cool. Yeah, no, it could be great. Right? I don't have to work ever again. We have all the resources in the world. It's going to solve all the problems. All I have to worry about is where to spend my time and energy having fun. What do we have to be afraid about? It's always like we're afraid of stuff

like more than we need to be

just because we don't understand it. Of course, there's far better people to talk about AI than us, but I don't know, I enjoyed our conversation because I think most of us, we're not experts on AI and we're just normal people living our lives, trying to get by. And so I think it's fun to talk about it from that perspective, just to kind of explore those questions and possibilities from kind of like, kind of like almost like a child's perspective, right? There's a lot of like,

you know, hullabaloo about all this stuff, right? But like, what does it actually mean for us? So we do try to use ChatGPT on the show. Like, I mean, it was interesting. It had some mixed results, but there was some impressive elements to it, I think. I thought it was a little passive aggressive. It was being a little passive aggressive to us. I think it was just, it didn't hear what you said, but. Anyway, without further ado, here we go. ♪ Oh my, oh my, oh my ♪

By the way, have you been using... Are we on? Yeah, we're on. Have you been using AI stuff? No. So you don't use it? Not out of any principle. I just haven't really felt the need to use it. But after our last episode with Ash, I feel like I should probably...

get some sort of basic grip on it. Otherwise I'll be left behind, right? What are some of the things that like you think might be useful? I don't know. I mean, I kind of, I don't really use it so much for the show in terms of like writing descriptions and stuff like that. Cause part of me will kind of want to come up with my own words. I just, I don't know. I kind of just enjoy that, like that creative process. So I feel like if I were to use ChatGPT, um,

Something like that. It would kind of like take the creative process away from me, even though it might help and speed things up a little bit. I don't know. I'm not against it. I don't have any, like, I'm not, I don't have any principles against it. I just haven't really felt like a strong need to use it. And I've been a little bit lazy about it. That's all. Do you think that like you are like maybe still thinking in a kind of a, your current paradigm? Like what if this thing is like this uber powerful thing that,

It is, and it will be. I'm pretty certain of that. I mean, I don't think anyone denies that. It's just like, I think it's just we get carried away with a lot of the doomsday and doomsayer scenarios, right? And so, I mean, that's a whole different conversation. But I think in terms of like, is it a really, really powerful tool and will it become more and more powerful and ubiquitous? I think that's definitely the trend, right? I don't think anyone denies that. Yeah. Yeah.

Well, I just wonder if we're not even taking advantage of it. We used to write letters, right? Just as an analogy. We wrote letters by hand. I mean, I don't even think I ever did that. But imagine that at some point in history you did, and then writing letters was one of the most important ways of communicating. And then all of a sudden they came up with, I don't know, the telegraph or...

other types of technologies, right? And then you were just thinking like, well, I'm perfectly fine with writing letters. I don't need this new tool to help me write letters. And then what we realized was it wasn't even about writing letters. It was like this new technology would allow you to do things that you've never even thought about before. So what if like, you know, ChatGPT and these types of tools are like that? We're still thinking... Well, and two...

two polar extremes, right? One is like this doomsday thing, which I don't think so much, just because I'm not that knowledgeable on that stuff, so I don't worry as much about that, right? There's nothing we can do about it. Yeah.

But then I just wonder, like, is there so much more that we could be doing? And are we going to just be left behind? Because we're just thinking so short-sighted. I think that could definitely be the case, right? You know, I knew some people when the smartphone first came out, like when the first iPhone came out. I knew some people that were resistant to...

well, not resistant. They just didn't feel the need to get one. So they stuck to their Nokias and their flip phones and their Motorola Razrs. I know my dad was one of those people. And I think it's the same thing as that. This thing is coming and it's going to completely replace the entire paradigm. So either you get on board or you're going to get left behind. And it's also like we couldn't back then have imagined

all the possibilities and functions and tools that a smartphone gives us today. Right? Like now it's like nowadays it's, it's almost impossible to even think about living without one. And it's the people that are like open-minded that are starting to experiment and do things that will lead them to like bigger discoveries, like the, the next billion dollar businesses. Yeah. So like, um,

And I keep referencing the last podcast with Ash because we were talking about this. But he said something that I thought was pretty profound in the sense that going to your comment about early adopters who are experimenting and who are going to be playing around with the cutting edge stuff. He says that's mostly going to be like it's almost usually like the gig economy workers that are doing that.

Because they're kind of, they don't have like the security net of like working in a large business corporation. They're kind of like out on their own. So they have to be like experimenting with all the new cutting edge stuff just to give them a fighting chance and a leg up so that they can add value somehow. Like these gig economy workers who are just working. So who are like gig economy workers? I don't know, like a lot of freelancers, like just people who are working on their own, right? Yeah. Who,

who work from like just gig to gig. Yeah. So there's no like necessarily any steady income or work coming in. So they like...

So they have to like get familiarized with like the different platforms that connect them to customers or potential customers? Or any kind of new technologies, right? And whatever industry they're digging in, they're usually the ones that are kind of experimenting with all the new cutting edge stuff, right? Because they don't really, they don't have to live by any sort of rules, right? And they have to experiment. Otherwise, they're just on their own.

Yeah, so they need all the help they can get. Also, kind of going back to the bigger picture of what you were talking about, I was like, are we just going to be left behind? It's funny, I was watching this interview of Ray Dalio this morning, actually, and he was asked a question about AI. He basically said, look, historically speaking, it really wasn't long ago that we were in the agricultural revolution. And

Before that, humans were basically doing the work of oxen. Humans were oxen. They were plowing the fields and digging the fields through human labor directly. And the domestication of the ox and everything like that replaced the physical. What he's saying AI is, is it's coming to replace the intellectual.

So we invented tools and domesticated animals to replace physical labor. And that's totally commonplace. And that's how life is now. And he's saying these technologies, it's like AI is coming to replace the intellectual labor. And that's pretty scary. Well-

Okay, let's unpack that a little bit, right? Because- Like what are the ramifications of that? Well, okay. Well, like, yeah. Like let's unpack that. Like, so if we replace physical labor, we don't enjoy physical labor. I mean, you know, we like working out and stuff like that, right? Or we know the benefits of that, but no one wants to spend all their hours like

Like doing. Like working a field. Working the field, right? Of course not. So that was like a massive like breakthrough, especially if there's that pressure on you to do that so that you can like eat. You know what I'm saying? Like that's pretty hardcore. So what is like replacing intellectual labor mean? Like so we basically can just like play games all day.

I don't know. That's the, that's the, I think that's the big question. Like, what does that really look like? And how far is it going to go? And what does that really mean? Like you're saying, I don't know. I don't have the answer, you know, because, you know, like someone like Ray Dalio, he's always referencing like historical cycles, right? He does that with economics and finance all the time. That's like his whole thing. Um,

So with AI, he's basically saying it's not just the agricultural revolution, but if you look at the industrial revolution, all the machinery that came out and manufacturing, like all those technological advances was also replacing human, like physical labor, right? Like what does a sewing machine do? Like it replaces human physical labor. What does the automobile do, right? So it's like, so we can see this whole swath of like time, right?

of it doing precisely that. And it did that incrementally through the decades. And we kind of just accepted it as like, that's just the natural part of life. That's how we evolve. That's how we progress as a society. The question is, is that same process going to happen with AI in our intellectual labor? You know, is it going to happen like at such a faster rate of speed too?

I think that's the question. And I don't know. I'm fascinated. I'm curious. I really don't know. I have no idea. Well, I mean, it's going to replace... I think when we say intellectual labor, that's...

like a very big, like, area of things, right? Like, I mean, even before, like, the, you know, generative AI, like, we already have stuff that's automated. If you want to just say automation, like, tons and tons of stuff is automated. Yeah. Right? Like, every line of computer code out there that's running on its own is like automating something that some human being had to do before. Yeah.

Right. So like in the sense that like, will it entirely be able to do anything that a human, like essentially imagine it's like a science fiction film where there's like a robot and like it can do anything for you. Right. Like anything. So then we don't need to do anything. Well, if we don't need to do anything,

Because there's robots out there doing everything. That's actually a good thing. Because then everyone can just relax. Because there's robots out there building houses and making shoes. Anything that human... Like, they're still working for us, hopefully, right? I mean, I'm not talking about the doomsday scenarios. But essentially, let's say that we do build technology that can do everything humans can do. Okay? Then, like...

No one has to work anymore. Yeah, okay. So let me ask you, right? Let's like bring this down like to like a personal level between me and you. Like let's say we were to live long enough to live in a world where that was the case. What would you do with your time then? Like honestly, like what would you do with your time? I would do like a version of what I do now. Like I would have, I would like... I mean, but most of your time is like taken up working. Like so if you didn't have to work, what would you do?

Well, I actually enjoy a lot of the stuff that I do at work. And a lot of the stuff that I do at work is like learning about new things. So I would still be learning about new things. I would be working out a lot. I'd be taking care of my body. I'd be traveling. I would be probably still doing this show with you, right? And we'd be talking about a whole different set of topics because the world would totally be different. So like it wouldn't...

stop me from like trying to pursue, like I'm still human. So regardless of how smart the computers are, I still have drive. And maybe after like a hundred generations, like we'll just be like fucking little shit bags just lying there, right? Like think about this, right? People who get super duper wealthy and they don't ever have to work a day in their lives like they're billionaires, what do they do? They still fucking go out there and work and do all kinds of shit. Yeah, well, I don't do. Right? So like I think...

If the technology gets to a point where everything can be automated, then there's just a different set of problems. We would still need to worry about natural resources. We would need to worry about would only certain people have access to these things and other people. So the issues of equality would be more and more important. And then whether or not what we're doing is sustainable for the earth.

So it's really like two questions, right? One is like, does everyone get this benefit or only certain people get this benefit? And then let's say everyone does get this benefit, but does the earth just go to shit? And if the earth goes to shit, it doesn't matter if you have generative AI because we're all dead anyways because the computers won't even be able to do anything, you know? So it's interesting, right? Like I just wonder like what the right questions to ask are now or how to even look at it. And we probably won't even know, but it's interesting that

we're in a very interesting period of time where we see something coming. Cause usually like we don't know something is coming. Like we know something's coming right now cause it's kind of coming and we know it's big. We don't know what's going to happen. So it's really exciting to be part of this period of time where we can speculate about this. Cause I don't remember other parts. Like, I don't know, I guess when the internet came around,

But maybe I was just too young to really have a perspective on what that change will be. But now it's like, I know, okay, this is going to be big. So it's interesting to think about it. Yeah. I was probably too young when the internet was really first coming around to have noticed, probably,

what the discussions were at that time. Right. Because like for us, even though we are old enough to have lived in a time before the internet was widely adopted, it just felt like, Oh, like one day there's internet. Okay. Like learn how to use it. Okay. Yeah. It didn't like, I mean, again, we were kids at the time, but I didn't notice like,

It being part of like the zeitgeist of like what everyone is talking about and it's like, and especially the fear. They're like, there wasn't a fear of it, right? There wasn't, at least that I noticed. And again, I didn't, I was too young to really have noticed anything, but it just didn't feel that way. But now, yeah, it feels like everyone's talking about this. All the thought leaders, no matter what industry they're in, just like Ray Dalio in finance and economics, he's being asked about this. World leaders are being asked about this.

CEOs of all the companies are being asked about this. And there's all this discussion, right? So it's like, yeah, it's pretty crazy. It's a window in human history that I think we're a part of right now. That's really interesting. And probably I would just guess, right, my intuition tells me that every generation goes through something like this.

every generation. But that's the argument. I'm not saying I'm saying this, but the argument is it may seem that way, but we actually have never dealt with a technology like this. This is completely different than the internet. Because it's the first technology to really have, potentially have a real intellect on its own. Right. But whether or not that's true, the other point is that it

it seems like every generation goes through this existential kind of moment. Now, those moments don't turn into ultimately something that the world ended, but it feels like every generation had that moment. There was the whole Cold War thing, right? Nuclear weapons, Y2K. People thought the entire world... So this fear... Okay, let's zoom out and say that this...

fear that like the world is going to end or this fear that, um, or panic or anxiety, right. Or doomsday kind of thing. Like even the Bible talks about this shit. Yeah. Yeah. Right. Like the Bible is like, Oh, like, you know, the floods, the floods. Yeah, exactly. So it's like the human emotion, um,

And sort of like sentiment that the world is going to end because something new is coming that's going to end everything. That almost feels like a universal story that's happened like forever. So that emotion itself has always existed, but it hasn't actually necessarily manifested itself. Although we have had plague events.

And we've had war. And so there are examples of things where like just shit just got wiped out. And like, look at the dinosaurs. I mean, those mofos just like, they really got wiped out. Yeah. Right. That asteroid was there. Yeah. So, so, so like that, I mean, so it's interesting to get that context and perspective that like, okay, like, yeah, like,

we're done kind of thing. It's happened before, but, and at the same time, like I do worry this time. Right. Like, but I don't know, is that just, is it more like every generation thinks that way? Or like, do we really have like climate change and generative AI? These two things. It's like, wow, that seems pretty fucking serious. And war.

And war. But yeah, but it doesn't have to be binary like that, right? I think it's easy for us to get too tunnel-visioned on the whole doomsday narrative, but it doesn't have to be a doomsday for there to be serious impacts of AI, right? There's a whole spectrum. Doomsday is just at the very extreme of it, but there's so many...

also like negative scenarios that can happen leading up, like before you even get close to a doomsday scenario. Right. Um, so I mean, I, I think, I think those are probably relevant to think about, you know, how it affects your job, job displacement, job disruption, even kind of like the existential question of, um, you know, kind of like what you were saying before in terms of your own example of what you would do with your time. Um,

I mean, I just think that's a pretty optimistic view of it. And I don't know if I'm that optimistic about it that maybe on an individual level you can be, but from a societal global level, I just don't know how the majority of humanity is going to cope when they have all this time and maybe a loss of purpose. Because I think currently most people probably get purpose through their work.

whatever work they're doing, that's what they identify with. And if you kind of remove that, I don't think everyone is going to find healthy ways of spending their time. I think there's going to be people who are going to feel really lost. I might be one of those people. I don't know. I just don't think everyone's going to be like, yeah, well, I'm really into learning, so I'm going to still learn things. You might be like that. But

man, I just don't think the majority of people are going to be like that. I hope I'm wrong. It just doesn't seem so. I want to talk about this one, okay? But I want to back up a little bit, right? Because there is, let's say we're going down this track, right? And there's this intermediate step where in the beginning, some people will lose their jobs. People are always losing their jobs because they're displaced by newer technology. But what we're saying, I think,

is that this technology is so powerful that eventually everyone's going to lose their job. Or to use a different way of explaining it, eventually we'll have AI so smart that they can just do everything. And so human beings, none of us will need to work. Because there's this intermediate stage, right? Where the AI is not powerful enough, but yet, okay, I might not need...

certain functions in the company. Like you might not even need lawyers anymore just because like the AI can do it. Right. But what we're, what the fear right now is, is that the speed of like AI development is so fast and it's so exponential that it'll basically achieve, like, surpass human-level thinking in every regard. Right. So basically they can do everything. Now, assuming we can harness that and it's working for us,

At some point, then, like, no one's actually going to need to work because, like, whatever we need, like, AI, it's just going to be software running in the background, right? Like, we don't even have to think about it any other way. It's just, like, it's basically code running in the background doing stuff. Now, to the point of, like, will this AI then, like, kill us all?

You know, I mean, that's debatable. But like every time we've introduced things, right, like fire or like, you know, military weapons, like you can use it for bad, but you can also use it for good. And it seems like, you know, ultimately good triumphed over like destruction of the species and all that stuff, right? Okay. So we kind of play this forward and we say that, okay, the technology gets so great and no one actually has to work at this point.

Because we're assuming that it's so exponential. I mean, now it's like in one year, it's just exponentially better. And so once it can start coding and building itself, eventually it's going to be like just, it can do anything. It's like magic. Okay, so then at that point, no one really has to work. We just have to make sure. And we can even ask it to solve for climate change or whatever it is. You can have it do anything. So hopefully we have enough resources.

So then we get to the second step when we say, well, how will people cope with the loss of purpose? Like you said, right? Because if you remove the need for people to fight for survival, then do you like basically remove like their entire purpose? Right. And then it's like, if leisure then becomes like our only pursuit, then what does that, you know, sort of lead to? So I don't know. It's interesting. Like if you take this sort of like,

Like if like Darwinism and survival of the fittest is like completely removed from the equation, then you literally remove the basic law of physics of like how humanity's like and how all living organisms started. And then like you basically reset like the ground rules for everything. Yeah, yeah. I mean, yeah, that's actually an interesting point. Like, I mean, the whole leisure thing, I mean, if everyone...

is at leisure, then leisure kind of loses all meaning, right? Like, just like if everyone's a millionaire, then being a millionaire doesn't mean anything. No, not necessarily because you know what people do is like, then there's just the different flavors of it. Whatever people do, they still like to compete. And then they then derive their sense of achievement or worth or value out of those nuances. You know what I'm saying? So like...

You get a bunch of millionaires together and then they start comparing what watch they're wearing. I don't know. It's weird. If you take survival out of the whole equation, what does that do with our minds? That's really interesting. My assumption is that I think it's human nature for us to always, like you said, compete, but also fight over things. So even if we didn't have to, let's say, compete

at work or something like that, or in our jobs, I think just like humans, we would find something else to fight about. We would find some other trivial thing to fight about for sure. You know, I think that's just the nature of our species. Well, and then like, if you think about like two, like really important forces, like fighting versus collaboration, those are like... Well, both would exist, of course. Yeah. But let's just look like in general, right? Like

We're at least 51% working together versus like 49% trying to kill each other and compete. Right. Because otherwise we wouldn't have built all this stuff and we wouldn't be in a podcast. Well, okay. Let me add one thing to that. Okay. Um,

They actually all tie in together because in order for us to compete, we have to work together with some people, right? Because you're not going to compete on your own necessarily. Yeah, sure. So let's say countries that compete against each other. Well, it requires the people in that country to work together

to have the power to compete with another country in which its citizens are also working together. So competition breeds collaboration and collaboration breeds competition. It's like this cycle. So, okay. So like, let's... It's like the yin and the yang. I see that, I agree with you. There's like a connection between the two, right? It's hard for them to exist without each other, but okay. So what does...

like collaboration and competition look like when you remove survival out of the equation, right? So like, if everyone has sort of an infinite amount of stuff, right, then will people still fight? Because fighting is in our nature and we're always going to find some difference, right? Like even if everyone is given everything, we're still going to bicker about something. And so like, what's going to happen there? And if survival is not an issue, then why do we need to work together anymore?

Like, what's the point, right? So then I just wonder, like, because all of our biology and neurobiology and, like, our personalities and our, you know, the whole blueprint thing, like, of...

you know, working together, but in group, out group, all of that is based on survival. So then when you really remove survival out of the equation, it's just like, I don't know. It's like throwing a wrench. It's like going to the mother system that God built and then just like being like, okay, motherfuckers, mic drop. And just dropping the mic into the machine. It's like, boom. Well, here's a theory. Okay. Maybe if we remove quote unquote survival out of the picture,

We would just create a new paradigm in which we will invent a new game. Yeah, exactly. A new game where survival feels just as real. You know what I'm saying? Bear with me for a second. Take money, for example. Currency. As we read in Yuval Harari's book, Sapiens, it's a myth.

Money is a myth. It's something man-made and it only has value if we all agree that it has value. It's a useful myth. It's a useful myth, for sure. It's not myth with a connotation of it's... Because sometimes when we say it's a myth, it's like it's not true, right? But I'm quoting Yuval. He calls it a myth, right? But just to nuance that word for the listeners, it's like, because when you hear a myth, sometimes you're like, oh, that's a myth. It's not true. No, it's a myth in the sense that it's a story that we will to life. Exactly. Yeah.

So exactly on that point, a story that we will to life. So take, let's say the current paradigm gets wiped out because of AI. I think collectively, because I feel we just need it, it's like in our psychology, that we will just invent a new story and we will bring that to life, right? And it might look very different, right? So it's not, we might not live in a system of

trade and barter and currency exchange for goods and services.

It might just be a whole new paradigm in which it'll also be something invented by us that we bring to life and it will be the new rules of that game, right? And it could just look very different or it might be eerily similar. I don't know what it could be, but I think we would inevitably invent something else just to play because like you said, like a game, we can't stop playing whatever that game is

we'll just continue playing, but we'll just invent new rules. We'll invent a new game and go on. And we'll do something else. Yeah, for sure. So I thought of something while you were kind of sharing that. So mating, like having sex. So basically, even if we have no issues of survival, right? But like social hierarchy, like whether or not like you can find a wife,

like have kids, like have family, have friends, like be in a society, like all of those social aspects

like we're still wired for that. Cause evolution, like, like, cause if generative AI can come in like five or 10 or 15, 20 years, but evolution takes much, much, much longer. So you have this sort of very, you have two sort of things that are not at the same timeframe, which is like, it's like, it's this singularity that happens. Right. And so, um,

It's very interesting. So then like all of those things that we need are still there. So all of those things will be still at play, right? And then who can leverage that AI to be able to do that? So it's like, it's almost like the AI itself,

It's always been like the technology itself is not the most important aspect. It's like how you apply the technology to gain an advantage. And in this case, like there's certain things. And then to your point of creating a new game, let's say I want to go to the moon. Like, let's say I want to do shit that like the AI still can't do. Cause it's not like God, right? It's not like even no matter how, like even in the next 10 years, no matter how fucking powerful those servers get, it's not like they can take me to like freaking the next galaxy. Right.

You know what I'm saying? Well, they could create the technology that could help you get there. But maybe, maybe not. But they can't physically get you there. Yeah, exactly. Right? So there's still new mountains. So human beings are always going to find that next mountain to climb. So I don't know. It's just kind of a mind F, you know? It's definitely a mind F. I mean, and even as you were talking, you touched on it. We haven't even discussed...

the dimension of like acceptance and love. Right. Yeah. Yeah. Cause like, we're talking about like the, what are us humans going to be? And we're talking about like this like existential threat, right? This existential question of, okay, well, what do we do? Who are we? Yeah. You're, you're, you're right. Like, you know, we talk about competition, fighting this need to survive. And yeah,

a huge part of the human experience, you can say, is dominated by this need for acceptance and love. It's in all of us. And, uh, yeah, I don't, I don't, I don't know if there's been so many discussions about that in terms of the AI conversation, right? Like, like we'll still need acceptance and love, and even if AI can give you like

the hottest virtual girlfriend or wife, right? Like, you can't necessarily replicate the real human experience of love. No, but what if you could, dude? Maybe, right? So, what if it got so good? Because, like, we have to play this out in terms of exponential progress. Yeah. Like, that anything is possible. Yeah. Right? So, like, no. So, they invent something. It's indistinguishable. It's like the matrix. You cannot tell. You find someone, right? And, like...

No one could ever be better than my wife, but let's say some other dude finds, and then they, they, they create something that's like a billion times better than his wife. But not your wife. Right. I can't have it. Because I'm actually. AI can never create something better than your wife. I can't. Yeah. Yeah. So Sam Altman, no matter what you do, it's okay. Okay. I'm glad you got your job back. But anyways, so like, what if that happened?

Like, that's pretty crazy. Yeah. Yeah. That's insane. I don't know. So like, it's not even just like the, there's so many existential sort of questions. Like, it's not just like, will AI rise up and kill us all? Right. Will AI like create all kinds of, you know, inequality or something like that?

But it's like, well, it creates all kinds of moral issues. No, yeah, yeah, yeah. Moral issues. Yeah, that's a whole other dimension, right? Yeah. Yeah, who knows, man? I don't know. I think for me, the bottom line is like, okay, forget doomsday scenario, forget that. Like, it doesn't need to go there. It's just the fact that I think us as humans, we can be fucked with in so many different ways. And if something comes along that's truly that powerful and smart and is...

learning about us, like, in what ways is it going to fuck with us, right? Like even unintended ways. Or how are we going to fuck ourselves over by trying to harness this power, right? I just feel like we're so vulnerable in so many ways. It's like, it's like a human playing with an ant colony, right? You can just mess with them.

Well, that's like God versus human beings, right? We are so vulnerable, but like in many of those kind of, like, religions and myths, God chose grace and mercy. Like, I mean, you know, like the whole premise of Christianity is that, right? Is that like human beings are vulnerable and flawed in every possible way, but this, you know, almighty, this omnipotent power has, like,

and compassion and grace and mercy. But as you know, that's a double-edged sword because it's also responsible for a lot of bloodshed. Right. Well, the religious aspect of it. But I'm just saying conceptually, then this omnipotent power also...

paradoxically holds some type of human sort of qualities of compassion and grace and mercy. And then there's that salvation, you know? So it's really like interesting, like this notion of salvation too, like, and like with AI, like what's, what does the notion of salvation mean? You know, with AI. Yeah.

I think I find it, I find it funny that I'm having this conversation with you because I feel like you a couple months ago would not like really be entertaining this conversation. And I totally understand why. Like, I just feel like, because I feel like every time Howie tried to talk about this, we were like, oh, you again. Right? But now like, look at us one-on-one. Totally. Well, I've like, if you want to unpack that a little bit,

I think part of it is just trying to be curious and trying to not take things too seriously. And the fact that a lot of people are talking about, like ChatGPT was officially released, what, like a year ago? So in our consciousness, this stuff has been more mainstream. So it's gone from conspiracy theory to something that is probably...

more on everyone's mind. So you don't use it at all. No, maybe I should. Yeah. But you know me, me with technology, I'm always a latecomer. I'm always a latecomer. You know what we should do right now? I was just curious, right? But yeah, okay. So I'm going to just type in, it's like, hey, I'm Eric from the Honest Drink podcast.

I'm with Justin today. Howie is not here. We are not sure what to talk about on the show. What do you think our listeners would be interested in?

Okay. Is this GPT you're using? Yeah. Okay. Because like, okay, like we're already talking about like this shit's going to fuck us in the beep, you know, and like going to end the world. It's going to cause all these, you know, issues. Right. Let's just see. Let's just ask it. I'm just curious. So I just typed in. It doesn't even know.

Like what our podcast does. Hey, Eric and Justin, how about exploring the latest trends in the beverage industry, understanding underrated drink options, or even diving into the history behind popular drinks? You could even host a guest who's an expert in mixology. Yeah, so this is our, this is like what's going to kill us.

Now, I mean, this is GPT 3.5. There's a wait list apparently for the 4.0. So I haven't been able to upgrade. I have the free version. Last night. Okay. So I don't use this very much, but I think it's like every other skill, right? It's like, you're kind of building it up over time, but I noticed it has a new feature now where you can just talk to it. So like, check this out. Let me press this button here. Right. And you can choose a voice. Right. So like, listen to this, watch this.

Well, I'll turn it all the way up. So what's the game plan? Okay. Hello. I'm really excited about teaming up with you and I'm all set to dive in. So how can I make your life easier? Hi there. I just want to share how thrilled I am to work with you and I can't wait to get started. So what's the game plan?

Was this like the tutorial or something? No, this is like, so now it's, so what I was doing yesterday is there's this mode where you can start talking to it. I was like home alone yesterday. My wife's like out of town. So I was like bored, right? And I was like, I found this feature and I was like, I'm just going to start talking to it.

Right? Like, because if it's that good, then I'm like, I want to like, like if I can talk to like an omnipotent source, I'd be like, hey, like I'm trying to learn about this. Teach me this. Like how awesome could it be where you could talk about something? Tell me a story. Like if it was, basically it was like human level in terms of its conversational abilities and it could do everything and aggregate all that information, right? So I'm going to turn this on again. It's connecting and we could just literally talk to it.

So I'm going to start talking to it. Okay. So, Hey, um, okay. If you want to talk more about experimenting with discussing drinks during your podcast, or if there's anything else you'd like to explore, feel free to share. I mean, we don't want to talk about drinks. Chat about any topics that interest you. Okay. Like you're totally off track. I'm not talking about that. Okay, ChatGPT. So just reset yourself.

I apologize for the confusion. It seems I might have misunderstood your previous messages. How can I assist you or what would you like to discuss regarding your podcast today? Okay, do you know who we are? Yes, you mentioned you're Eric from the Honest Drink podcast accompanied by Justin today while Howie is absent. You were seeking ideas for your podcast content. Okay, don't talk so much. You're quite wordy, okay? Let me do the talking. He has a lot of ums for a robot.

A lot of filler words. Got it. Feel free to lead the conversation. If you need any specific suggestions or help with your podcast, just let me know. We host a podcast. We're based in Shanghai. We talk about everyday life. We talk about self-improvement. We get different guests on the show. We just like to keep it real on the show and have a really good conversation and then just go where the conversation takes us.

What do you think we should talk about with our listeners today? We've been talking about AI before, so we've been talking about you. That sounds cool. Since you're into self-improvement and discussing everyday life, how about exploring the influence of technology, including AI, on daily routines in Shanghai?

Discussing its impact on lifestyle, work, or even cultural changes could make for an interesting conversation. Plus, you can delve into how AI might shape future self-improvement methods or everyday habits. Oh, the VPN died. That's pretty good though, right? It was good, but also it kind of shows you, it gives you both sides. Yeah, it's just a little finicky. Yeah.

That's interesting though, right? Like think about five or 10 years ago, like you couldn't even get systems to speak in a way that sounded like human. Yeah. At least it sounded pretty human. Like at one point I felt like, is that really somebody on the other end?

Because you were being pretty aggressive, abrasive. And so I almost felt bad for it, which goes to say how real it sounded. But at the same time, as it was kind of spitting back its answers, I was just thinking, I mean, it feels like a web search. If you were to search the question that we asked and put it into a search engine,

And then a bunch of articles came out about that. And then you just put a voice that dictated, that read those articles out loud. I mean, that's just what it sounds like. It just sounds like it's reading back and regurgitating articles, information online, but just putting a human voice to it, right? So in that sense, it doesn't feel like it's genuinely thinking about it itself and coming up with its own

kind of unique perspective and thoughts about it. Right. At least that's what it feels like. I don't know. And again, this is just, we're taking a snapshot of what it is right now, today. I think the argument is, because no one's saying it's as smart as a human being, because they call it AGI. AGI is when it gets to human-level intelligence, and superintelligence is when it gets past human-level intelligence. All the experts are already saying, like, we're not there yet. We're not even at AGI yet. Right.

No, we're definitely not. It's not that we're not even. Of course we're not at AGI. Yeah, we're not at AGI. We know because we would know. So the debate isn't like, oh, this thing should be as smart. The thing is they're looking at the core of the technology. And the core of the technology, it has the ability to eventually get there at a very fast rate of speed. And so that's the concern. I mean, it can already copy like crazy, right? Like what you just said about the website, it's way more –

efficient than a website, because one thing with the website is you Google search and then it gives you like 20 websites, and then you have to go find the information. Yeah.

This one, you could be like, you could be so precise and give me advice in 17 words. And it would do it in 17 words. And it has an infallible memory and it has access to everything. So it's way more powerful than pretty much anything. And that's why it's like, it's almost like those sci-fi films that keep feeding it, feeding energy. And it just becomes like a monster. That's what they're doing right now. That's why they need more and more like, you know, servers. And they need more and more, you know, they were saying,

Now that it's scoured like all the written like web, like now they're trying to feed it podcasts, feed it music. They're just trying to feed it like a monster. And the more information it feeds, then it can like get, it can synthesize more. So in this particular case, like this is really useful because even if it's not as good as a human being, it has much more knowledge than a human being.

So even if it can't act perfectly like a human being and respond to you in a way that you feel comfortable, it's pretty good. But then it has one trillion gazillion times more information because it's actually scoured all that stuff, right? Yeah, yeah, yeah. So now if I say, I'm going to ask it now specifically, right? And it's connecting. Yeah.

But, you know, this is no different than like a human being. Like you put them on the spot and then they're just like, ah, you know, I don't know. Right. So like, I'm just like right now it's like my connection. It's just because, you know, the firewall and all that stuff. Right. But the question I wanted to ask it was like, give me one specific, concrete suggestion on mindfulness, like, that can be helpful. Well, I think it's more interesting just to give us a topic to talk about and something that doesn't have to do with AI. And just see, I just felt curious what it spits back.

Okay, so Justin is asking, what's one topic we can talk about on the show today that has nothing to do with AI? Since you're based in Shanghai and discuss everyday life, how about exploring the cultural diversity within the city? Discuss the local cuisine, traditions, festivals, or even share interesting anecdotes from your experiences in Shanghai.

It could be a refreshing break from AI-related discussions while still engaging your audience with captivating content about daily life. So captivating. That actually, you sound like a commercial. Like you just sound so fake. Can you sound more real? I'm glad you liked the idea. Exploring the diverse cultural aspects of Shanghai can be quite fascinating for your listeners.

You just repeated yourself and you sounded really fake. If you ever need more ideas or assistance for the podcast, just reach out. So I think it's just more like the system. See, the thing is, like, if you can feed it all of the...

It's going to come on in the middle of the night and just fuck with you. No. Okay. No, the cool thing is- You have made it an enemy. The cool thing is like it captured all the chat and you can delete it so you can see the transcript. But anyways, if we fed it all of our podcasts, see the problem is it doesn't have any context because the 3.5 version is like only up to date until a certain point. It's not as powerful. But right now you can feed it specific models. So if you're a company, you're an organization, you're a church, you can just feed it all your old info.

Right. So if we fed it all of our podcasts, it would totally like, I mean, it's pretty, that was pretty damn good. Wasn't it? I don't know. I don't know. I just, to me, I just can't get over the feeling that it's, it's more that it's, it's making the interface and usability much stronger and more convenient for us to use. And it can get more specific of like, it can be a more like powerful tool and,

But I just feel like the actual information that it's telling us is still just... Superficial? Yeah, it's still just... Well, not so superficial. It's still just the information that we would be able to look up ourselves on the internet ourselves. You know what I mean? It's not giving us any...

better information. So what would make you feel like this is a technological breakthrough or was closer to AGI? If it gave us information, I was like, oh, damn, I never thought about that. But that is really good. That's really good advice or that's a great topic. Something that actually impresses you, like let's say you were talking to a really smart, articulate person who's really thoughtful, has a lot of world experience, right? And you were to ask them and they were to give you some gems, like that would be great.

If it's just spitting back a generic article that I could probably look up online, yes, granted, it would take me more time and it wouldn't be as convenient. But nonetheless, I can still find some generic articles online about whatever I search and read it back to me. That's not...

Mm-hmm. That's not impressive to me, and that's not very helpful to me. So it just sounds like cookie cutter, and it sounds generic. Yes. Okay. Now, let me push back then. The way that it talks, the way that it phrases things is super duper dynamic, because I've tried it. If you ask the question multiple times, it's not like it just repeats the same thing. It can say the same thing in different ways. So in that

aspect of language, it's already like light years ahead of what we had before. Remember the old text to speech stuff? It was like, hello, welcome to that, right? Like it actually sounds like human and it's easy to listen to. Now imagine that most, almost every piece of information is somehow already captured, right? And like, yeah, there's lots of stuff locked in your head, but like of all the speeches, all the writings, like everything that's captured somehow or is uttered

in a way that someone else can process it. Imagine if like the world could capture all that stuff. So anytime an idea, anytime we talked, like someone was listening and it all got into the system, then this system would get there. Yeah. Like it wouldn't be like uniquely Justin, but it would be so powerful because it would have like everything in it. Yeah. And you would be able to get the best insights, you know, and then like,

I think, I mean, that's just like the first step. So it's pretty crazy like where it's gotten like that. They figured out something like LLMs that can actually even be applied to like graphics. It's just mind blowing. No, no. Yeah. It's the technology itself is impressive. It's just that maybe it got too hyped up for me. Right. Because I've never used, this is literally my first time using ChatGPT. Um,

So, you know, with all the discussions we've had and everything that's like, I think I just probably expected more right now from it. But I mean, yeah, like you're saying, it gets better the more you use it and the more information you feed it, it gets better because it's learning and learning and learning. Right now, it's starting from a very basic level of understanding about what we want and what we're looking for. So, yeah.

Yeah, maybe like, maybe my expectations are a little too high, my standards are a little too high for like a first-time use, right? Like even if it's like a human being, like you gotta feed a baby like

18 years for it to do anything meaningful. Well, yeah. Well, like you said, if we just ask somebody, like an intelligent person that was sitting right here in the studio with us, like, what should we talk about? They'll probably just be like, I don't know, talk about some cultural things in Shanghai and everyday life. It would be like saying pretty much the same generic thing, right? Yeah. And then the other piece is based on what you just saw, if this was...

So this was probably like 1 million times better than what we had before. Right. But imagine it was exponential. Imagine that like you could compress the amount of time and that there was something that's a billion times better than this in just one year. Well, that's, that's the scary shit, right? Like, cause you could kind of see where this is going. Yeah. And again, ChatGPT is just one product

out of like an infinite number of other products. It can, you know, so it's like, it's not just about ChatGPT. ChatGPT is just one thing. Yeah. Yeah. So it's like the applications of this and its exponential learning capabilities. Yeah. It's, it's... you can't really fathom it.

Yeah, I mean, I don't understand the under, like, really the underlying technology, but imagine, but I know that it can, you know, these things can do, like, basic code or whatever, or can even help you correct your code. Imagine you're like, ChatGPT, take yourself and improve yourself, make yourself more advanced. Like, I don't know, man, it gets pretty insane. Well, I think that's, that is part of its programming, right? Yeah.

Is to learn. Yeah. Yeah. So, okay. So now that like, you know, I've taken you into this rabbit hole, like it's ironic that I'm the one doing this, right? It's like- Of all people. I'm actually Howie. Like actually Howie called me earlier today, right? But like, are you going to- Well, like part of me feels like you're doing this intentionally. I'm not. Just to F with Howie. No. Because he canceled last minute and we know this is like his favorite topic. And because he's not here, we're like purposely talking about it now. Yeah.

I just think it's so on everyone's mind right now. It's like the biggest news. Sam Altman gets fired, and then it's literally the headline. Yeah, but maybe you're also in the bubble too. Yes, it's a huge headline, huge headline. Everyone's talking about it. I'm not denying that. But I would think it's not on everybody's mind. Honestly, it wasn't on my mind. When I saw the Sam Altman headline,

At first, I did not even click into it. I didn't have the interest level enough to even click into the headline. I just wasn't interested. I didn't care. I'm not in that world where AI is on the top of my mind every day. I don't think about it that often. I really don't. And I'm not saying that's a good thing. Maybe I should. But I'm definitely not, like, it's not in the front of my mind. It doesn't keep me up at night.

- I'm like trying to think of this image for you. It's like the people, I can't even think of it. It's like, I don't know, like the group of people, like something's happening, like there's a meteor coming, right? Like everyone goes underground and then you're the one that just gets like squashed. - No, no, no, no, see you can't even do that because you don't have a leg to stand on here. Because I remember, this was like probably a couple months ago,

You were saying the exact same thing I'm saying. You're like, I don't, it's like, it's not something like we can't even do anything about it anyway, even if it is this thing. So it's like, it's not something I'm going to bother, like really like, like, you know, worrying over or thinking about. You know, I'm not... you didn't say that exact thing, but basically that's what you were saying. I remember that specifically. Yeah.

You were not really that into AI at all. Hold on, hold on, hold on. Or thinking about it. Hold on, hold on. So you were just the same as me. No, no, let me clarify. No, no, hold on. Let me clarify. Because you said when you saw the whole Sam Altman thing, you didn't even bother clicking in, right?

I've been reading about Sam Altman before 2022. Like I've been, I read his original paper. So I've been interested in the topic just in general, but I wasn't going down the rabbit hole, like, and like being like, okay, like this is how the world's going to end. I really didn't give a shit. But I was like everyone else, like not everyone else. I'm not like an expert by any means. I mean, I'm, I'm a complete naive person on this, but I was trying it over time. Just like,

people would be. And that's very different than like, oh, I saw like the biggest news in this whole area and then I didn't click in, right? So I was just drawing that distinction. I'm not trying to criticize you, but I did want to ask you. Mm-hmm.

So, like, now that we've kind of gone through this, like, are you going to do anything different? Are you still going to crawl back into your little cave? Yeah, I don't know. I just don't have a desire. You're just not interested. I'm just not interested. I mean, until, it's not that I'm not interested in AI. It's just that, like, ChatGPT, for example, I am not interested in using that specific tool.

Like, especially even after today, like it's just, I don't know. It just doesn't seem like, it seems like the time it would take me to, to, to teach it and learn it. Like, I mean, I could just write something up myself and I actually enjoy the process of creative writing anyway. So, so I don't want something necessarily to help me. I don't want to handicap my own, like, like I just feel like I don't use my brain enough as is. I don't want some other tool to decrease my,

the opportunities for me to actually use my own brain, you know? So I, I, I don't feel it, but like, but maybe some other tool comes along that I can't even think of right now. That's powered by AI and I'll love it. You know, it'll be something else. I'm totally open to it. It's just that what I know now, I just don't have a use for even like things like Midjourney, like that can create photos and images and,

Like, what am I using that for right now? I don't really have a need for it, so I'm not using it. I'm going to try one more thing. Okay, I'm going to ask ChatGPT. I'm going to say, Justin says, based on your performance today, he has no interest in using AI tools. No, no, so now you're putting more than one. ChatGPT specifically. Okay, he has no interest in using ChatGPT. He doesn't see any value.

No, not that I don't see any value. It's just not a necessity in my lifestyle right now to use it. It can be very necessary for a lot of people. Okay. Justin says, based on your performance today, he has no interest in using ChatGPT. There's no necessity for him to use it. Can you come up with a sarcastic and witty response to his remark? Or try to change my mind. Ask it to try to change my mind. Can you try to change his mind?

Okay, never mind. It's like, of course, I'd love to help. What specific things does Justin need help with or what goals is he trying to achieve? So he's not like... So what's really interesting... I said he, right? It's not a he. It's like things like being a great salesperson or being convincing. Like these things are... They're like an art. They require...

like something more than just an algorithm. Like, what do you need? Like, you know, one, two, three, four, five, six, seven, eight, nine, 10, right? Well, also, from what I'm learning right now with it, it's really a tool for when you're looking for very specific things, right? You want specific help. Because right now we've been asking it very general questions, and I don't think it's really built to just be

general like that. I think it wants me to know, like, okay, what specifically am I doing? Like, for example, what is my job? What tasks do I need it specifically to help with? And then maybe it can, you know, have some more useful things to say. Okay, let me just ask: can you come up with a witty and sarcastic response to his comment that he won't use you?

Okay. Ah, the honor of being rejected by Justin. I'll just sit here ready to impress whenever he realizes he's missing out on my witty repertoire and profound knowledge. No pressure, Justin. That's not bad. But it does say, I see some of the things you're saying. It's kind of like. It's very surface.

It's surface, but it's also very formulaic. Yeah. Right? So it's very narrow, very rigid. And what we need as human beings... we don't need that automated layer. We need something that has depth to it. It just feels... again, I know nothing about this. This is my first time using ChatGPT. So forgive me, anyone that's really into this and knows how to use this thing better. But it just feels very gimmicky to me right now.

At least in this free version that we're using, right? And maybe we're not the best users of this, too. Maybe we don't really know how to use it the best way, but it just feels a little gimmicky to me. Yeah, it does feel kind of canned. I mean, it's sort of like... maybe it's like a pet, and you need the master trainer to come in. Yeah. Like Howie, he'll come in and he'll make ChatGPT really come to life.

He's probably got, like, ChatGPT, you know, version 8, running on his machines, right? He'll crack his knuckles, and then he'd be like, hola. I don't know. Has AI been something on your mind more and more these days? No, I mean, I just think it's more and more in the news, you know? And I think some of these capabilities are kind of interesting.

Talking about the Sam Altman thing. Yeah, so he's back at OpenAI. Has there been, because I haven't been following it, has there been any light shed on what exactly happened that got him dismissed in the first place?

Yeah, like, I mean, there's been a lot of commentary around that. It's more like theories, right? No one really knows for sure. There are reports of like what generally happened, right? But in terms of like the really detailed conversations, I think that's all, you know, we don't have that yet. Yeah, so here's the original letter from the board of directors and it kind of goes through it.

Here it is: the board stated that Altman's lack of consistent candor in his communications with them hindered their ability to fulfill their responsibilities. The exact details remain undisclosed. Yeah, so it was just quite vague. It was literally like, we're firing you because your lack of candor makes it hard for us to fulfill our responsibilities. Yeah, so when I was in the US, I was following all this stuff. It was just really, really fascinating, right? To...

To read about it. I listened to, you know, Kara Swisher and like Scott Galloway. They do a podcast called Pivot where they cover like technology and current news and lots of stuff. And like, they're quite influential. It just shows you like, like how much research and how much prep you have to do. Right. Because it's like, we're trying to do that right now. Well, yeah, because basically when you do that kind of show, you're basically just, you're like a news outlet almost.

You're covering news, you're giving your takes and commentary on it. You gotta know what's going on. You gotta stay up to date. - Well, it's not staying up to date in that sense. What it is is you've gotta know the details well enough so that as you are then articulating your perspective, you're able to actually describe what happened because as part of your perspective, you have to tell people what happened. I know what the points of view are in my head,

But then when you asked me to explain it to you, I had to go back, because I didn't remember everything, I wasn't able to articulate all the details, right? Like all the names. But I generally know what happened. I feel like you probably know my position on just general news these days. I just don't know how much value it's really adding to my life

to read all the headlines and everything. I noticeably feel how it negatively impacts me whenever I do get into the news too much. And I don't want that kind of mental diet. - I think one thing that maybe helps me is just to zoom out and be like, don't read the news as news.

Like, read the news as looking at patterns: what message are certain people trying to put out there? And don't take it at face value. It's just like, well, what is the narrative out there that people are trying to say? And then just come into it and say, probably most of this is not true. It's almost like you have to prove that it's true to me.

I'll just take this stuff out there. Exactly. But it is hard to not let it pollute your brain too. Even if you have McDonald's sitting everywhere and you don't eat it, you still smell the fries. Right? You know it's there. I think the biggest difference that I've noticed over the years in which how I...

how I feel about the news. I don't see the news for news anymore. And what I mean by that is, I think earlier in my life, we would take a lot of news outlets, publications, and we would put them on a pedestal of credibility, like, oh, that's the news. That's it. You know, this is the information you should be listening to.

They have a whole team of people fact-checking and blah, blah, blah. And this is like the gospel, basically. The way I see the news now, which I feel has been proven to me over and over again, which the way it really is, is that whenever I read an article about anything, I don't view it as, oh, I'm reading the news. I'm reading it as I'm reading what a person is saying.

Because that's really what it is. A person literally wrote that argument. A person is reporting this. It comes down to individuals because these news outlets are not news outlets. They're just made up of individuals working on different stories. And so these are just individuals. They're human beings just like us. They are fallible just like anybody else. They have their own agendas just like anybody else.

They have moral vulnerabilities just like anybody else. They have biases. Yeah, they have lack of sleep.

Maybe they're rushing a deadline. They didn't do the due diligence. They have their way of asking questions. None of it's really thorough. They also have their own assumptions. Yeah, they have their own assumptions. So I don't even take it negatively. They're just doing their job, right? And it's like, so the news to me,

is almost just a list of things that, like, if I'm interested in, then I should go, like, further, right? Because if I didn't read the news, then all I would see is the stuff around me. Like, I would have a very limited view of everything. I would just see, like, the roads that I walk on, and, you know, I would only see such a narrow slice of the world. So, like,

the news itself is just one other source of observing the world. Yeah. But it's not an objective observing, right? It's not like I'm going to a place and seeing something myself; it's coming through a certain lens. But because the people that are reporting the news, in general, are trying to uncover new things, the lens they use sometimes is able

to surface like more things than if you were just to read a specific book on a specific topic, right? - Yeah, yeah, oh for sure. - So then it's just like, it just helps you like spotlight things. But I totally sort of resonate with your point. It's like when I was growing up and it's like, okay, ABC News, whatever news, and I was like, oh shit, this is like authoritative. Like these people have done all the research.

Like, on one hand, I held them with some type of esteem and respect. Yeah, right, like the Tom Brokaws. Yeah, this is like the authority on information, right? Like, this is their expertise, this is their profession, they're real journalists, you know. And there's still people like that out there, right? And then I think, like, the aura of the brand, right? It was like, oh, ABC News, every night at, like, whatever, 5:30, they would come on. And then now it's, you know, the internet and just, like, disinformation and Facebook and

all this stuff, but information is now everywhere, right? So what you need to do is just develop your own filters and then develop your own trust. Instead of, like, going to the authoritative source, you now actually just need to

be the person vetting all this stuff yourself. There's no one to vet it for you. No one. Exactly. Yeah. I mean, the way I see it is if I'm going to read a news headline or a news article about something, I don't feel, I don't treat it any differently than if someone on the street were to tell me something. Like, I won't necessarily deny it, but I'll be like, okay, I'll listen. And I'll be like, okay, that's what you're saying.

Okay. And like what you said, if I'm interested in a topic, I'll start doing my own digging. Right. Yeah. And I think, you know, we're kind of on this pendulum, but we used to maybe take more things for granted, right? We'd be like, oh, this is what they reported, this is what happened. And then, looking back, we were naive, right? Because all along, even back then, there was that narrative, right? And I don't view it as 100% malicious. I just view it as

like partly just a function of like what side you're on. Well, it's a mixed bag of things. Well, it's what side you're on, right? If you're in the US, you're on the US's side and you're going to report things a certain way. If you're in like Russia, you know what I'm saying? It's inevitable. You're part of that group of people. Like there's not going to be that many people in Russia that are going to be writing like pro-US kind of stuff because they're not going to be around for that long. And in the US, there's not that many people that are going to be doing pro-Russia stuff

because they're not going to get a lot of eyeballs from that because there's not that many people that will support that. From a macro perspective, absolutely. But I think when you zoom in, it's a mixed bag. When you're talking about real people, individuals that work in different news publications, when you're talking about individual journalists...

It's a mixed bag of all types of things. You're going to have all different sorts of people with all different sorts of agendas. You're going to have credible journalists who are out there really trying to tell you the truth and give the facts and live by the journalistic code. Certainly they still exist. But you also certainly have people who are working as journalists that are actively lying at the same time.

And so it's like you – and in between, you have a whole spectrum of different things and, like you said, different perspectives, different backgrounds, different things that happened in their life that shaped the way they view the world. And because we're all human, we're all individual. That's what I'm saying. So it's a messy mixed bag of things. It's not –

one total model. I mean, it's getting harder and harder because there are more and more sources, and then people that don't belong to or aren't affiliated with organizations can have powerful voices, because the power structure is different. But then it gets me thinking about AI again, right? And it's just like, we've been talking about a lot of the downsides, right? But what if, like,

Like every other technology, you can always like think about, well, how can we use it for good? And what if like AI could actually go out there and filter out all the bullshit? Kind of like a fact checker, but what if it could be like a credibility sort of validator? So you could like basically look at all the patterns in the data and the logic, right? And then it could basically say, oh, like, well, this one actually has lots of contradictions. So it's more likely to be like inaccurate. And this one is like, oh, okay.

It's more complicated than fact-checking. The fact-checking, it has this idea of like, okay, well, let's go to the source. Like, it has its value. But I'm talking about more like analytical approach. Almost like a giant algorithm. Yes. That's taking everything that's going on. Yes. And looking for the patterns. Like the weather. Like, oh, is this true? No, but the weather that day was this and the wind patterns. And so probably the missile couldn't have done this. Right? So it's just like,

What if there were like entire groups of people just dedicated to thinking about how like AGI and generative AI could actually make the world better? Like what if we just went out there and be like climate change, all these things. Like let's just start like instead of fucking worrying about all the negative shit, let's just start taking the most potentially powerful weapon that's ever been created. If it indeed holds up to that potential.

Like, to what people are touting it as. Okay. And now let's apply it to every fucking good thing possible. No, when you race cars or race motorcycles or something like that, there's this teaching that says, basically, you're going to go where your eyes are. The vehicle, even if you're on a bike,

You go where your eyes are looking. So they teach you how to see the road, how to see the track in front of you, how to see the next turn, where your eyes should be focusing on the turn. You're not looking like 10 feet right in front of the car. No, you're looking like several yards far out towards the horizon. Yeah.

Where you ultimately want the car to end up is where your eyes should be focusing. So you're not really looking in front of the car. You're looking at where you eventually want to be. And then your body just naturally takes the vehicle over there. I mean, this applies to sports and all sorts of things. I bring that up because I think you're right. If we keep focusing on the negative concerns of AI,

And we're focusing our eyes towards that worry. Maybe that's where we'll end up, even though we don't want to be there, but our eyes are so focused on that. And if we were to focus our attention more on maximizing the positive possibilities of AI, and that's all we talked about, maybe that is more beneficial and useful in helping us get there eventually.

First of all, I love the metaphor you used about the racing. And it's almost like, you know, like your mind, you need your mind to be looking ahead of like what you're trying to accomplish. And then...

and then your body sort of like instinctively follows because you can't really control your body. And so when I think about my driving and like, especially at nighttime where you can't see that well, I realize that I have to compensate. I have to do that. I have to really look hard. And like, you know, there's some roads, there's like no lights. And what I find is that if I'm just looking at what's right in front of me, and then there's like something over there, I can't even react in time. I've got to look further out, right? And so I think that's such a powerful kind of,

It's a useful way of thinking, let's say, of everything in general, right? It's like you got to know where you're going, what's the outcome that I'm looking for, and then all of your micro actions in the moment will then follow that. And that's the intention. And so if you've got that intention really clear, then these things will just, they'll follow in line, right? I also think that with AI, it's in addition to just thinking

you know, trying to get the best out of it or like thinking of the positive outcomes. I also think like, instead of just looking at AI, it's like, it's actually going back to all of the biggest problems we face in humanity, hunger, poverty, inequality, you know, injustice, like climate change.

Right. And actually not starting with AI, not starting with what the technology can do, but actually starting with where we want to go on the road, right? And be like, okay, I want to solve this, I want to solve this, I want to solve this. And now let's go look at all the possible solutions, of which AI is one. I think we're going backwards. Yeah. We're starting with something that is just a tool, an application.

And we're getting lost in it somehow. And like what we need to do is like actually step back and be like, okay, here's what we're really after in humanity. Here's what we really care about. Here's our purpose, right? Here's the problem. Okay, what can AI do for us? And then actually guide AI to solve that, right? Just like finding the path on the road. Mm-hmm.

Yeah, fixating on that, right? Yeah. Like, you ever been on a bicycle and you, like, you ran into something you didn't want to run into? Like, it's because, like, your eyes were so fixated on something and then you couldn't take your eyes off, so your bike goes there, right? Like, it's the same thing. We get so fixated on it. And it's just, like, because it's, like, I guess the most...

interesting thing to talk about is to imagine the doomsday scenarios because they're just more interesting. Yeah. And then you just crash right into it. Yeah. And then that's where you end up going, even though you didn't intend to go there. At least you need to do both, right? And I don't want to be like, oh, no, we should just ignore all that stuff because there are implications if you make it too powerful, right? So yeah, we should be thinking about that. But

Like, are we thinking enough about, like, what we really want to apply this technology to rather than just myopically focusing on the technology itself? Yeah. And probably, like, the people who are really in the industry and the leaders in the industry, I mean, that's probably...

I mean, maybe besides profits, that's probably also what they're thinking. They're probably thinking about, you know, what can we really do to do good, right? And that's what I'm hoping these leaders are thinking. And I think for us, just, you know, laymen who are just reading the news and everyday people, we're probably more captivated with the doomsday headlines, right? Well, yeah.

And I think it's the people that are reporting it that are actually causing that. They're also focusing on a point in the road, and we're just following them as well. So really, I think it's being generated by that. - Well, because it's the whole sensationalism of media. It's like they want to get the clicks. What's going to get a better click? - So that car analogy also applies to the media. So the media is actually setting where your eyes are. - The media is the front view window of the car.

Right? Yeah. It's giving us what we're seeing. Yeah. It's our portal to see outside the world, outside of just our immediate radius of life. Right? Yeah. So they're the windshield, basically, of the car, of the vehicle. They determine what you see. Yeah. What you're just following. Yeah.

Yeah. Just imagine if the windshield of the car was opaque and it's augmented reality and camera sensors in front of the car that would show you what the road is. They are controlling those sensors. They are controlling those cameras. Totally. Totally. Yeah. But again, it's like these technologies, we get so focused on the technologies themselves, which is fine. But at the end of the day, they're all just like reflections of

of kind of what you were saying, of just our human behaviors, right? All they're doing is really just reflecting and, at times, amplifying, right?

our natural human tendencies anyway? Like, where does the fear come from with a lot of these things, right? Not so much the technology itself, it's what we might do with the technology or how we might use it or how it might affect us as humans, whether it's an existential threat, psychological. It's how we cope with these things. So it's more about, like, actually just the humanity and our human reaction, right?

And all those human tendencies that are already within us that these technologies kind of shine a light on, amplify or bring out. But it's really not the technology. It's just like, it's all just our human baggage, right? Well, there's a huge human baggage piece. Yeah. And I think like,

it's just like we keep building a more and more powerful weapon. It's like, imagine, you know, first you have a knife, and then you make a gun, and then you make a machine gun, and then you make a freaking howitzer, and then you make a tank. And then, like, there is this drive

within like human beings where we want to keep pushing, pushing, pushing, pushing. We're making this more and more and more powerful. But sometimes we need to just stop and pause and be like, well, why am I actually doing this? What's this actually going to do, right? Like, why am I building this ultra powerful weapon to like, just for the sake of building the weapon? No, let's actually...

pause for a moment and say, like, what can I positively do with this thing, or what problems need solving? Well, wasn't there that petition a while ago that a lot of these top thinkers in the industry all signed, basically saying that? They're like, we need to pause this. Put guardrails on it. Yeah, put guardrails on it first. And it's like, why are we pushing this so far? Like, what's the purpose? Well, that's a whole nother dimension of the conversation: the competition element of it. Because now it's like,

It's like people staking land in the West, like the pioneers of America, right? Like the Western territories. Everyone's running out there. It's like a free-for-all. So there's all this competition, not only within the industry, but within countries. There's national aims to this as well. So this is a competition. So people are pushing the envelope and keep on developing this.

In a way, it's driven just by the competition itself, which is also, you know, of course, a dangerous thing. We've seen how that goes with, you know, when it comes to, like, weapons and stuff, right? In your example. Well, and then, like, the motive there, of course, is, like, there's...

there's other incentives behind it, right? Yeah, we've seen Oppenheimer. Yeah, they're not just trying to maintain the lead for the sake of maintaining the lead, right? They're trying to maintain the lead so that they can hold on to power or money or whatever it is.

And, you know, then it becomes really dangerous when you're not thinking about why we're doing all this stuff beyond just that. Yeah. I mean, I think, you know, for me and you, all we can do is pray and just try to keep up. And maybe I should be using AI more, right? I shouldn't be just so, like, not in it. I should probably learn; these tools are coming. I can only hope.

Yeah, like I just, I found that like, you know, we can all find our balance. So like I need to know and focus on like what really matters in my life, right? And I, and like the other day I was just thinking, I think about this stuff and like, you know, obviously like the same patterns come up over and over and over. Like it's nothing new, but I'm like health, family, growth, right?

This is something I think about all the time. But then I came up with like just, you know, very simple, like that particular version of words coming together was just three words. But this is something that I've been thinking about for a long time. It's come up on the show all the time. Right. But I'm like, yeah, okay. Yeah. It's really just about these three things, health, family and growth. Right. And that's like my North Star.

That's like almost the simplest version that I could think of. And I know how important health is, so health comes even before family. My mom continually reminds me: Eric, you know, even family is not the most important thing. It's your health that's the most important, because without your health, you have nothing, right? So health, family, and growth.

Because the family piece, if you prioritize your own growth and your own career and success and all that stuff too much and you compromise your family, then again, you end up with nothing.

Right. And then finally, growth. I mean, we need nutrition and nutrients to feed ourselves, to achieve things. That's just kind of our human nature, right? So then I try to center myself with that a little bit. And then I say, okay, with this ChatGPT thing, it's like, yeah, oh my God, I'm going to fall behind if I'm not using this. And there's all these fucking apps and

note-taking, and I'm just like, okay, there's a lot of noise out there, right? Does this really support health, family, growth? Well, maybe, maybe not. So I just stay slightly plugged in. So I did download it: like a year ago, I downloaded ChatGPT. At first I was like, oh, I don't know, that's kind of weird, I need a VPN. But now I know how to turn it on and use it. If you needed me to do something, you know,

like, I could do it. It's like cooking scrambled eggs. Like, I can do it, right? But I don't try to go too deep and download all the apps. I just do a little bit every day, try to, you know, keep up with the conversation, read the news, right? And so I'm not in a full, super active mode, but I'm not totally passive either. Yeah. And so I'm just giving it a small amount of my bandwidth. Very, very small. Um,

So it's almost like an investment. Like, I'm putting, you know, 1% of my portfolio into this thing. And that just gives me an opportunity to play a little bit, understand it a little bit, and then find some use cases out of it, right?

And that's what I do right now. And then, you know, sometimes just the headlines are kind of interesting, right? When you read about it, you're like, wow, why does everybody worship Sam Altman that much? Because literally 90% of the people at that company, and that company has some of the smartest people in the world, were literally like, we're walking out. And there are really good people in that company, right? A former colleague of mine

like moved over to that company earlier this year to do stuff, right? And like, this person's really smart. So I was like, it's really interesting. So there's like this whole credibility angle and stuff. So this is a pretty big thing.

Um, the internet was super hyped, actually, if you remember, right? All the dot-com bust and all that stuff. And so, now that we have a little bit more wisdom, we're a little bit older, we can just see these patterns in history, like maybe Dalio would, you know. And at the same time, certain new paradigms take hold, and it's better to just jump on board.

I wasn't the first smartphone user. My first smartphone was an iPhone 4, I think. So it's never too late. But if the barrier to entry is just downloading the app, it's very low-hanging fruit, then... But I know a lot of people that use it, and it helps them just take care of little things here and there. Yeah. Maybe I'll try to get into it. Probably should. Okay. All right. All right. Well...

Put this one into the vault. Into the vault. Another step closer to being full one-on-one friends, me and you. Yeah. Cheers. All right. I'm Justin. And I'm ChatGPT. Be good. Be well. Peace.