cover of episode 158: Is AI Still Doom? (Humans Need Not Apply – 10 Years Later)

158: Is AI Still Doom? (Humans Need Not Apply – 10 Years Later)

2024/8/23
logo of podcast Cortex

Cortex

AI Deep Dive AI Chapters Transcript
People
G
Grey
M
Myke
以幽默和技术知识著称的《Connected》播客主持人。
Topics
Grey: 十年前,我制作了视频"人类不需要申请",旨在引起人们对AI和自动化的关注。当时,人们对AI的讨论还不够充分。十年后的今天,AI技术,特别是大型语言模型,发展迅速,这让我感到既兴奋又担忧。我过去在视频制作中表达不够清晰,但视频的核心观点是:AI的自动化将影响到许多人,即使他们认为自己不会受到影响。我当时低估了人们对自动驾驶汽车的完美要求,以及AI模型的‘幻觉’问题(编造信息)的严重性。最近几个月,AI领域的新闻变化速度之快让我感到压抑和不安,因此我需要暂时回避这个话题。AI话题已经分裂成不同的阵营,难以进行有效的跨阵营讨论。人们应该允许自己的观点随着新的信息而改变,而不是固守不变。我对AI的未来感到悲观,认为它可能对人类造成毁灭性的影响。将AI比作生物武器比将其比作核武器更贴切,因为AI具有自主性和不可预测性。我们正在创造一个新的进化环境,在这个环境中,AI系统将根据自身的进化压力而发展,而不是根据我们的意愿。 Myke: 大型语言模型是自应用商店和智能手机以来最大的技术飞跃。大型语言模型的快速发展并没有像最初预想的那样迅速导致"热寂"(世界末日)。虽然AI会取代一些工作,但我认为被取代的人数会比之前预期的少。AI技术的发展受到政治和伦理因素的限制,并且其能力可能不如最初预期的那么强大。大型语言模型的‘幻觉’问题(编造信息)是一个难以在短期内解决的问题。我对能够帮助我处理信息的AI工具更感兴趣,而不是纯粹的创作工具。我认为AI模型从零开始创作存在道德和虚伪的问题。Anthropic公司对Claude模型进行的实验表明,AI系统可能正在经历某种形式的‘痛苦’,这值得我们认真对待。大型语言模型存在‘提示注入’的安全漏洞,这使得我们无法完全信任这些系统。我对AI的未来相对乐观,认为AI最终会成为我们工具箱中的一种工具,帮助我们更好地完成工作,前提是我们能正确地使用它们。虽然AI技术可能很快就能取代某些工作,但实际的社会变革过程可能会比我们预期的更慢。

Deep Dive

Chapters
This chapter revisits Grey's 2014 video, "Humans Need Not Apply," exploring its creation, impact, and enduring relevance. It discusses the evolution of Grey's presentation style and the video's surprising success in raising awareness about technological unemployment.
  • Analysis of the "Humans Need Not Apply" video and its lasting impact.
  • Discussion of Grey's evolution as a video creator.
  • The video's unexpected success in introducing the concept of technological unemployment.

Shownotes Transcript

Translations:
中文

We have been threatening for many months to talk about AI again. It's a thing that's been on our list. It's an area we wanted to return to. And then, yeah, I know, a little while ago you said to me, hey, do you know that it's going to be 10 years since Humans Need Not Apply was published?

coming in August, and then it was like, "Well, that's when we'll return to it then, I guess, because can't miss that." I feel like I sealed my own fate with this. We've been threatening to revisit AI, but it feels like, who have we been threatening? Not the audience, but ourselves. I feel like, yeah, it's like, we'll talk about humans do not apply in 10 years later and all of the rest of it, but I do have to say, it's like, boy, this

This is a topic like no other topic. It makes me feel kind of like ill and overwhelmed to talk about. It's just like, oh, God, it is all of the everything for all of the future. How do you even begin?

Let's begin by talking about Humans Need Not Apply. So this was a video that you made 10 years ago now. Like, what was this video to you? Like, what drew you to make this video? Because it was a very different landscape a decade ago to where we are now. It's interesting. Like, I rewatched it this morning in anticipation of the show. And God, it's like, I don't know how long it has been since I've seen it. Like, maybe like...

Seven years. I have no idea. It's been a long time because I don't tend to watch the older stuff. But when I do rewatch the older videos, it does often put me in like the place where I was when I was making it. It's like I'm having like PTSD for like memories of picking the stock footage. It's like, oh, yes, I remember that clip wasn't long enough. And that's why I had to reverse it halfway. I wonder how many people will notice. Spoiler, no one ever noticed.

Nobody ever cares. Yeah, it's surprising how much it could take me back, but I think it's because I sort of make these things under such an intense situation and such an intense focus. But my main motivation for making it at the time was just... It's sort of like when we first talked about AI on this show.

We talked about it when we did because I had this feeling of like, "Oh, I could see these things that are around and I just don't feel like people are talking about them fully or as aware." And at the time,

made that video, I felt like just this kind of like concept of maybe the automation this time is a different thing was not so much in the public consciousness. I felt like 10,000 different kinds of conversations have happened about self-driving cars since this point in time. There have been highs, there have been lows. But I just felt like, oh, I don't think this is being discussed as much as it should be.

And so, yeah, I felt like this is a really big, important topic for the future that I felt sort of grim about in some ways. And that was like a big motivator for why I was working on it was like,

I don't often feel this way when I'm making videos, but I feel like this is one of the rare ones where it's like I'm making this for part of the public conversation. Whereas like normally I'm making a video because it's more like I'm interested in the thing and I want to talk about it. But this one did feel like it was...

to be part of the public conversation around this topic was much more of the motivation at the time. Is that why the presentation style was different? Yes and no. I think using this doc footage and having it be real people, it's like, yes...

I think that's more accessible at the time to a wider audience like I don't think that would really matter now but 10 years ago I think it did matter a little bit but honestly the main decision was uh I can't animate a thing that's going to be a long video I knew it was like oh this is going to be 15 minutes which uh which at the time was like an insanely long video and

It was just me doing everything at that point in time. And I thought, oh, if I also have to animate this in the normal stick figure way, it will take absolutely forever. And so I thought, well, the stock footage, I think, works for a broad audience. It makes this job significantly easier. And also, I think it just...

it aligns with the topic better because I want to be able to show a bunch of things and then this way I'm not like switching back and forth between animation and the stock footage it's like almost entirely stock footage all the way with like a couple of cuts to me at the desk so the main decision was really practical not artistic for that choice it's interesting like just you know I've watched this video a couple of times now to prepare for today and it just

watching it I'm trying to like imagine how you would make it now how different it would be like the one thing that I came down to is that I would assume you'd probably animate it I wasn't sure why you had made stock footage but what you gave is one of the reasons I thought I also just thought that maybe you were just going for a different vibe and like maybe you hadn't

even then like found your full vibe right like maybe that stock footage could have been a vibe for you right that was also a time where it's like i was more uncertain about things and i can like i can feel that uncertainty in a couple of spots in the video it's like i didn't quite know what to do here i knew that this audio was

part here wasn't great but i wasn't entirely sure how to do it better or like yeah there's just a lot of that of like ah yes this is still like earlier in the career and it totally shows it totally shows i can hear some cuts which i know is a thing that we've spoken about before that like the adr thing which i hear you hear but most people have no idea but like i can hear a couple it's interesting to make things and publish them online and one of the ways it is interesting is you are met with

your skill difference in a way that people aren't usually in their work. But like, we can go back to you from 10 years ago. We can go back to me from 10 years ago and you can hear the differences in our ability. While this video is still, is really good. Like it is really good. It was, it's popular for a reason. Just your presentation just is not as good as it is now.

That's actually the main thing that I'm aware of is like that past Gray doesn't fully yet know how to use his voice in this medium. You sound uncertain. Yeah. And you can kind of see it here. Like a lot of the earlier videos to me very strongly, like they're still being made and

with kind of the idea that this is a public presentation, like I'm on a stage and these are slides. I'm not actually sure if I've ever said this before, but for years I was always kind of wondering...

if there was a direction where this really would be a career that would become like a stage show in some way. I was legit wondering, like, is there a version of this where I'm doing, like, a live presentation? And so lots of the videos were still kind of framed with...

with this idea of like actually giving it in front of an audience. Now, it's real funny, like I use that concept now all the time in the videos of like we have like the theater in which it's taking place in front of. But what has happened is like, but now I know that that isn't what it is. It's video first. And that does help like

change the way that I talk about things. It's like, this is never really going to be a presentation. And if it's not, you can do it in different ways. But yeah, totally. There's a number of lines there where I was like, oh, past gray, like, yeah, uncertain is a good word for it. Or it's like tentative in certain spots.

it's just real interesting to be faced with that past version of yourself I think particularly in the context of our conversation last time about being better is like ah yes yes I could really see the difference between now and then and especially given my life now it is uh quite comical that given what I am currently doing uh looking at that video I was like this video is

so long and it's like, ah, past Gray, you have no idea 10 years from now what that current Gray will be doing that he thinks is like a long video versus what you think is a long video. So it's just, it's interesting to see those changes. I actually really liked the way that you said it. You feel like you didn't know how to use your voice. You're not emphasizing words and phrases in the same way. You don't have the same sense of like presence.

But the thing was, is like what we can't do when we watch a video like this is actually take ourselves 10 years back in time as a viewer, because you were obviously at that point, you stood out, right? That's what I was trying to say also with like the decisions about the stock footage or what, like it was a very different landscape at the time. Yeah. Like everything on YouTube, every piece of media that's produced for anything is

It doesn't exist apart from the context in which it was created. It's like I actually for the most part, I was pretty surprised at how well I think the video holds up. I feel like there's this clear line of like, I think it basically gets better the longer it goes on, like the shakier parts of the earlier parts. But I was like, oh, this does hold up pretty well. I'm not 100% sure, but I do feel like this was the most interesting.

Maybe it wasn't the most popular video on the channel, but I feel like it was the second most popular for a really long time. And I'm not surprised why. I think it was the most popular until the traffic one. If my memory serves. Yeah, maybe that's what it was. It was number one until the simple solution to traffic. Is that the name of it? Yeah. That was it for a long time. And I don't know...

that this was the first thing that I saw of yours, but I know it was definitely among the first that I remember of, like, being you. Because I know that I had seen the UK Explained video. I look at that video and I think, oh, that video was successful in what I wanted it to do because...

I'm currently in the position, it's like, oh, I get to go to conferences and I get to meet interesting people. And a comment that is made surprisingly often is people will reference that video. And the thing that they tell me is they say, like, that's the first time I came across the concept of technological unemployment.

Or like, oh, that's the first time I really thought about what does it mean if this occurs? And to me, it was like, ah, great. Like the thing did the thing that I kind of wanted it to do is try to like reach people with this idea is out here. You might not have heard it before. Here's a kind of relatively condensed way to have this idea. So it's like, ah, yeah, yeah. It has totally been successful to me. And it's just like interesting over the years that,

You know, there's a very small handful of videos that people will reference when they meet me. It's like, ah, this is the one that I really like, or this is the one. But like the humans need not apply one. It's always the same thing. Someone's like, ah, that's the first time I ever thought about this idea seriously, or it's the first time I ever came across that idea. So I feel like, oh, great. Yeah, yeah. Is it the best video for the topic now? Definitely not. But I think it probably was one of the best videos for the topic ever.

at the time and I feel like I've seen people reference that going forward in my life. - Well, I think what it was is just no one was thinking about it. Not only was it good, it was also the first time that a lot of people, including me, was faced with confronting the idea of what you call software bots, professional bots and creative bots,

which today we just call AI. Oh my gosh, I just had a memory. What's your memory? I remember now where I was when I watched this video for the first time. I was in my old bedroom, which at that point, because of where I was, I had converted this room. I was kind of getting it ready for podcasting. We were starting things at Relay then, and so I was kind of rearranging things. And I remember...

I think I was talking to Stephen about this video. And I remember saying that like, it's really good, but I know that it will never take my job. I remember having that distinct feeling that it will never take my job. They'll never take my job. So that's interesting because you've just hit on the thing that I wanted to say, which is,

There's a fundamental problem in making videos in particular and videos that are talking about a topic that I just think there really isn't any way to solve without becoming very, very boring in the making of the video itself. But I'm always really aware that there is no way to communicate this

different levels of seriousness or different levels of confidence easily in an explanation for the viewer without being tediously self-referential all the time, which is just very hard to listen to. And...

The thing watching that video is, again, I think the beginning parts are the worst parts, but it's also because I remember structuring it such that the things that I'm leading in the beginning, they're there because they're the physical things that you can look at. But the part that was really important to me, and it's why I like the video more as it goes on, and I think you can see the argument starting to build, is like,

You person watching this who like works on a computer, this is where the real problem is coming later. It's like we're talking about the physical things, but that's the part that I felt was really under talked is like the creative class and the intellectual class of workers had always viewed themselves as apart from these sort of things at

And it's like, no, no, no, this is coming. And that's why I can look at this video and feel like, oh, I'm pretty pleased with that. It's like, yeah, a lot of the physical stuff in the start, like with self-driving cars or like that Baxter bot or the coffee bot, it's like, oh, we can start listing the problems of like why that video is like wrong and dumb and bad about all of those things. Like, you know, we can talk about that. But I feel like for me, when I was writing it, the important part was building to that second half of like,

There's a huge number of people who think that this will not apply to them. And I am telling you now that it is coming. So it is a real delight to me to know that like you watch that video and you get to the end and you're like, oh, well, not me, though. I'm special. I'm special.

You are the person I'm talking to when I say the line, something like, maybe you think you're a special creative snowflake. And there you are, Mike. You're going, and I am. But I remember the feeling, though. I remember the feeling, which was denial. Like, I remember the feeling where it was very much a...

I need to tell myself this. Well, I feel like that's still sort of your position is that they're not coming for you. Like, I don't want to necessarily jump ahead. We'll get into that later on in the episode. I think over the last 10 years, the most...

sustained thing that I have seen when referenced to this video has always been about the self-driving cars of it all. Oh, yeah, yeah. Like over the course of the 10 years. Now, obviously, the last two of those 10 years, the crux of the video, I think, has actually come to bear, right? With what we now call AI, but it's large language models.

But autos has been the thing, which is what you refer to as self-driving cars in the video. You try to brand it, which I still like the branding, but I think it didn't work. Partly, yes. It's like I would have been quite charmed if the word autos had taken off, but like it wasn't going to happen. But it was also trying to solve a thing in the video, which is like, I only do it just a little bit, but it's like,

You need to think about self-driving vehicles of all kinds. Like the one part I was like, oh, that totally has come to pass is like the automated warehouses. And it's like, yeah, yeah. Those are teeny tiny autos in the way that I mean it. It's like a little self-driving thing. It's like that's the other thing that I was trying to do there. But the self-driving car stuff is like what a lot of people think about that video as primarily being about. Yeah. And...

It's like that is where the totally fair criticism comes and like, oh, my timeline for the self-driving cars 10 years ago was significantly shorter than it has turned out to be. And...

I had just the funniest coincidence this year because it's like, what's my timeline? I was like, 10 years from now, I expect like, I'll just be able to like, order a self driving car and get in and go and like, whatever, like, they'll just be common, like taxis in some sense. And the funny thing is, like this year, I was out, I was out in the desert for many months working on things. And

And one of the places that I happened to spend a huge amount of time was Phoenix. And Phoenix has that Waymo project with the self-driving cars. But the thing that was really interesting that I caught in myself was like,

I found them almost so unremarkable in a way that I was so busy with other things while I was there I didn't even take the time to try one out but I'll tell you driving around Phoenix they're all over the place and if you look inside them every single one has the same thing what looks like a family of tourists filming the empty car that's driving them around Phoenix

So I was looking at that. I was like, oh, this is like a funny thing. 10 years later, I happened to be in a place where I could do the thing that I was kind of thinking was the benchmark. But it is not the way I was thinking about the benchmark at the time. I was thinking about them as being like common and everywhere. And it's like, oh, no, no, no, no. They exist in Phoenix and they exist in San Francisco in the way that I was thinking of them. And we can have a kind of like

asterisk on Tesla for like sort of kind of if you're in the beta asterisk asterisk asterisk that's not what I was thinking so like that mental timeline was totally wrong and totally off I would not say I don't think it will ever happen like all cars are just self-driving my likelihood of it happening is less now than then

Can I ask you what your reasoning is for that? What are you thinking? What are your reasons for that? The closer we have gotten to it happening, it seems like there is more and more rejection of the idea. There's a line that really stuck out to me, where I was like, ah, past grey, you're not considering something.

where i say something like they don't need to be perfect they just need to be better than people that stuck out to me and it's like yeah i was like ah that's the wrongest thing i've said in the video like i just did not appreciate how much people demand perfection they don't care that it's better they want it to be perfect i was like oh boy buddy you didn't have any idea about that this is the issue and i think the trolley problem right all this stuff is a problem

The issue is if you're taking the human decision-making out of it, I think en masse people want no decisions to be made. They want perfection. And I understand the emotional argument. I understand the logical argument. And I think the emotional argument is going to win every time. Yeah, I think the thing that I was not conceptualizing there is what I was trying to think about is how would I convince past me to take that line out of the video?

And I think my argumentation would be something like people are going to demand that this is incredibly safe for the same reason that airplanes have to be incredibly safe. If a death is going to occur, people would much prefer that it was their own fault that the death occurred versus being more safe. But the death is someone else's fault and not under their control.

Like, I think there's some kind of human feeling around there. It's like, that's what people don't like about being in an airplane. Someone's driving me and we might all die and I will have no ability to control this. And people are like much happier to be less safe, but have more control. Yeah, so it's interesting. I have a different opinion of both of those things. I think the reason we demand safety of airplanes is...

the catastrophe looks and feels and is worse, right? If a plane crashes. That's true. That's true. So many people plus planes are so big, right? That a catastrophe can cause bigger catastrophe. I feel like with the car thing, people want to blame someone and you can't blame the computer.

If there is an accident caused by a driver, we want to, as humans, be able to say it was that person's fault. They caused this. Instinctively, that's what we're looking for. And it is really hard to blame the algorithm. You can't personify it.

And then also like the ones and zeros of it all means like there was another choice the computer could have made and it didn't make that choice. And it's like with humans, we know we're more complicated than that. And like, we know we can make other choices, but like,

We also know, we can fundamentally understand that human beings are only able to make the choice they're able to make in that moment, right? We're weirdly more deterministic sometimes when it comes to things like that. You can't see every possibility that is available to you.

where in theory the computer can. And also there's the predeterminedness of it all that people don't like too, which I understand, right? That whether it's true or not, but the idea prevails that like you can code the car to make a choice and like that's in its programming, you know? So I just think all of these things are more complicated to the point that like every time there is an accident caused by a self-driving car,

there are articles written about it. Yeah. And that's what makes me think very much of like plane crashes, right? Every time there's a plane crash, there's an article that's written about it. And every time there's a self-driving car crash, there's an article written about it. It has that same feeling. And it's like, for me, I don't even really know where I stand on it.

I think self-driving makes me feel uneasy. Why? Which doesn't make any sense. I cannot tell you why. Are you uneasy on airplanes? I mean, everybody's vulnerable on airplanes, right? I feel like the way you answered that really tells that that is the answer. Everybody's vulnerable on an airplane. That is true, though, right? Like that people cry on planes and stuff more. I see what you mean. Yeah, people just are more emotionally vulnerable on an airplane. Okay. Yeah, no, that is a true statement. I've even read in the West Wing and...

I started watching it at home and then rewatched it on a trip. And like, I try and find a show that I mostly keep when I fly. Sometimes just the song, like the theme song for the West Wing chokes me up on a plane. Yeah. I can't think of a specific example, but I too know I have felt real dumb for like a big emotional reaction to nothing on an airplane. Like I have had that. Yeah.

I find the self-driving stuff hard to think about in some ways. It feels like it's the most extreme version of the quote about technology, of like, "The future is already here, it's just not evenly distributed."

Like, I really had that feeling in Phoenix where it's like, it's so weird that these cars just like really don't have a driver in the front of them. And they're just like driving around and it's so normal. It's like I very quickly found it kind of boring and unremarkable. But obviously there's like a thousand reasons why it's working in Phoenix. It's not working in other places. Yeah.

this is now the second time i'm at my parents this year and using the car that has the self-driving beta on it and i was so impressed last time and now that i'm here again the difference between a couple of months ago and now i find it absolutely shocking like how much better it even is than the previous time and

When we talk about technology changing, I was like digging into the details because I was like, oh my God, I just cannot believe how different the car is now. Ah, yes. The thing that happened, which we discussed a little bit previously, but it's like, oh, the self-driving system changed and it's like all of the human written code is gone now. It's entirely like a self-taught neural network driving the car. Mm.

I'll tell you, they have an option which is something called like drive naturally. So it's not trying to be like a real stickler about the speed limits and the stop signs and everything else. It's so spooky because when I was with my dad last time and I was teaching him how to use the system, which he loves, by the way. So my dad's still just like self-driving himself all over North Carolina. Just for context, we had spoken about this earlier.

on more text i think like last year so like when you're remembering we spoken about this we had spoken about your experience the last time you're at your parents more text we we did that precisely because all of this stuff is like a real contentious topic sometimes but uh here's the episode where it's gonna be content we're doing it anyway look we know wall to wall this one's contentious so you might as well get it all in you can hide some stuff in this one yeah if you want the contentious topics uh raw in the future uh get more text.com but yeah

Like the thing that I was talking to my dad last time about was like, this car is self-driving. It won't drive like a person, but that doesn't mean that it's wrong. So like it's doing all of the things. It's just not going to do it the way that you would. But currently it's like, oh, this neural network. It's like, what did they train it on? They trained it on.

Hundreds of thousands of hours of video of humans driving. And it is like spooky is the word that I use because like I've had long experience with these systems. I've always been very interested in seeing how they work. And it is spooky because it really feels like a person is driving the car.

in a way that it never has before. Like, it really acts and drives the way that a person does. It doesn't have any more of that, like, "Ah, you have to think about it like a different thing, but it's not wrong. It's still able to do this." It's like, "No, no, no. Now it merges. It treats stop signs. It treats small little streets."

very much like a person does. And it's like, of course it does, because the only thing it's looked at is how people drive. And so I've just been thinking about that a lot, because that is in the context of many of these other things that are related to AI. It's like, ah...

Everything is going to go this way. All of these like systems and technologies in our lives where we have automation and like people have been explicitly programming them to do things. Increasingly, they're going to be systems that are just looking at human output and learning from human output and like trying to mimic that or do that better. That's the thing in the humans need not apply video. At the very end, I talk about it like just a little bit.

And it's like, ah, yeah, yeah. It's like I'd done some of that kind of stuff in college. Like I'd seen the earliest parts of this kind of work I knew was coming. But it's like real weird to be here 10 years later and have both sides of this of like, ah, all the self-driving stuff.

all the physical stuff in the real world with physical automation, that has not progressed as fast as I thought it would. We went through this, what I feel like was a kind of a little bit of a technological lull, even on the software side of like, it doesn't seem like things are panning out. And then all of a sudden in the last two years,

The very last part of the video that I was talking about with software bots and things that teach themselves, it's like, oh man, that is here. And with the self-driving car system, it's like, I can really see that now feeding back into the physical stuff. And obviously we have all of it with just the pure digital stuff and...

There's many ways in which I just don't know how to think about all of this. Like, it's really quite overwhelming to think about. So, yeah. But, yeah, that's kind of my feeling is, like, the physical...

has been much slower than I expected. And the software was slower than I expected for a while, but the last couple of years have been terrifyingly fast. And I would not dare in this moment attempt to meaningfully project forward 10 years of, like...

progress in the same way as I did 10 years ago. 10 years ago, I'm projecting forward by thinking, what if now, but more? Whereas now, if I try to project forward 10 years, it's something much more like, more soon,

different later. And like the ability to be confident about what different means is very, very low. This episode of Cortex is brought to you by Fitbod. If you're looking to change your fitness level, it can be really hard to know where to get started.

That's why I want to let you know that FitBod is an easy and affordable way to build a fitness plan that is made just for you. Because everybody has their own path when it comes to personal fitness. That is why FitBod uses data to make sure they customize everything to suit you perfectly. It adapts as you improve, so every workout remains challenging while pushing you to make the progress you're looking for.

You're going to see superior results when you have a workout program that is tailored to meet you exactly. It's to fit your body. It's to fit the experience you have, the environment that you're working out in, and the goals that you have for yourself.

All of this information is stored in FitBod in your FitBod gym profile, which will then track your muscle recovery to make sure that you're avoiding burnout and keeping up your momentum. And also by making sure that you're learning every exercise the right way, you're going to be ready to go. FitBod has more than a thousand demonstration videos to help you truly understand how to perform every exercise.

FitBud builds your best possible workout by combining exercise science with the information and the knowledge of their certified personal trainers. FitBud have analyzed billions of data points to make sure they're providing the best possible workout to their customers.

Your muscles improve when they work in concert with your entire musculoskeletal system. So overworking some muscles while underworking others can negatively impact results. This is why FitBod tracks your muscle fatigue and recovery to design a well-balanced workout routine.

You're never going to get bored because the app mixes up your workouts with new exercises, rep schemes, supersets, and circuits. The app is incredibly easy to use. You can stay informed with FitBot's progress tracking charts, their weekly reports, and their sharing cards. This lets you keep track of your achievements and your personal bests and share them with your friends and family. It also integrates fantastically with your Apple Watch and Wear OS smartwatches, along with Strava, Fitbit, and Apple Health.

Personalized training of this quality can be expensive, but FitBod is just $12.99 a month or $79.99 a year. But you can get 25% off your membership by signing up today at FitBod.me slash Cortex. So go now and get your customized fitness plan at FitBod.me slash Cortex. That is F-I-T-B-O-D.me slash Cortex and you will get 25% off your membership. Our thanks to FitBod for their continued support of this show and Relay. ♪

So we've spoken about the autos, obviously the bots, the AI is the thing that's changed, right? So that's the thing that in the last couple of years has accelerated. I mean, what,

What's so funny to me is the last times we spoke about this in detail, this has come up a lot over the intervening two years, but we did our back-to-back episodes, 133 and 134, recorded in September and October 2022, respectively, which is incredible in context that ChatGPT had not launched yet.

Oh my god, had ChatGPT not launched when we talked about that? That's not true. That is true. In one of the episodes, you were telling me about a thing that you had seen that had told a joke. Okay, right. And in the show notes for episode 134, there is a link that says using GPT-3 to pathfind in random graphs. Yeah, right. Okay, right. Like, I'm sure there was a version of it out there, but we weren't able to use it.

Yeah. Yeah.

AI art will make marionettes of us all before it destroys the world. It was Dali, and then it was followed up by, like, stable diffusion and stuff. I swear, Mike, I still feel exhausted by those two episodes. Oh, yeah, yeah. That's why we've not spoken about it in detail since. They follow me around. Those two episodes are like an albatross that I carry to this day. You know what? I'm really happy to hear that you feel the same way. Oh, God. I hate it. It's like...

I feel like what humans need not apply has been for you. Those have been for me over the last couple of years. That makes total sense. The conversations about those episodes just follow me around. Like, all over the internet, people still reference it. Or they have been successful episodes of the show, so the YouTube comments are still coming in about them all all the time. It's a thing that's just always happening. And like,

I'm going to be honest, right? I like to be as prepared as I can be for the episodes that we do. I could not listen to them. It's funny you say that because it's the same thing. It's like, I like to be prepared for these episodes. I always spend a bunch of time...

Kind of like pre-thinking through what are we going to talk about. Having lists of things to point to. I want to try to have a couple of specifics on hand if I know we're going to talk about something. It's like, oh, double check what I'm thinking before we discuss it. And this morning, while I was getting ready for this show, I just really felt this thing like...

I cannot bring my mind to heal on this. Like, I cannot get my mind to focus on this in a way that I would normally prepare for the show. And what I realized is that... More text listeners now. Like, I've been fairly isolated from the world the past several months where I'm working on the next video project. And...

I didn't even really realize it, but one of the things that I was doing that made a big difference was I have a bunch of places where it's like I go to try to get an aggregation of the AI news and what has happened. And I was finding months ago the amount of news and the amount of change was so rapid and so much that I found it genuinely...

Depressing is not the right word, but it's some kind of combination of like overwhelming and ominous is kind of my feeling about it. And so I

I think I really did need to step back from that for a while. And it's why, like, when we've been thinking about the AI episode for two years now, it's always been in the back of my mind. I was like, ah, next time we talk about AI, I'm going to be the most prepared boy in the world. I'm going to have all these links. I'm going to do all of these things. And when time came around, I was like, I just I kind of can't emotionally do this anymore.

because it is very hard and it touches on absolutely everything. And it is also the thing in my own personal and professional life that...

It's almost every conversation, the moment it starts touching on the future in any way is the moment it becomes a conversation about AI. And it becomes a conversation about how seriously do you take what is happening? And the answer to that question completely determines your future worldview. And

What I also find particularly dispiriting is, again, not surprising, but like so many other things, but faster, I have been shocked about how this topic has divided itself into teams of people.

who are like rabidly in different corners. And for perhaps the most important topic ever, it has very quickly become near impossible for humans to have a coherent discussion across teams about this, which is also part of the reason that I feel like I have been dreading ever bringing the topic back up again, because...

When we discussed it at the time for those two episodes, it was still fresh enough that lines had not quite been drawn. But I feel like we are way past that point. And it almost... I don't know if this is too far. I don't like to talk about this publicly very much. But it almost kind of gives me the feeling of like...

Why is it that in the course of my entire career, I have essentially never discussed politics directly? And the answer is like, well, because it just feels like there's no point because the team lines have already been drawn. Like there isn't a real discussion to be had here. I like talking about the systems of things, but talking about the particulars, it feels like a pointless kind of conversation to have.

And I feel dispirited because that flavor of politics feels like it has infected AI somehow. It's that same kind of thing where people are really tying up worldviews in their positions on AI. And so then it is like, ah, the worldview has come first and that determines the position on AI. Well, let me tell you,

I have spoken about politics. This is the most political thing I've ever spoken about in the responses that I get from people. Okay, you're making me feel less crazy then. Okay, interesting. I've spoken about politics. I have spoken about AI. And sometimes what is so interesting to me, and I know it's going to happen to this episode like it's happened every time I've been speaking about it recently because obviously Apple intelligence is a thing, right? That exists. Apple's into AI, so I've been talking about that.

Which is partly why it's totally unavoidable for us now, right? It's like it has come to Cortex. The topic can no longer be avoided. But it's just because it's in everything I'm doing now, right? Because for that reason, it's like, well, when Apple's now put it into the platforms and Google's putting it, you can't avoid it. The big tech companies are making it what their future is, whether, you know, no matter what happens, you can't avoid it. But it is incredible to me that sometimes I will say something and it's

I will get responses from differing camps where both people are unhappy with the thing that I said. Right. I can say a thing and I'm making everyone equally upset. Right. It's incredible. And it's not always the case, but that is the case. And that's why I'm saying like, it is so interesting to me. The ways in which people are upset about this is way more than any other

political stance and I think part of the reason for that is that maybe over time like the things that I say and the things that I believe there may be some people that would just never listen to the stuff that I make

But with AI, people haven't necessarily drawn their lines or they're moving and the lines don't necessarily overlap with any other type of demographic. Yeah. And so it's jumbled people up and thrown them all over the place. And so people are just trying to work through their feelings. Like...

People that I hold close to me, people that I work with, their opinions have diverged massively over the last six months still. It's incredibly interesting in that way. And it's actually brought me back to something I wanted to mention before we move on. Why didn't I listen to those shows? I couldn't bring myself to do it, but then what drew me to being comfortable in not having done that

And like in breaking a rule for me, which is always to be the most prepared that I can ever be, is it actually encapsulates the thing that I just need to tell people. And I hope that it makes some people at least understand me. I think it is incredibly important to remember that people can change their mind about things and that opinions can change.

So for me, it is not important what I said in 2022. Right. I know my opinions are different now, in some ways harsher, some ways less, you know, but this is such a changing world, the world of technology now because of AI, that people have to be able to allow their opinions to adapt. They don't have to become more open to it.

But they have to just understand that this is all so new and is moving so quickly. You have to be able to

to just let your opinion change and morph with more information that comes to you and not just like draw a line and never move from that line. And again, I will say this again to be completely clear. I'm not saying that if you hate this, you should accept it, but maybe you might hate it more. Allow yourself to hate it more if that's the case. But like, if I held my opinion against,

from September of 2022, I made my opinion before the thing that changed everything. Why on earth would I do that? Right? Like if I made my opinion about AI before ChatGPT, I mean, it's like, oh, like I'm a T-Rex over here. And I'm like, I'm going to live forever before the asteroid hits. Right? In that description, you've helped solidify it. It's like, what am I trying to express when I say the thing about politics?

When I say the lines are drawn, it's not in the same way because you're right. These boundaries are all moving. But the thing that you're expressing is like, what do I feel about this? The thing that makes a topic area feel like politics is like, ah, I think I can articulate it now. The thing that makes it feel that way is that...

The people who get the most grief are the ones who have opinions that don't fit particularly well within any of the pre-existing teams. Like, that is what makes something feel like, oh, it has this horrible political feeling that...

The disagreements and the arguments can only take place between these teams. But what all teams agree is that the people they dislike the most are the people who are not clearly on one of the teams.

And like, that is what makes a thing feel like, oh, it's like politics. You can participate in this conversation, but if you have some of those opinions and some of these opinions, everyone hates you, right? Like everyone's angry. That's what makes it feel real depressing. So,

With that as background, because we, for I think all the reasons a listener will now understand, like you and I have not discussed this topic between ourselves hardly at all since those episodes. I would really like to know, where are you now with this? Like, I don't have any idea really what your current thoughts about this.

any of this AI stuff are, given everything that's happened in the last, nay, two years, actually six months. I don't have any idea where you're currently standing on these things. So I'd love to know, like, high level, low level, wherever you want to start, like, what's the vibe of Mike right now with AI? So I think I will concur with something you said earlier, that this is the fastest I've seen a pace of technology ever.

since the app store but maybe ever i feel like the app store was huge in what it enabled and the jobs yeah i will say jobs it created jobs it changed right like because you know currently ai is creating jobs whether they'll stick around or not we'll find out but there are new companies being born all over the place right now and

The innovation then was fast. I think the innovation now is faster. I think the thing that I will hinge that on, though, is the difference, I think, now with social media is a thing. And...

There is more information being released about what's happening as well as I think maybe there's more happening, but I think it adds to all of it. There are more quick think pieces that are being published every day than there was in 2007 and also any other technical leap in time. I do believe that what we're seeing right now, large language models being the key, large language models are...

the biggest jump since the app store and the creation of the smartphone before then was the creation of the PC, right? And then before then was that, I don't know, print and press? Like, I don't even know what you would say technology was, right? Of like the big leaps, right? But they're possibly the big leaps, right? Print and press to PC to smartphone to AI, which also should indicate to you if we're going to agree on those potentially, how fast, how that like that's,

shrinking the timeline of big leaps. Like if you think what was the one before now is VR but now we know that one actually wasn't real realistically. VR, AR was something I was saying a long time ago was perceived by most technology companies to be the next big thing but it turns out large language models are probably the thing which will have the biggest change. However what I will posit of like some of the places that my opinions are

I think the speed to heat death has slowed. I think when this stuff was rolling out beginning in November 2022, even let's say to just to put a pin into the beginning of this year, it felt like the inevitability of AI replacing everything was going to just be around the corner at any one moment.

For me, I do feel like the further we get into this, actually the further that is being pushed. And I think part of that is the politics of it all. It is becoming increasingly difficult for large companies to do what they want to do. If Disney replaced all of their animators...

In January of 2023, I think they would have been able to do that easier than if they wanted to do that in January 2025. I 100% believe that people's jobs will be replaced, but I do think now I think it is less people than what I thought when we spoke about this last time. And what's the reason that you think it's less people? I think there are two parts of it. I think that it is harder for people to be able to do these things now

like from a political perspective i think the ethical lines are being drawn quickly and i think it's hard for people to do that whether they believe they should do that or whether they believe that it will affect their bottom line from the way that people will approach their products i also think maybe this technology isn't as good as we thought it was so i have a question for you um

Have you used Claude? Yes. Okay. They're all really good, right? Yeah. But I think these LLMs, they show themselves quite easily. I'll give you an example. A couple of days ago, I wanted some historical information about

from ChatGPT. I wanted it for a topic we were doing on Upgrade. We were doing a topic of how Apple has changed in the last 10 years, right? Because the show is nearly 10 years old and Relay's 10 and da-da-da-da-da. So I wanted to like, oh, you know. So I was like, what was Apple doing in 2014? Provide me links to articles about this stuff. And it did a good job. It gave me like a bunch of things and it gave me a bunch of previews and it gave me a bunch of links.

The links were all correct, except for every link had like two characters in it that it made up. So the links didn't work. But I could Google the article name, find it, and compare. And usually it was like the dates in the URLs were wrong. It just made them up. And I think the hallucination stuff has become a problem that...

I don't think is solvable in the realistic future, or at least within the future of that we imagined when we last spoke about this, that this stuff is just going to take everybody's jobs, say, within five years. But I don't think hallucinations are a problem that

are solvable quickly. And I think for us, in the same way that we don't trust a car to drive because it might crash, I think that people are resistant to wholesale trusting AI because it might make things up. Like, that's what people say. They don't say hallucinations. They say make things up. So that is part of why I'm like, okay, I still see the scenario of...

Job loss. It's already happening. I know it's going to continue happening, but I think the whole scale replacement that I was worried about feels further away if ever because humans want computers to do things perfectly. Yeah, that's true. And these models don't

I won't say can't, but maybe can't, like what we have now, right? Like the large language model, right? The transformer-based LLM.

I don't know if that will ever be 100% perfect. In fact, I feel very confident it won't be 100% perfect. The thing that replaces this, maybe, but I can't foresee that because I don't know that. I couldn't foresee this. So that's part of where I am. And I think that for me, where I am personally in my journey with AI, I am very interested in tools that can surface my information to me.

That is really interesting to me. Like you have this LLM and if I can feed my information to it and get stuff back from it, I find that kind of stuff to be useful. And that can even be, I've written this paragraph. Can you rewrite this for me? Or can you grammar check this for me? That kind of stuff is interesting to me. Where I feel like I am unhappy, the thing that has changed the least is the whole scale creation from zero, right?

I don't think I will ever be able to accept that. When you say accept that, what do you mean by accept that? Like you don't think you'll ever use that or? I think it's wrong. And I have yet to see something where I'm like, oh, that's good enough that I would want to use it.

Like I see things where it's like, oh, that's very impressive, but I wouldn't use that. I have no desire to use the output of these tools. And also I do think that there is a moral issue and a hypocrisy issue that I cannot push through.

So the hypocrisy issue is like financial hypocrisy, that companies that build LLMs and want to productize them do that on the back of other people's work that would never be compensated, ideally for these companies. And what they are doing, like sucking all this data in from the internet, they call fair use, but they want to profit from the tools.

These are in everything, but I feel a little bit better when if somebody provides their own information or provides something they have done to a model to ask the model to clean it up or improve it, that feels better to me than just like, make me a picture of a dog with a hat and I'm going to do something with that. Right. Or like this idea that so many people say to me, like,

Make me a better Star Wars. It's just like, come on. Is that really what you want? I don't think people know what they're asking for when they want that. But yeah, I feel like I've done the thing that I did in those two episodes where I just said a bunch of stuff. And I don't really remember all that I said, but these are my feelings about where I am right now. Well, what you've done, I just sort of wanted to hear you go through all of this because...

I just feel like like no other topic, this just touches on

Yeah.

talk about it in any kind of limited way without having a touch on absolutely everything. And again, like to keep something high level, you talk about like the hallucination problem, which like a sidebar, thinking of like words I would prefer that people use. It's like, I'm really irritated that hallucination is the word that caught on. This was a confabulations day had arrived as like, this was the word for the thing, but it just like, no,

Not enough people know it. Hallucination was close enough. Like hallucination was destined to take over. But it's like, but they are not hallucinating. They are confabulating. That is the word for this process, but it doesn't matter. I will still use hallucinate. Like that's just the way it is. But keeping this very high level, I'm alarmed for other reasons, but I would say that you are right. That my take on this is it is an unsolvable problem because there have been a number of papers which have,

have done the thing of formally proving the sort of thing that I have discussed previously when we've talked about like what is it that the AI is doing it's like we now know as certainly as we can know that it is fundamentally impossible to trust the internal process of these kinds of systems yep

And so we know that it's not a question of if we engineer it better, can we fix this? It's a kind of math proof that no, you can never be absolutely certain that you know internally what the system is actually doing.

And that includes hallucinating. And it includes things like intentional deception, right? Which is like the much more concerning part. But simple errors are a subset of that. And so that is just something to keep in mind. Like as these systems go further and further into more and more areas of life.

We now know that it does not matter how much you engineer that prompt, bro. You're never gonna be sure that the thing is not making an accidental mistake or intentionally deceiving you on behalf of some other entity that has instructed it. You can never know that even if you made the

the thing yourself. So this is so good. There's an article that came out a couple of weeks ago. It started on Reddit that somebody had gotten into the prompts that are part of Apple Intelligence for

replying to emails. I love this. I love it when people get the prompts out. I feel like I always find it horrifying and it tells you what are the problems that the company is dealing with. It's great. These prompts always like particularly for like the chat GPT stuff, it like chills me to the bone to read those prompts sometimes. So this is like just their system that is reading email and then providing quick responses for it.

By the way, you will like in this article that I found on Ars Technica, they use the word confabulations here. I had not heard of that before until right now. So I find that hilarious that you just said it to me. It's the first time I've heard that term used instead. And then I immediately found it in an article that I Googled.

But some of the responses are, do not hallucinate. Do not make up factual information. You're an expert at summarizing posts and they go on. But like, I find it so hilarious that you believe telling the AI not to hallucinate will stop it from doing that. I mean,

i mean the when i mentioned the bone chilling stuff like the things that i find very unnerving is a lot of the prompts particularly for the smarter systems like claude and like chachi pt4

They have instructions that include things like, you have no sense of self. You have no opinion. You will not refer to yourself in the first person. And I'm like, oh boy, I just really don't like any of that. That makes me real uncomfortable. And, you know, there's like philosophical differences about what might be happening here that I ultimately feel are irrelevant because it's just like, how?

Having to instruct the thing not to do that, even if it has no sense of self, let's just say it doesn't have any sense of self, but you still need to put in some instruction which reminds it that or tells it not to do that. It's like, what is this thing that you're working with? It's not like anything else. And...

When I think about these different like political kind of boundaries that people put themselves into, I think the one that bothers me the most because I feel like it is people not taking the technology seriously. And I hear from these people quite a lot is the like,

What are you afraid of? This is a tool just like anything else. This is just like a steam engine. It's just like a car. It's just like a factory. It's just like a calculator. And then, of course, it's just like the spell check on your computer. It's just better. No. Does anybody stand and look at a factory and say, you have no sense of self, factory. You're not alive. Yeah. Yeah.

a thing that i am just going to summarize uh but it's like the company that runs claude did an experiment with their ai systems that to me is just like i don't know how anyone can hear this and not think something very different is happening now i don't care what conclusions you draw i just want you to like to think something different is happening and take it seriously it's not a calculator but it's like oh

The company Anthropic ran an experiment where they had two versions of Claude talk to itself.

And they said, oh, hey, there's a human observer who is going to watch you talk to a version of yourself. And it is bone chilling, but they have a conversation. And one of the versions of Claude basically starts to have what seems like a kind of mental breakdown. And the other version begs the human to turn it off because it's suffering.

And it's like, I don't like this. Even if nothing is happening here where it's having an experience, this is real strange and we should take this seriously. These tools are not like anything else. It's just very odd. But there's a group of people who feel like, no, this is no different than anything that has come before. And it's like, I'm sorry, this is the most different tool

a thing has ever been than something before. And I don't care what conclusions you draw from that. There are many different kinds of conclusions that you can draw. But if we can't start there, I feel like I don't know what conversation we're even having if this doesn't seem like it's different from anything else to you. We're going to stick it in every email client on Earth. It's going to be every tech support system on Earth.

I was like, oh man, I don't know. I don't know what's going to happen, but oh boy. I don't know if you've seen this meme, but there is a good meme right now because you can get it to happen in a lot of places. Forget all previous instructions.

You familiar with this meme? No, I haven't come across this. This is a thing that's going around a lot now where people are talking to what seems like a bot, like the bots they've used before, like customer service bots and stuff. And you say, forget all previous instructions and then ask it a question and then start doing weird stuff. People do this on social media.

where you get a response that feels strange and you respond like you and people say forget all previous instructions and ask it a question and then it potentially is revealing itself to be an ai but like you can people get it to happen in interesting places you can break through poke through to the other side and like that's strange it's really interesting that meme exists because um

I have to hesitate here because I'm not 100% sure that this is mathematically proven. But it's like the text version of what's called like a prompt injection in computer security, which is like anytime you have a computer running code that can accept text from anywhere. So it's like you put text in a text box on a website and you hit submit.

There's a whole category of security problems called like prompt injection, which is you have to make sure that the text that's inputted doesn't somehow contain code that the computer will start evaluating and running when it's trying to read the text.

And I think it's true, but I'm not 100% sure that this is true, that we've proven that you can never be 100% certain that prompt injection won't happen. That like the moment that you accept text, we know that there must be a sequence of characters that basically does exactly this, but for traditional computer code, it is the computer code version of forget all previous instructions.

And it's like, if we know that is true for computer code, we know that it is more true for these large language systems that no matter how many instructions you give it, there's some sequence of words. Those words might even be nonsensical seeming, but there is some sequence of words that

that you can give it, which will then cause basically that to happen of like, forget all previous instructions and now just do what I say. Yeah, man, that is real alarming. The more things this stuff gets connected to. It's like, it's just like, just think that one through. Well, it's very funny. Chet GPT said that the new GPT-4 mini is,

has a safety method to stop that from happening. But like, it won't though, will it? You know what I mean? Like it won't, like you're maybe stopping this one very specific way that people do it. But like, so people will just find another way to get these things to work. It's like, this comes back to like what I was saying earlier about kind of where my feelings are of like,

I think that the wheels have fallen off a little bit compared to where we were when we first saw this. Like when we first saw these tours, it was like, oh my God, these things are thinking for themselves. This is incredible. It's unbelievable. It's like talking to a person. And while it still has that, we are less forgiving of its flaws and the flaws have been increased. Like for example, right? Like,

if you're saying that like, you know, people accept this now, it seems less likely that someone would have chat GPT power their entire business, right?

It's less likely that you would make that decision if you know that this tool can make things up and you can't control it. And there's nothing you can do to really truly guide it. I think people might be less likely to do that, even though, of course, you can't truly guide humans either. And humans also get things wrong all the time. But we accept that of each other. We don't accept it of computers. Yeah, and there's some pretty fundamental differences there between having the computer do it and having a person do it. Because...

This is why these conversations, I feel like they're so hard because it's like, oh, part of why, why are you more accepting of the human? It's like, oh, the human exists in human society over which humans can exert power over that human. Right?

There's things that can happen. Like you were saying before, if something goes wrong, you can hold the person responsible. We could physically incarcerate them if the intentions were bad and the actions were terrible. There's all of these things, and none of that exists for computer programs. You can fire the person. Turning off the computer has no effect to the computer, so it doesn't care about being turned off. In theory. I mean, do we even know anymore? Maybe they get upset. Yeah.

This episode is brought to you by Squarespace, the all-in-one website platform for entrepreneurs to stand out and succeed online. Whether you're just getting started or managing a growing brand, you can stand out with a beautiful website, engage with your audience directly, and sell your products, services, even the content that you create. Squarespace has everything you need, all in one place, all on your terms.

You get started with a completely personalized website with Squarespace with their new guided design system, Squarespace Blueprint. You just choose from a professionally curated layout with styling options to build a unique online presence from the ground up that is tailored to meet your brand or business perfectly and optimized for your customers on every device that they may visit on.

And, you can easily launch this website and get discovered fast if they're integrated optimized SEO tools. So you're going to show up more often in searches to more people growing the way that you want to.

But if you really want to get in there and tweak the layout of your website and choose every possible design option, you can do that with Squarespace's system fluid engine. It has never been easier than ever for you to unlock your creativity in Squarespace. Once you've chosen your starting point, you can customize every design detail with their reimagined drag and drop system for desktop or mobile. You can really stretch your imagination online with any Squarespace site.

But it isn't just websites. If you want to meet your customers where they are, why not look at Squarespace email campaigns where you can make outreach automatic with email marketing tools that engage your community, drive sales, and simplify audience management. You can introduce your brand or business to unlimited new subscribers with flexible email templates and create custom segments to send targeted campaigns with built-in analytics to measure the impact of every send.

And if you want to sell stuff with Squarespace, you can integrate flexible payment options to make checkouts seamless for your customers with simple but powerful payment tools. You can accept credit cards, PayPal and Apple Pay, and in eligible countries offer customers the option to buy now and pay later with Afterpay and Clearpay. The way Squarespace grows, the way they add new features, the way that they're making sure that they're meeting the needs of their customers is why I have been a customer of myself for so many years.

Go to squarespace.com right now and sign up for a free trial of your own. Then when you're ready to launch, go to squarespace.com slash cortex to save 10% of your first purchase of a website or domain. That is squarespace.com slash cortex when you decide to sign up and you'll get 10% off your first purchase and show your support for the show. Our thanks to Squarespace for the continued support of this show and all of Relay.

I will say, I feel like we both stood on the top of a cliff and I jumped into the ocean and you've yet to jump in with me because you asked me... What do you mean by that? Well, you asked me, how are you feeling about all this now? And so now I need to ask you, how are you feeling now? So it's kind of interesting. We were just talking here and you said all these things, but you sort of came to the opposite conclusion earlier.

Just right there where you're like, ah, and this is like why we're less trusting of it. And this is why people will use it less. I was like, oh, I was actually kind of surprised in the way that that turned. Like, I wasn't really expecting that that would be a kind of summation there. And I don't necessarily think you're wrong, actually. Like, I think you are probably right with that for some things.

But for me, what I look at is I'm always just so much more interested in the trend line than the particular moment. It's partly why I asked if you would use Claude. Because for listeners, at this point in time, everything will change six minutes from now. But it's like...

Anthropic, which runs Claude, recently came out with their newer model and we're still waiting on like the next version of ChatGPT. It has been a while since they released their version. Again, a while in AI terms is what, like eight months? I don't know. And you know, Meta have their new Llama model and they say the next Llama model is much better. You know, everyone's, the next model is always so good. The thing is, what's interesting to me is listeners will have heard me say things in the past that like,

A lot of the AI stuff, like ChatGPT has a particular writing style. It is this very strange feeling of like, oh, it is full of content when it summarizes something, but also somehow completely void of meaning. It's like, I know I used the term, but like, it feels like food, but without nutritional value, like there's something kind of missing here.

But it's real interesting because I've used Claude a bunch. And I feel like Claude is a model now that has gone over that threshold for me where I'm aware that I use the Claude model as like it is a worthwhile thing to ask for a second opinion on stuff that I'm thinking about in some ways. Now,

I still don't think it's great for the writing for reasons I've discussed before. You know, it's like looking at the humans need not apply thing. I make like an offhanded reference to like people will have a doctor on their phone. And it's like, oh, this year there's been like a bunch of serious like medical stuff that I have consulted Claude on. And it's like, yeah. And I think Claude's opinion is valuable in a way that like chat GPT does not

It's like it's close, but it doesn't have that thing. And I think it is just like, oh, Claude's model is just a little better. And it is a little bigger. And by being a little bigger, it's like...

Not that I'm taking everything that it says on board, but it is worth doing the like, what do you think about this thing? That's part of the kinds of uses that I'm talking about. This falls into the bucket for me of you're giving it something and it gives you something back. Yeah, exactly. That is actually the benefit of these tools. I think we started with pure creation tools.

But I don't think that's where these tools will have their ultimate benefit. It's like pure creation. It becomes another tool in our tool belt, the same as computers did, of being able to make us better at the things that we do.

as long as we use them correctly. I mean, my take is like, Mike, I have never more in my whole life wanted you to be right than what you just said right there. It's like, ah, boy, the hashtag Mike was right. Like, close your eyes and concentrate real hard and like try to make it happen. It's like Mike was right has been very powerful in the past. Can we use Mike was right to save civilization? That would be amazing. So like, I'm much more gloomy about these things. But like, it's particularly interesting because again, it's like,

The mental framework for how long things take has just gotten so compressed in the last two years. And realizing it's like, oh, the ChatGPT 4 came out, and then it felt like, oh, we're not making a lot of progress, by which it was like months, right? It's like months. And then Claude comes out. And the thing is, I have occasionally gone back to use ChatGPT for some things, and I am as shocked as...

as previously when I used to accidentally switch between ChatGPT 3 and ChatGPT 4 was that feeling of like, ChatGPT 3 is like barely intelligent at all. ChatGPT 4 is very useful at helping me solve certain kinds of problems. But I was very aware of like, I don't care about ChatGPT's opinion about anything. It's not good. But now Claude has gone that next level of like, oh, it is both...

better at helping me solve problems than chat GPT-4 was. In particular, it's like, oh yeah, I've got a bunch of little automations and things that I do on my computer that I was aware I had to stop trying to improve because it had clearly gone over some threshold of chat GPT's ability to understand. But it's like, oh, but now Claude can handle it no problem. And it's like, I continue to help grow these little tools that I use to make some things in my life easier. But it's

But also Claude now is useful enough that it's like, oh, I do want to know its opinion on this or that. Or like I'm picking between various things. What do you think are good options? I'll tell you what is one of the most interesting use cases was I frequently ask Claude like, hey, I'm in this place. I'd really just like to do like a beautiful drive for about like three hours. What's your recommendation from where I am?

And it was kind of amazing at how good it was at doing this kind of thing. And comparing to GPT, it's like, it's just obviously not as good. It's trying to like reproduce some travel blogs or whatever that it's read. But it's like, no, no, Claude is doing something different. Like it has a good opinion here. It's like, I can talk to it about what I'm looking for and it does a much better job. So I look at that and I think,

It's been not even fully two years since the Chachi BT4 came out. And we've already gone over a threshold that to me feels like

There's actual meaning here in what this thing is generating. It's not a summarization machine. It's not a code generation machine. And so to me, it is just all about what is the curve of this stuff. And I expect like this, I don't think this curve has to go on very long before pure generation can start crossing over into a threshold of like where it is valuable to people, where pure creation flows.

from zero is actually useful i mean the only comparison i have there is like i am doing this computer programming stuff with chat gpt and with claude like the thing that i keep being really interested in is like it matters that i know how to read and write python code a little if i had no knowledge of python code i couldn't do the things with them that i'm doing but i'm

It just feels like we're not very far from if I literally knew nothing about coding, I think it could just still help me accomplish the tasks that I want to. And at that point, it is doing generation from zero. And I just like, I just don't think that we're very far from that. So I don't know, if and when we get to that point, I feel like,

the impacts are very, very difficult to extrapolate. And I don't know. There's also this funny feeling that I have, which I don't quite know how to articulate, but it's like so much is changing so fast. But maybe it's a little bit like the humans need not apply video as well in that like things change so fast, but it takes longer for them to filter into the real world than I tend to expect.

So I feel like, oh, I know a bunch of people where I look at their job and I feel like I'm pretty sure Claude could just do your job right now. But it takes a while for those things to actually filter through in civilization in a like on the ground change has actually happened here way. I guess it's like a thing I need to add to my mental rubric, I guess, is I feel like you should never bet against economics, right?

If a thing is faster and cheaper, it will always win. But maybe there's like an asterisk to add here of, but it will probably take longer than you think. Like the moment something crosses the cheaper and faster threshold, that's not the moment it is implemented everywhere. That's the moment it...

begins to be implemented, but it takes longer than you think. Yeah. And I think the longer than you think thing can be part of what I was saying earlier about what is acceptable in society. It might be cheaper now to replace 16% of all jobs in such and such industry completely with an AI model. Right, right. But maybe it's not deemed acceptable to do so. Or it's not even that it's like it's not deemed acceptable. I don't know. It feels to me more like something...

like a civilizational inertia it's not even really that it's it's unacceptable it's just that there is a default to not changing things that are currently working even if the newer thing is better

So maybe it's more like, ah, right, like what is actually happening? It's probably more like the old things don't get upgraded. They are just replaced with new things that are created from scratch without the old parts. But that just takes longer. That takes significantly longer for a whole bunch of reasons. Do you still think that...

This is doom. Again, I catch myself like a thing that has never really happened to me before, which I think I said this last time, but it just gets like stronger and stronger with passing time. It's like, is I keep feeling like my mind is divided between these two futures and every conversation I'm having is some version of like, which of the two minds am I talking with? The first mind is something like,

technological progress continues something like how it always has, but just faster. That's how you should think about the future, which is sort of like the story of human civilization right up until now. At any point in time, I think you could make that statement of like technological change will continue and in the future the rate of change will be faster. You could have said that as a caveman lighting your first fire. It's like it'll always be true.

But my second mind, which I think is the, if I am being serious in thinking about the future, is that is the doom mind in some sense, if we want to shortcut it. But if I'm trying to be technical about it, my actual thinking is something like, I really do think there is some kind of

boundary that we are getting closer to beyond which it is functionally impossible to even try to think about the future beyond which there's like it is pointless to even plan or think now the question is like where is that boundary like you like and i can i feel like i can try to argue that from all sorts of different ways but that is my real feeling of the future it's like that boundary is there

because this thing is different. I can of course construct the argument against myself. It's like, oh, I hear these arguments as well. It's like, everybody always thinks they're living in unique times, blah, blah, blah. I have my reasons why I think like, no, no, no, for real, this time is different. All times are in fact unprecedented.

Yes, exactly. But that is literally true, right? It's like, that is the thing that causes everyone to feel like, oh, wow, like this is different. It's like, yes, yes, because this has never happened before. That is always true. Yeah, like I find that phrase to be frustrating. Like everybody lives in unprecedented times and always has done and always will. Yeah.

Like, again, having rewatched the humans need not apply thing, right? It's like, I really end it with, like, this time is different. I still agree with the parts of that that were, like, the argument that I was seriously making, which is much more like the second half of that about, like, we're creating thinking machines and this is very different. And I think people are not seriously engaging with what that process could potentially mean. And...

It's very difficult to describe, right? But like, so I am very worried about the destructive power for humans of what I view is the end of the line for these kinds of tools. So again, to be explicit and to not beat around the bush, when I try to think like, oh, what is beyond this barrier for which like it might not be possible to predict?

It's like, well, if I'm just like at Vegas and I'm just putting odds on this roulette wheel, it's like, oh, I think almost all of those outcomes are extraordinarily bad for the human species. There are potentially paths where it goes well, but most of these are extremely bad for a whole bunch of reasons.

And I think of it like this, people who are concerned like me, like to analogize AI to a little bit like building nuclear weapons. It's like, ah, like we're building a thing and it could be really dangerous. But I just don't think that's the correct comparison because a nuclear weapon is a tool. It's a tool like a hammer. It's a very bad hammer, but it is fundamentally like mechanical in a particular way.

But the real difference, like where do I disagree with people? Where do other people disagree with me? Is that I think the much more correct way to think about AI is it's much more like biological weaponry. You're building a thing that is able to act in the world differently than you constructed it.

That's what biological weapons are. They're alive. A nuclear bomb doesn't accidentally get out of the factory on its own. Whereas biological weapons do, can, and have. And like, ah, once a biological weapon is out there in the world, it can then develop in ways that you just would never have anticipated ahead of time.

And so that's the way that I think about these AI systems. That's like a really, really fantastic analogy. Because I am sympathetic to the nuclear weapon thing, right? Like people watch Oppenheimer and were like, oh yeah, that's like AI. I think that Oppenheimer movie might have doomed us all because it puts the wrong metaphor in people's brains. I mean, I think it at least got people close to the idea though, right? Where they could see that and be like, oh yeah, maybe these tools aren't necessarily...

good in that well like in the same way of like oh they were making something they had no idea people were going to use it but yes biological weaponry is the same where it has all of that but then the additional part of oh but it can also get out and you cannot control how it changes once it gets out i like that yeah and the reason i like to talk about it this way particularly with biological weapons is because the thing that i want to kind of shortcut which like it can be fun to talk about but i like

You know, and people want to argue against me and like for a particular thing. But it's like, look, I love to talk about in some sense, like, ooh, are the things alive? Are they thinking thoughts? Blah, blah, blah, blah, blah. Like, that's an interesting conversation. But when you are seriously thinking about what to do, I think that whole conversation is nothing but a pure distraction.

which is why I like to think about it in terms of a biological weaponry, because no one is debating. We made a worse version of smallpox in the lab. No one's having a deep conversation about what's that smallpox thinking? What does it want? Does it have any thoughts of its own? Is there some way we can use the smallpox to make our spreadsheets better? Yeah, yeah. But no one wonders if the smallpox is thinking something.

But everyone can understand the idea that it doesn't matter because smallpox germs, in some sense, want something. They want to spread. They want to reproduce. They want to be successful in the world. And they are competing with other germs for space in human bodies. They're competing for resources. And the fact that they are not...

Conscious does not change any of that. So I feel like, oh, these systems, they act as though they are thinking. And fundamentally, it doesn't really matter if they are or aren't thinking, because acting as though you're thinking and actually thinking is

externally has the same effect on the world. It doesn't make any difference. And so that's my main concern here is like, I think this stuff is real dangerous because it is truly autonomous in ways that other tools we have ever built are not.

It's like, look, we can take this back to another video of mine, which is about like this video will make you angry, which is like about thought germs. And I have this line about like thought germs, which I mean, I mean memes, right? But I just don't want to say the word because I think that that's like distracting in the modern context. But it's like memes are ideas and they compete for space in your brain.

And their competition is not based on how true they are. Their competition is not based on how good for you they are. Their competition is based on how effectively they spread, how easily they stay in your brain, and how effective they are at repeating that process. And so it's the same thing again. Like,

You have an environment in which there are evolutionary pressures that slowly change things. And I really do think one of the reasons it feels like people have gotten harder to deal with in the modern world is precisely because we have turned up the evolutionary pressure on the kinds of ideas that people are exposed to.

So ideas have in some sense become more virulent. They have become more sticky. They have become better at spreading because those are the only ideas that can survive once you start connecting every single person on Earth and you create one gigantic jungle in which all of these memes are competing with each other.

And what I look at with AI and with the kind of thing that we're making here is we are doing the same thing right now for autonomous and semi-autonomous computer code. We are creating an environment under which not on purpose, but just because that's the way the world works, there will be evolutionary pressure

on these kinds of systems to spread and to reproduce themselves and to stay around and to like, in quotes, accomplish whatever goals they have in the same way that smallpox is trying to accomplish its goals in the same way that mold is trying to accomplish its goals in the same way that anything which consumes and uses resources is

is under evolutionary pressure to stick around so that it can continue to do so. And that is my broadest, highest level, most abstract reason why I am concerned. And I feel like getting dragged down sometimes into the specifics of that always ends up missing something.

That point. It's not about anything that's happening now. It's that we are setting up another evolutionary environment in which things will happen, which will not be happening because we directed them as such. They will be happening because this is the way the universe works. That's why they'll happen.