It's been two years since we kicked off the Possible podcast. I sort of can't believe it. So we thought we'd take a moment to return to the beginning, our series launch episode with Trevor Noah. Listening back to the episode, we're reminded how quickly the world of AI has advanced in just two years' time and what is still ahead of us.
So before we rerun the episode, and it's a good one, Reid and I will revisit a few areas of our conversation with Trevor to discuss what has changed in the AI space since Possible launched those two long years ago. Let's get into it. So what's interesting is we launched the Possible podcast the same week that GPT-4 was announced, and this was just a few months after ChatGPT was announced and released into the world. And I remember...
remember thinking then, like, there was talk about OpenAI, like, running away with it. Like, it was the only model we knew, you know, even though others were sort of cropping up, all of the focus was on OpenAI. And now we have Claude, obviously, Inflection's Pi, we have Llama, there's Gemini, just so many other folks who are in this space. And so in the Trevor episode, he called GPT-4 a major leap in tech.
So in hindsight, with all of the advancements that have happened since then, how much of a leap was it, GPT-4? And then what do you think has changed in the way we use and understand LLMs today? Well, I think it was a serious major leap, in part because, you know, GPT-4 went well past the initial ChatGPT with 3.5 and
started showing actually in fact how these large language models could actually deliver
compelling results. It could deliver an instant Wikipedia page crafted to what you were thinking about. It could act almost like a young, you know, kind of research assistant, one who is sometimes extremely fictional, you know, hallucinating, but could generate an immediate kind of research result. And that was even before they actually had, you know, kind of RAG and the ability to call out to the internet for doing things.
And I think that was the reason why it was a major leap. And that's one of the reasons why it became the standard by which, you know, Inflection's Pi or Claude or, you know, Gemini were kind of measured: is it GPT-4 class? And the fact that other folks then made the leap doesn't mean that that wasn't the fundamental leap.
And even to give some sense of the persistence of that importance, of GPT-4: when you look at kind of these other major leaps, which include, you know, o1 and reasoning chains, or o3 and, you know, kind of operational efficiency, they're still retrained versions of 4 in different kinds of configurations. And so it maintains that persistence, even as it does other really interesting, you know, kind of elaborations of what the GPT-4 tool is.
Now, I think one of the things that, you know, as you begin to look at, like, o1 and we look at the chain-of-thought models and everything else, we begin to see that, hey, if you just configure this model in certain ways, it begins to get much closer to what patterns of work can be, what the co-pilot can be, whether it's coding, which is the major kind of engagement plan for almost all of the large-company, hyperscaler, and startup, you know, AI model research and development, or whether it's kind of the patterns in, like, deep research, and how we use deep research in order to actually generate substantive, you know, multi-page research reports in just minutes.
In addition to sort of the technology itself, one of the big narratives in the media two years ago was certainly about job creation, job displacement. You know, are the AIs coming for our jobs? And I think it's still, you know, a very, very prevalent narrative today. So let's revisit the idea of AI creating more jobs, freeing up time versus the fear of job loss.
How do you think that narrative plays out today versus two years ago as AI is taking on more roles in society because it is more capable? Well, one of the things I think if you went back to the narrative two years ago,
The narrative was like, oh, my God, it's just about to happen. And if you took the time slice of two years ago and now, you'd basically say, well, AI has done very little job displacement. It also hasn't done a whole lot of job creation either. Now, part of that's because human beings adopt things, sometimes resist adopting them at all, which I think is happening a lot here.
but then also adopt very slowly until suddenly it's very fast. So one of the patterns of tech adoption tends to be slow, slow, slow, and then fast. And by the way, fast doesn't mean fast everything. Fast will mean, hey, one person, Sarah in our engineering group started using the AI co-pilot coding agents.
the other folks, Bob and John looked over and said, "Oh my gosh, that's really great." And they started doing it. And so that's a slow then fast. So slow 'cause Sarah first and then spreading out. Then it's like, oh, my competitor company has been doing this, then I need to do it. And then all of a sudden it starts getting fast within the whole industry.
And then, oh, wait, that industry is getting really productive. We need to do it, too. So we were slow and resisting adopting, and now we're moving fast. And part of the thing that becomes really important to how you navigate through this is trying not to be the follower, or the very slow pickup of speed, once it starts moving fast.
And so I think that both job displacement, job transformation, job creation is still mostly in our future. I think that's the reason why the narrative sounds very much like the narrative was two years ago, which is like, oh, no, AI is coming for us. AI is coming for our human agency around jobs. And the transition periods are very challenging. It's one of the reasons why I call it the cognitive industrial revolution. And it's one of the reasons why
I try to get, you know, with Super Agency and Impromptu and our podcast, I try to get people to be playing with it, because I do think we will get to, you know, people doing jobs will be replaced with people using AI to do jobs. And so I want as many people who are currently in the jobs to be those people, if they so choose to be; that's essentially replacing themselves, but using AI to do it.
When we talked to Trevor, one of the interesting things he was talking about was a daily show produced for you that had the level of content that you would understand, the news of the day with things that you were interested in so that, you know, sort of you can be at the right speed. And I thought it was super interesting.
I'm not sure if we've made huge advancements over the last two years in personalization. I would love to get your take. Where are we today with AI, sort of entertainment, creativity, video? How much has happened in the last two years? And then where do you think it's going to go?
Well, a few big things have happened. I mean, the richness of the image generation has gotten much stronger. A bunch of AI companies are creating, or, you know, game companies are using AI to create, a richer kind of set of graphic backgrounds and so forth. On the other hand, there's almost nothing new that's happening yet in the personalization of entertainment from the entertainment companies.
We have had a couple of great video models, you know, kind of elaborated. Most recently is kind of Google's Veo video model.
But they're still at that kind of like, oh my God, it looks really amazing, and then at 10 seconds, 15 seconds, it's like, oh, it's kind of lost the plot. It's actually very hard to make something that's even a short video that's good, let alone a short video that's on demand and personalized for you, Aria, as a way of doing it. And so it's still a very clear part of the future.
But it's one of the things that's kind of interesting when you get to this technology development. It's a little bit like in the 1950s, you were like, and we're going to have flying cars. And with Joby, et cetera, we're heading towards flying cars now, but that's decades and decades later. But when The Jetsons was doing this, they weren't expecting...
that we get the internet, that we get the mobile phone, that we get a bunch of other things. And so even though it's very clear that we will have this kind of entertainment news generated and developed specifically for people, and developed in an amazingly engaging way with these new tools,
that is, I think, still down the road. Now, that being said, of course, what some of my friends know is that in producing Super Agency, I've not only produced Super Agency, the mass-market book, but I've also produced Super Agency, the custom book, whether it's a custom cover with a picture and blurbs that are uniquely for you on the back, the Aria Finger edition, but also, in some cases, a custom internal chapter,
which has some text and pictures that are, you know, Aria's navigation of the world. And maybe when we're doing this, you're going to have to at least post one of those pictures so that people know what it is we're talking about.
I remember when I got my book, my husband was like, "What is that book and why is it about you?" And I was like, "Every book can be about you now. Like, that is the beauty. Like, we're not quite there on video, but pictures, text, like, that truly is the beauty of what AI can do."
Well, one thing that I know has not changed at all since we spoke to Trevor Noah two years ago is that everyone blames social media for everything. And that might be warranted, but certainly it was something we were talking about two years ago. Like, what should they be responsible for? Are you responsible for everyone who posts? Are you responsible for your algorithm? Is it, you know, are you putting a thumb on the scale?
So Trevor thought that social media platforms should be responsible for what is amplified, but not necessarily for what was posted. How do you see the debate changing? It's certainly still a debate, but, you know, perhaps amplified today. How has that debate evolved with AI-driven algorithms shaping even more of what people see? Well, I mean, I think we've got a couple of things going on. So first, you know,
You have the complete reshaping of what's amplified completely based on, you know, the kind of the desire for, you know, what Elon's doing. And so we see it like when I participate on it, there is this complete reshaping.
you know, kind of, I wouldn't say it's a thumb on the scale. I would say it's like a butt on the scale in terms of how much things are biased in a variety of ways. You know, you also see, you know, whereas before he bought Twitter, he was like, oh my God, it's just flooded with robots, which it very clearly is. But of course now there's no mention of that. So there's this...
I'd say the whole social media side has gotten, you know, a lot more fugly since we did our episode with Trevor. And I think it's kind of gone the wrong direction in a lot of ways. And I think we need to, you know, whether it's civility, whether it's, you know,
whether it's objectivity, whether it's transparency, you know. I thought one of the things that was most entertaining in this was, you know, Grok, the AI platform for Twitter, was asked, you know, who is the biggest spreader of misinformation? And it said Elon Musk. And so then later, you know, a finger was put on the algorithm: oh, don't answer Elon Musk. And people could get that meta prompt out of the system, you know. And, you know, that's the kind of thing where I was like, no, no, the point about trust in these tech systems should be, you know, kind of transparency and objectivity across everybody in the system. And, you know, I think we've made many steps away from that rather than towards it.
Reid, thank you so much. I am so excited to replay truly one of my favorite episodes, our very first inaugural episode with Trevor Noah. So if you haven't heard it, please stay on. And if you had, please stay on too. It was two years ago and such a time capsule as to where AI was in just the beginning. I think one of the most important discussions we should be having is around people, purpose, and the plans around what we're going to do
when these technologies evolve, as opposed to thinking of the technologies as some sort of boogeyman because the technology isn't. Hi, I'm Reid Hoffman. And I'm Aria Finger. We want to know what happens if, in the future, everything breaks humanity's way.
We're speaking with visionaries in every field, from climate science to criminal justice and from entertainment to education. These conversations also feature another kind of guest, GPT-4, OpenAI's latest and most powerful language model to date.
Each episode will have a companion story, which we've generated with GPT-4 to spark discussion. You can find these stories down in the show notes. In each episode, we seek out the brightest version of the future and learn what it'll take to get there. This is Possible. ♪
Welcome to the Possible Podcast. We are today going to be exploring a particular version of what's possible. What's possible in...
entertainment, what's possible in media. Although our illustrious guest is Trevor Noah, who hosted the Emmy and Peabody Award-winning show The Daily Show from 2015 to 2022. He's the author of the New York Times bestseller Born a Crime. He's been a comedian for over a decade. He speaks about seven languages, and he calls New York City home, just like you, Aria.
But with his breadth, I anticipate we're going everywhere. As we always do on the Possible podcast, we also share an AI story with Trevor, about how The Daily Show could be reimagined with AI down the road. So get ready to hear him talk about whether he likes or dislikes the future that GPT-4 provides.
I will also say, if you haven't read Born a Crime, I was gifted it for Christmas a few years ago, and it's just so good. And you just hear his smarts and his brilliance on The Daily Show. It's no surprise when you hear about his background and everything that he's done. It'll be interesting to hear how he talks about grit and resilience and how you build that for the future. ♪
So, Trevor, I cannot tell you how excited I am to see you and have a little bit of a chance to turn the microphone, having gone on your shows before. So welcome to Possible.
Thank you so much. And good to see you again. Good to chat to you again. I'm excited for the conversation. So, Trevor, I'm excited because Reid and I have a little East Coast, West Coast battle, me being a New Yorker and him being a West Coaster. So when you hosted the Grammys and you talked about L.A. maybe not being the greatest city in the world, I'm just going to pretend that you think New York is actually the greatest city in the world.
Oh, that's funny. I love how everyone thinks it's in relation to their city. Yes. Yeah. I'm always intrigued by how everyone thinks their city is the best city. Here's the thing I always ask New Yorkers. I go, if New York is the greatest city in the world, why does everyone have to leave it every weekend if they get a chance? Yeah.
everyone's quick to say greatest city. I feel like everyone has to say greatest city for, and then have the input that, you know, what is the specifier that makes it the greatest city for something? And then I'll agree with you. So greatest city for nonstop, New York. I'm in. All right. I'll take it. I'll take it. I know actually from various contexts, you actually work on technology, technological projects. Yeah. Everything from, you know, thinking about where it plays in the future and everything else. And,
You voiced the artificial intelligence system in Black Panther. Is the topic of AI something that you were following intensely? And was there anything that you were thinking about, when you were doing the, you know, kind of Wakanda Forever AI voice, about what role AI will be playing in the future?
In many ways, like when I would talk to Ryan Coogler, the director of Black Panther, I would always ask him what he envisioned his AI to be. Because I think everyone has a different idea of what AI is, you know, and I've come to realize some of the conversations people have about AI are endearing but misguided often, you know. So some people will talk about AI, but really
Really, they're just talking about simple machine learning. Sometimes people are talking about AI and they're just talking about processing a very simple command. And I think AI, the way I understand it, is a computer model that's getting to a place where it understands or it processes logic in a way that a human would be familiar with. And so when I was thinking of that for the Black Panther role, it's an AI that is...
but at the same time still at the mercy of the people who have created it. So, you know, and I think that's what we would hope AI will become is a tool that we're using, but then somehow interacts with us in an almost personal way, which will be interesting for us to try and, I think, grapple with, because then we're going to ask ourselves questions of sentience. We're going to ask ourselves questions of, you know,
you know, life and what is and isn't and what is personality. And yeah, I am. So definitely for that role, it was interesting thinking of an AI that is for all intents and purposes feeling, but is still not. I do think that right now, actually, in AI we're much more with tools than with beings. And that tool-versus-being question is one of the central ones that's beginning to kind of start in the dialogue.
you know, at OpenAI, we have this GPT-4 that will be coming out later, but I have access to it. I actually, in fact, generated some light bulb jokes, because I personally have an affection for light bulb jokes; I think they're a form of cultural haiku. They're very short form, you know, kind of a lens into kind of like, you know, reflecting a bias or reflecting a, you know, a meme or something else. We're going to read you a couple of the light bulb jokes about Trevor Noah that GPT-4 generated, and we'd love you to reflect on what that means about GPT-4's sense of humor, what you would improve, you know, that kind of stuff. So I'll kick it off and then we'll trade. How many Trevor Noahs does it take to change a light bulb? None. He just shines a smile and brightens the room.
All right. We started with the easy ones. We started with the kind ones. Oh, wow. Yes. Wow. That's ChatGPT as my mom. I like that. Okay. Exactly. It's got a little bit of artificial mom-telligence. I like that. Yeah. The prompt was, in the style of Trevor Noah's mom, give us a... Yes. That's what that seems like. All right. So the next one. How many Trevor Noahs does it take to change a light bulb? One, but he has to do it in six different accents and explain the cultural context of each one.
Oh, I like that. Okay. I like that. And you can tell GPT-4 does have some knowledge of you when we ask these questions. So, and now we're getting into a little bit more of like the knowledge of you on The Daily Show. Okay. How many Trevor Noahs does it take to change a light bulb? Two, one to change it and one to roast Donald Trump for not knowing how to do it. Okay.
Oh, okay. All right. That joke's a little simple, but I'll take it. No intelligence there. All right. Joke number four. How many Trevor Noahs does it take to change a light bulb? One, but he has to wait for Jon Stewart to retire first. That's funny. That's funny. See, that's a great joke. That's actually a very nice light bulb joke. Good. We have similar judgments. So what do you think about GPT-4, and, in this little microcosm, does it have a sense of humor?
Um, so I will start with the second question first. Does it have a sense of humor? I don't know the answer to that question. And I don't think any of us knows the answer to that question. Does it understand what a sense of humor may be? I think the answer is yes, it is able to learn how we use language to create what we call a sense of humor, you know? So
So, you know, understanding, maybe that's one of the most difficult questions to ask about AI, I find, because we don't know what understanding even is. You know, and I know you've done a lot of work in this, Reid, but one of the most fascinating stories in AI I came across was, so you know this, I've been...
I've been working with Microsoft for years. You know, I've been lucky enough to consult with them and it started in hardware. And then, you know, we work in philanthropy together and then it spilled over into AI and everything else. So I'd go to Redmond and I'd work at the campus and, you know, I'd travel around the world with Brad Smith. And sometimes we'd be at events with Satya and, you know, Panos and the team out there.
And one of the more fascinating stories I came across involving AI was there was a model that they were trying to train. And this model had an almost perfect track record picking between pictures of men and women, men and women, men and women. It was really simple in what it was trying to do. What it failed at consistently was picking out the women from the men in the sample if it was black people. And
Try as they may, they could not get this AI to get it right. And they kept on loading more images, more images, more images, training it, training it, training it, more images. They're like, is it a bias? Is it this? What is happening? What is happening? It kept on mislabeling black women as men.
And this is one of the most fascinating stories ever. What they did essentially was they sent the AI to Africa. I think it was to one of the centers in Kenya that Microsoft has. And they, I mean, it sounds like a really ludicrous story when you go like, we sent the AI to Africa to learn. And essentially what they did was they started training the model out there.
And over time, the model got exponentially better at understanding the difference between black men and black women. But the reason was most interesting. They realized that the AI never knew what a man or a woman was. All it had drawn was a correlation between people who wear makeup and people who don't wear makeup. And it had decided that that was man and that was woman. And so
The programmers and everyone using the AI had assumed that the AI understood what a man was and what a woman was and didn't understand why it didn't understand it and only came to realize when it went to Africa that the AI was using makeup. And because black people and black women in particular have been underserved in the makeup industry, they don't wear as much makeup.
And so they generally don't have makeup on in pictures and they don't have makeup that's prominent. And so the AI never knew. It never understood man or woman. It just went, ah, red lips, blush on cheeks, blue eyeshadow, woman. And that was it. And so I think, you know, whenever we have these conversations about that, about understanding, I think we are still at the very basic stages of understanding what understanding even is.
And then trying to draw all those correlations between all the different data points of what a thing is thinking or not thinking, or is it just inferring from an idea? You know what I mean? Yeah, 100%. One of the things that I think about this great kind of 2023 being this year of large language models and the acceleration of a variety of AI things is that we're now actually gonna have to get much more sophisticated. We kind of apply this human metaphor: understands, speaks, has a sense of humor.
And we kind of do it poorly when we get to animals because we presume that they're less intelligent because they don't really have that same model. And yet they do have a model of the world and they do have some feelings and all the rest. And we do it really crazily when it gets to like my car, you know, is feeling bad today, you know, or something like that. Now we have to be much more sophisticated and understand what understanding is. That it isn't just the question of,
You know, like, well, does it understand the way Trevor does or the way Aria does or the way Reid does? But it's like, okay, what is that notion of understanding, and how does it apply here and how does it apply here? Right. I completely agree. Yeah. And so then to answer the second question you had, what do I think of ChatGPT, you know, and GPT-4, which I've tested a little bit.
It's one of the biggest leaps in technology and in the evolution of how we do things that we have experienced in decades. I always think back to major moments in time. What was it like when the steam engine was created? What was it like when, you know, the telephone was created? What was it like? All these moments where all of a sudden you were able to do in ways that you never imagined possible.
That's where we are with ChatGPT. And I think in the same way I'm cautious, you know, or I try and tell people to be cautious about thinking about how bad it can be. I'm also cautious to think about how good it can be. I go, we genuinely don't know. It could be one of the biggest leaps forward in...
in helping us understand how thinking even works, in a strange way. From what I've seen, like its problem solving, you've seen it solve problems that haven't even existed before, its ability to try and understand logic using, you know, natural language.
It's a brilliant, brilliant tool. Well, to your point, it's limited by the data that it's trained on. And I do appreciate, I feel like right now in the AI discussion, it's like you either have people on Twitter saying that the robots are coming for us, or you have people saying, it's amazing, don't worry, you know, these aren't the droids you're looking for. And so I loved your, you know, woman versus man Microsoft story, where they had to go retrain the data. Do you see that as an
unhopeful story because, you know, this is often created by white men and so there's limitations, whether it's bias, you know, anti-women bias, et cetera. Or do you see that as a hopeful story where actually once we find out what's wrong, we can sort of fix the model? Like, how do you see that in the evolution of AI?
I'm eternally an optimist in this space. And, you know, you can probably play this recording back when the earth is burning and the robots have put us into prisons. But for me, it's a hopeful story. Because let me ask you this. If you have an AI model that you realize is biased, you are able to find and correct that bias.
in a time that is almost impossible to recreate in a human being. So I think about how biased the world is that we live in and how impossible it is. It's almost impossible to change those biases that people hold. So if you say judges are sentencing people from poorer backgrounds and people of color and black people are getting higher sentencing from a judge, how do you now go and undo that?
And so that's why people talk about dismantling a system and recreating a new one, et cetera, et cetera, et cetera. But I argue in the world of AI and in these models, you can actually create a system where you're constantly refining it and it does not have any ego attached to its decisions or the way it processes information. And so I think for me, that's more hopeful. I think the ability to change your mind is something that most human beings struggle with.
myself included, like I always ask myself the question, I go, what if I'm wrong? What if I'm wrong right now? What if I'm wrong? You know, that's, I'm always playing that in my head because I always have to think about the possibility that I could be wrong because everything I learned in life is because I was wrong. Somebody had to teach me something else.
And with AI, I don't think we have that limitation. Yes, we have to be aware, but the fact that we're having the conversations means we're at least aware. And it means when we discover, just like we do with every other technology, you discover, oh, this car, the brakes, well, yeah, they have a recall. And they go like, all right, let's fix the brakes. And now you have over-the-air updates.
Gone are the days of getting a cartridge from Nintendo and that's the game and it's done. Now you get a game and in the first week, you've got your first patch on the first day often. I think that's hopeful because...
because you would want a system that is constantly changeable, and a system where you're constantly trying to, you know, you're trying to resolve all of those bugs along the way. I mean, you hit on an issue I care deeply about. And if we could just, like, send patches to the criminal justice system and make it not discriminatory. Exactly. That's what we should be doing. A thousand percent. I watched your interview with Mira Murati on The Daily Show. And I,
you know, one of the things that I, you know, because she shares a lot of beliefs, we work together in the OpenAI context, she shares a, you know, kind of this view of, actually, in fact, amplification. And she was talking about that with you, with DALL-E and other things. I didn't quite get from the show how it sat with you. Like, do you think AI is going to be a really useful
tool in augmenting writers and artists? Will there be some replacement? You know, when you were having that dialogue, you were doing a very good job, as you do, of bringing her out. But I was curious what your reflections within your industry are about what the next, you know, year and three years of this will look like from a producing
perspective, you know, news, entertainment. Right, right, right. So here's the biggest thing I've enjoyed, but in like a curmudgeon-y kind of way in this conversation. I found it particularly interesting that everybody says the phrase, or not everybody, but a lot of people will use the phrase, AI is going to replace these jobs. These jobs are going to be taken away by AI. It is going to. And I'm like, people,
I think everybody needs to take a step back and realize you're not afraid of AI. You're afraid of the companies and the employers who are going to look for any excuse to get rid of somebody and replace that person
with either AI or with one person who can do multiple jobs. In the same way, it took, what, 20-odd people to farm a piece of land, then the tractor was invented, and then those 20 people immediately became obsolete, and one person was driving this tractor that was pulling the plow. I think one of the most important discussions we should be having is around
people, purpose, and the plans around what we're going to do when these technologies evolve, as opposed to thinking of the technologies as some sort of boogeyman, because the technology isn't. The boogeyman is capitalism. That's the truth. So you have to figure out how you manage in a world where some people's purpose may change and be moved around.
And I think, to be honest, you see a lot of classism in this conversation because when these conversations center around manufacturing jobs or mining, you'll see a lot of people saying like, look, you've got to reskill. You've got to retool. That's what happens. At the end of the day, you know, it's like mining won't be around forever. You've got to learn about green energy. That's just life. And then now AI comes and it's threatening more white collar jobs.
And all of a sudden those same people are like, you can't just have this technology that's, there are writers in Hollywood that are going to be out of jobs. There are journalists that are, is it going to write articles for them? We can't just allow this. And it's like, oh, now, first of all, you see what it's like to have a technology that may replace you. And you understand how callous a lot of your comments have been. But also I think it gets everybody, it should push everybody to the real conversation, which is what are we trying to do?
One of the most wonderful quotes I ever heard was from, I think it was Sweden's Minister of Finance, or somebody high up in Sweden's government. And he said, in Sweden, they don't care about protecting jobs. They care about protecting workers.
The people aren't the jobs. And so in Sweden, what they say is, hey, we're just going to make sure that you are always fine. Your job can go away and your job can disappear, but you don't disappear with it. And I think...
I think a lot of the fear that we're experiencing, especially in America, is in and around the fact that so many people's livelihoods are tied to their jobs. So if you don't have a job, you don't have healthcare. If you don't have a job, you don't have a credit report. If you don't have a job, you don't have access to. If you don't have a job, sometimes you don't even exist. Like what do you do is more important than who you are. And so I think in and around AI, the reason I'm an optimist is because I go,
We can create this tool, but it is forcing us to have a larger conversation around what is the job, what isn't the job. So, you know, when you ask me what I think of it, I think it's a fantastic tool. You know, it's the same way I remember when Windows 3.1 launched in my life. I was like, this is the greatest thing I've ever experienced ever. I don't want to type DIR slash page. I'm done with that.
You know what I mean? I don't want to be going through every single file and searching for a folder and typing every command prompt out there. I don't want to do that. The graphical user interface changed everything. And I think this is in many ways a different type of graphical user interface where...
It's going to enable us to either program faster or to write quicker or to summarize information in a way that we never have before. And in doing that, we can do more. The question is, how do we protect people from the inevitable conclusion of capitalism, which is a company is going to try to make as much profit as possible because of a thing called quarterly earnings, which I hate. By the way...
This isn't really our subject here, and we will get to a funny GPT-4 thing as part of it. But I do think it's important for me to say when I was an undergraduate, I was kind of like very opposed to the kind of philosophy of capitalism. Like I would say it's a great technology and a bad philosophy, you know, a little bit of the meaning of life going.
you know, I've now kind of come to the view that it's kind of like, look, what we've been doing with it as a technology is we've been modifying it in all kinds of ways. And a lot of where we've gotten to, because, you know, amongst some circles it's fashionable to be critical of capitalism, is to say, well, actually, a lot of the progress we've made since the Middle Ages in terms of, you know, kind of manufacturing and a bunch of other stuff all comes through capitalism, and does come through mechanisms like quarterly profits, which have some negative side effects too. But we've also gotten a tremendous amount out of it. So I tend to be the how-do-we-mod-it person, or, if someone has a better idea than it entirely, like, what's the new idea? But it's like, oh, we've gotten a lot of good things out of it, so I tend to be a modify-capitalism person. Oh yeah, but I think if you look at it, there's no denying that
it comes with, many things come with good, you know. But, like I learned working with my mom in the garden, which I hated: every single plant, given the opportunity, will try to destroy every other plant in its ecosystem if it's not meant to be part of it, you know. And I think we shouldn't take that for granted
in terms of, like, how, to your point, capitalism has been designed in the way we think of it now. Like, what is it actually trying to do? Everything is good, but everything taken to an extreme will have disastrous effects, you know. So fasting is good for you; fasting perpetually is starvation. Yes. You know, so drinking is good for you; drinking perpetually is drowning. I think the same thing goes for this. We have to ask ourselves the question: if we're going to exist in a society
where people's livelihoods, literally their livelihoods, are dependent on a thing called job, what happens if job no longer exists? And what if job is replaced by AI or robot or machine or anything? And so I think when we look at AI, I think we will yield better results if people aren't
spending half their energy worried that this thing is coming to get them and can spend all of their energy working on using it for what it can be used for. And so I think that's what we need to be thinking about now with AI is, okay, how does this make people's jobs as opposed to take people's jobs? And then for me, which is my passion, I go to, well, maybe it means it's not an eight hour workday anymore. It's a four hour workday.
You know, and then, I mean, I know I'm delusional in saying that because it'll never happen, but that's honestly my dream is that people will use these tools and then we just have more time in life.
Trevor, let's pull that thread for a moment. So this is your dream, like one of your big ideas. Genuinely, yeah. Let's go to the four hours a day. Paint the picture in 20 years. What would that be like? Think about it, Aria. What are we trying to do? You know, everyone thinks the week. So during the pandemic, I realized how many of our ideas are actually just constructs that we've created. Monday through Friday. Yeah, the week. What does it even mean? We're so confident. We're so confident that a weekend...
is two days and then you work for five. Everyone goes like, this makes complete sense. Totally. But then you go, then you just read a little bit and you realize, oh wait, the weekend was invented because labor unions at some point said it is not sustainable to work every single day of the week. And they had to force, they had to force manufacturing plants and factories
to give workers time off. And then, imagine that: the weekend as we know it was invented. And so when you see these discussions now, with the pandemic, people being like, should we do a four-day work week? And what do they find? Productivity doesn't actually drop. And you're like, huh? And I will challenge anyone, anyone listening to this podcast right now: tell me how much time you spend in an office where you're not working. Just be honest. Yeah.
From the time you walk in to the time you turn on your computer, then you walk over to the coffee machine and then you waste time. Then you chat to people, you catch up with them about their lives, talk about something. Then you talk about scheduling a meeting. You don't need to schedule the meeting. You get to the meeting. You talk about what your kid did at school and a funny story. They chat about something that happened at the teacher's association. You talk about your gripes in the neighborhood. The trash didn't get picked up. You have a bit of a meeting. Then you schedule another meeting.
No one's working at work. We're all lying. None of us are working at work. Most people are not working at work, especially in office jobs. And so I think if we have an honest conversation, we can get to a place where we go, you know what?
we don't all need to be in work for as long as we think we do. And I almost feel like we get out of the world of saying people are paid for their time, but they're paid for their productivity in a larger sense. And then you go, like, yeah, how many hours a week do you work? Depends on how long it takes for me to get my job done. And with AI, it doesn't take that long, and I'm an AI technician, you know.
And that's what it should be. I mean, I love it about office workers because I'm always like, take any office worker and let them be a teacher for just one week where they actually have to work all day long. And they'll be like, where was my coffee break? Yes, exactly. I'm with you.
To transition a little bit, we talked about how AI could be a tool for the future of the entertainment industry. We shared with you a story that was written by GPT-4. And so if you hate it, I didn't write it, that's fine. You're only hurting GPT-4's feelings.
And it posited a future where there was the AI-led show. And it was a show in 2033 where the AI was customizing the show to who was watching. If the person was 70, it was explaining what TikTok was. If the person was 40, it was talking to them about LeBron just, you know, taking over the scoring title. Like, what do you think about this story, or how would you use AI in the next, you know, five, 10 years to create a better entertainment show, to create a better media show, to
create something in your industry? So there were two parts of the story that I really enjoyed. The idea of creating a show for every individual person that catered to them is, I think, one of the most exciting advances in technology and in entertainment we could possibly pursue because
While it is normal for us to, like, learn together, grow together, go together, etc., you can't deny that means a lot of people get left behind because of a standardized anything. So imagine if you had a news show that catered to your level of knowledge about the subject matter and knew how to filter out what you already know and what you should know and what you don't know and what you need to know. That would be amazing. I think that would be phenomenal. The one downside, though, is,
I think we should never take for granted what we lose in society, the more niche and individualized our experiences become. You know, I think it's important for us to remember how much being a part of a society comes from having a shared experience of what reality is.
That's why I'm a big fan of cultural touchstone moments. I love big events like the World Cup, you know, the Super Bowl. Yeah, like any of these things that everybody is watching, like a, you know, a space, you know, like a moon launch, a, you know, moon landing, all of these big moments. Because what they do is they just make everybody agree on reality for a moment. Where were you the moment that?
That's such a powerful tool that we take for granted and that we're losing in a world that becomes more, you know, individualized. So it is good. Yeah, now we can watch whatever TV show, we can listen to whatever song. But it also means there are fewer of us humming at the same frequencies. There are fewer of us laughing at the same moments. So while it's good and amazing, I think for learning especially,
I think there's also an element of bringing it all together that would also be crucial. And maybe it could do that. Maybe at the end of the day, Reid watches something, you watch something, I watch something, and it makes sure that we all know that LeBron James has now surpassed the all-time scoring record, but the way in which we learned it
was completely different. And then maybe that could be what sort of brings us back together. And that's kind of the thing that I'm hopeful for. And by the way, to your earlier work comments, with which I basically completely agree, it's like, how do we use this tool? How do we do it? I think there's a whole bunch of human work that we essentially have almost infinite demand on. I'll use a parallel from LinkedIn, which is,
you know, we started LinkedIn, people said, oh, this is going to like put recruiters out of work because it'll amplify the ability for recruiters so much that you'll have one versus 10. And what I've seen over the last 20 years is we have, if anything, the same number or more recruiters. And because we have kind of infinite demand for it, it's not every job, like we don't have infinite demand for tractor drivers, right? You know, there's places where that 20 becomes one.
But I think that kind of amplification, and I think that AI in the media space can be used to build bridges and build bridges also to shared truth. Like obviously everyone's worried about the misinformation and can be used that way and how groups politicize. But I think it can also be, how do we use it to find a common truth, partially through common events as well? And I think that's one of the things that we
should be asking the creators of AI to be paying attention to and to be doing. And I think that's one of the reasons why the public dialogue about it is so important. Right, right. I think it can be achieved. You know, I always think of Wikipedia as a great example.
Every time people talk about misinformation and, you know, society being bad and whatnot, I disagree. I disagree because I look at Wikipedia as a perfect example of what naturally happens when there isn't an external factor pushing the platform to make decisions that are suboptimal for the facts that it is trying to push out. And because of Wikipedia's business model, it's accurate.
And you would think, think of the internet, think of the world we live in, think of what we think of ourselves as people. You would think Wikipedia would be trash and everything would be a lie and everything would be a scam and it's not. People pride themselves on being really good at putting out good information. The community prides itself on self-policing, on self-regulating. And what you end up with is one of the most accurate sources of information you'll ever come across. And it's also balanced.
So you'll go into a Wikipedia article and you can type anything, vaccines, and it'll say to you, now some people have thought this and it has been disproven by this and these are the studies and this is the that and this is the... And it's there, it's all laid out for you. And so I think we should never take for granted...
That's why I keep going back to the capitalism of it all, because I go to speak about AI in a vacuum is ignorant, in my opinion, because AI is not existing in a vacuum. Let's go one other kind of non-AI angle of technology, because you have this broad technology interest and AI is obviously one of the ways.
There are all these other sorts of things, AR, VR, you know, even, you know, holograms, you know, Star Trek, Smell-O-Vision, you know, like all of this stuff. Is there anything, either AI combining with that or other things that you see coming, that you think will be particularly useful for, you know, kind of society, social media? Oh, definitely. Definitely. I think there are many places where I'm excited to see AI contribute beyond just
creative expression, idea generation, and information gathering. I think one of the more exciting aspects of AI for me right now
is seeing what it'll be able to do in terms of being an assistant. I think people take for granted how wonderful and powerful AI could be as an assistant for everyone in everything. You know, your grandmother using it to tell her about her medication, but really break it down. Your child using it to ask a question to further understand what the homework assignment is actually about.
as opposed to just sitting at home blank and not understanding. Somebody at work asking for a piece of clarification with some of the materials that they may be using, you know, and I don't know, everything from building a power plant to compiling an Excel spreadsheet, whatever it is. I think those areas are really exciting. And so in the world of like AR and VR, I mean, we don't know. I've often thought, like, I'm a gamer. I love gaming. I love thinking about how AI could combine with gaming.
I think of worlds that we already experience in video games. And now imagine if AI is generating all of the conversations that every character in Grand Theft Auto is having as you're walking through the street. All these NPCs, you know, these non-playable characters you're walking around
And they're all having real conversations that are being generated and the possibility is almost endless. And then what does that mean for a world like Second Life? What does that mean for the metaverse if it ever exists? What does that mean for all of it? And then you think of, you know, small things like training. You're training to be a doctor, an engineer, a pilot, you know, a mechanic, whatever it is. Imagine a world where your instructor is AI.
You're wearing goggles that are showing you what you're going to be doing. You are able to stand in front of a Rolls Royce engine on a Boeing 787 or whatever plane it's on, and you're able to meticulously work on it and work to the level of skill that you need to to be able to get that job.
in a way that you wouldn't have before. You couldn't have flown to the right academy. You wouldn't have been able to live where you needed to live. You wouldn't have afforded accommodation on campus. And yet now you could do all of this. And your instructor moves at the pace that you need them to move at, as opposed to moving at the pace that they have to because of, you know, the hour hands on a clock. And I think all of those applications are really, really, really fascinating because
It can become everybody's personal professor where you can say, professor, I don't understand that. Could you repeat that? Could you go back? Could you slow down? Could you elaborate? Could you, could you break it down? Could you give it to me in an analogy? Could you, whatever, whatever it might be, it means that you almost have an infinite capacity for learning and applying that knowledge. And, um, I, yeah, every, every avenue I see it used in, I find, I find particularly fascinating.
I love that so much. Reid actually wrote an article called A Co-Pilot for Every Profession that was sort of in a similar vein to what you're talking about. Like, everyone thinks about, like, oh, but isn't a real-life teacher better than an AI tutor? And it's like, well, if you're a kid at home, to your point, like, sitting after school for three hours with no adult, like, oh my God, an AI tutor is so much better. And so the possibilities are endless. Like one thing we talked about was
you know, truth, information. You talked about how social media and the capitalism profit motive have sort of disrupted that cycle. Like, do you have any hopes for how to make the social media atmosphere better? How, not necessarily with AI, with anything, how to make the disinformation cycle better? I mean, this is something you, you know, you talked about on The Daily Show for the last six years. Like, how do we fix that part of society?
Or are we hopeless? Well, no, I don't think we're hopeless. I think we're misguided. In my opinion, trying to fix disinformation is trying to undo humans. I am yet to discover a period in time when there wasn't disinformation, you know?
It's literally as old as time. Go read the Bible. There's people lying and telling stories in the Bible. When I think of it that way, I go like, instead of trying to fix disinformation, first of all, we try and understand why people do it, why they don't do it. You know, we'll always study that forever. What I look at with social media rather is how do we
protect ourselves from something spreading as quickly as it does. So it's the same reason we don't allow people to own bombs. You know, fortunately, in society, most humans don't want to hurt other humans, most humans. But for those outliers, we don't want them to have an outsized ability to inflict harm upon others.
And so we try and limit their access to these weapons or to these tools of destruction. I think the same goes for social media. The one downside of social media is it's designed to create engagement. And I think sometimes we block ourselves when we talk about it being good or bad. It's not good or bad. It's just
It is designed to maximize engagement. Unfortunately for humans, and maybe this is because of our reptilian brain or whatever, we engage with danger and we engage with what we don't agree with, and we do that way more than with things we like. If you read a tweet that you like, someone tweets something out there, they go, like, nothing better than the first day of spring, you just read it and you're like, yep, keep it moving. Happy, happy, keep it moving. You might not like it. You might not retweet it. You might not anything.
But if somebody writes that they go, spring is the worst season ever invented. I wish it was winter perpetually. You would go, all right, I need to engage with this psychopath and it's on. That's engagement. And so unfortunately what happens is because the model needs engagement to remain profitable, it then has to encourage the thing that is not best for us and that is conflict. So...
How would we change that? I honestly don't know. I mean, what's interesting is, like, look at how China has handled their social media. And don't get me wrong, I'm not saying we should move to China's model. But there are a few interesting elements in how they've decided what you can and cannot do for the health of a community. You know, you cannot just inundate people with TikTok videos that, like, you know, basically mush their brains. And they are TikTok. And yet here they are saying, no, this is how we think TikTok should be applied to our country and how kids should use TikTok and what should be on TikTok and how many hours of TikTok, et cetera, et cetera, et cetera.
I don't think that's meritless. You know, I'm not saying we should move to a Chinese clampdown system, but I don't think it's meritless. The same way at some point the US government said, hey, vape pens. Actually, what's happening here? The same way, you know, the US government decides how much alcohol can be in a bottle of alcohol. The same, we decide all these things. We decide oftentimes what is best for the health of human beings and
And I think it should be no different with social media. There has to be some sort of reckoning and some sort of conversation around, can it just be unbridled? Can you just use it infinitely? Can it just spew as much hatred at you as possible? When I open my phone, I'm just going to see every racist incident that's happened in the last 15 years. And there's no context as to when or how or if it happened. That to me is, I don't think that's good. It's not healthy. It's not sustainable.
To be honest with you, I do think social media companies should be held responsible not for what is put on their platforms, but for what is pushed on their platforms. What is amplified. Yeah. And I think a lot of social media companies have tried to duck and dive and be like, oh, no, we're just a public messaging board. We're just a public, we're a public square. We don't want to decide what people say or don't say. And it's like, yeah, okay. But if that's the truth...
And if that were true, there's no public square that amplifies somebody's speech on their behalf. If Reid goes and stands in a public square and says something, the public square doesn't send that to me at home. And so I think there's a level of culpability that social media companies wish to avoid. And I think at some point, like I think of the most dystopian version of this is like, I can see a world where somebody, and I think there may be a case that's heading up either to the Supreme Court or somewhere where
Somebody is going to do something and it's going to be something terrible. And their defense is going to be that they thought they were acting either in self-defense or protecting their country or whatever it may be, because of the reality that they were presented with by social media. And I think it is then going to be an interesting case study in how much social media plays a role in determining what people do or don't do. Because if you were watching, let's put it this way, if you were watching the local news,
or even the news, like the national news, and someone like Lester Holt came on and said, breaking news, America is being invaded right now, there are aliens out there, everyone get outside and take your pots and pans, fight, fight with all your might, and you walked outside and you saw aliens, or you saw what they said were aliens, they said they're gonna be dressed like this and this is it, fight with all your might, and the president put out an address and said there's aliens, we gotta fight these aliens. You would do it most likely, right? You'd either lock yourself in your house or you'd go outside and you'd fight the aliens.
And then the next day someone comes and goes, "Ah, actually that was all fake." "Yeah, it was actually just like a... it was a fake news report. We don't know what happened." Are you liable for all the aliens that you've killed that weren't aliens? What do you do now? You're like, "Oh, they were actually humans." Are you fully liable or aren't you?
We want to make sure we capture a few rapid fire questions. And I will open with, is there a movie, song or book that fills you with optimism for the future? So a book, one of my favorite stories is by Roald Dahl. It's, I think it's the wonderful story of Henry Sugar. I love that story. And six more. Yeah. It's such a wonderful story because it's a story of a man who has everything in life,
wants to get more of everything in life, and on that journey discovers that he was essentially trying to fill a bucket full of holes, which was himself. And on this journey of trying to become the richest, the everything, he discovers that he doesn't need all of what he was chasing. He actually pares down his life, he becomes more philanthropic, he gives away more, and, yeah, it's a beautiful story about what
you know, what people can be and what we shouldn't forget we're actually trying to do. It's a really wonderful story because it reminds me that if we can find ways to tap into what we actually need in society, we can find a cleaner path to getting there. All right, rapid fire number two. Where do you see progress in society that makes you hopeful or that inspires you? Like, what good is going on that you feel really good about? Oh, everywhere, everywhere. You know, I
think one of the downsides of nonstop 24-hour news, both on television and online, is that it has made people a lot more cynical than I think we should be, because news has to be bad in order for you to find it interesting, in order for it to generate that engagement we talk about. And what that means, unfortunately, is you can live in a space where you only think bad is happening. And I'm not saying bad isn't happening.
But it is not happening at the rates and the scale that most people think it is. You know, I'll ask people questions. Someone will be like, oh, this city is not safe. And I'll go, oh, what makes you say that? Oh, you know, crime has gone up and it's just dangerous now. And I go, okay, have you been in danger? No. But have your friends been? No. But I heard of, and I saw... and I'm like, where?
And the truth is, it's just how it's told to us. You know, it's that great quote, for the great majority of mankind are more concerned by things that seem than by those that are. And so what makes me hopeful is the things that actually are. Standards of living increasing across the globe. Yes, we'll have moments where we backslide. We've always got to fight against those. But
just like a drought in, you know, in the Serengeti. There will be moments of that in life, unfortunately. And what we're always trying to do is immunize ourselves from the effects of those backsliding moments and hedge ourselves. But we shouldn't forget that we're constantly moving forward in all areas. You know, I look at how tech, a world that was once so homogenous, and blindly homogenous, has become completely comfortable having conversations now about, like, all right:
What about women in the space? And what about people of color in the space? And how are we making this more equitable? People take for granted how not just unheard of, but impossible those conversations were a few decades ago.
And now people just have them. I think that's fantastic. I think that's a wonderful place to see technology moving forward. I think in order to be a technologist, in order to be somebody who loves creating technology and working on designing a future, you have to be an optimist because you have to believe the future you're designing for will exist, or you have to believe that what you're creating will contribute to that future. So as somebody who loves working on technology and working in technology,
I, yeah, I can't help but be an optimist. And it's not even like I made myself that way. I am that way. And that's probably why I gravitate towards the world of tech. Me too. Is there a particular technology that you're watching to help us regain optimism, or to shape so we don't collapse into pessimism in various ways? Is there anything that you're particularly paying attention to there for reconnecting us,
intellectually and emotionally, and on a broader societal basis, with possibilities for optimism? Although it has many downsides, I have been really intrigued with how TikTok operates. And look, it's still relatively young versus the other social media platforms, and so I don't know what it will evolve into. It may go downhill. I don't know. Yeah.
But there's something wonderful in how they've managed to not just curate and create positive worlds for people, but they've also found a really interesting way to introduce new ideas to people and sort of, you know, poke holes in their bubbles. And I look at how much joy people have. That's oftentimes how I measure things.
You know, I don't know if you remember, there was a period, do you remember the period in life when everyone would say, have you seen this YouTube video? Oh, Charlie bit my finger. Have you seen this YouTube video? The cat playing piano. Have you, oh, I watched this YouTube video the other day, this YouTube video. Those are magical moments. Now YouTube has become a lot more long form and people don't really go to it for that type of information or content, but...
But that's beautiful. That's really, really cool. And I think TikTok is in that infancy right now. I think most social media platforms actually start there, funny enough. You know, I remember I was on Twitter when it was all about jokes. All people made was jokes and it was fun and it was cool and it was reckless as well. But that's what it was. And now it's become a lot more serious and a lot more, you know, angry and a lot more determined and...
But I think TikTok is in that space right now. And so I'm always excited to see where these technologies are going to go. How do they connect people? How do they inform people? How do they bring them joy?
A person will always smile at you and be like, I watched this TikTok the other day. Oh, have you seen that TikTok? That's wonderful. I don't think we should ever take for granted how powerful joy can be. Well, Trevor, you set us up really well for the final question we ask everyone, which is: leave us a final thought. What is possible to achieve if everything broke humanity's way? Like, if everything went right, if we achieved the possible, what does that look like for us? What is that future? What is the first step to get there?
This is going to sound weird. I hope we don't ever get there. I have recently been reading a lot about how we see the world and how puzzles and challenge and difficulty are the reason we survive as a species. And not unlike any other species, I wonder what would happen to us
If we have no challenge and everything does fall our way, does it mean we become less resilient? Does it mean we become less resistant to what may impact us? Does it mean we don't survive pandemics? Does it mean we... Because once something pops into our world that's an outlier, that doesn't go our way, does that wipe us out? Do we become such a fragile species that we don't know how to deal with adversity? I don't know...
the answer to these questions. I keep talking to people much smarter than me to try and figure it out and I like thinking about it myself. But yeah, I hope we get to a place where everybody finds a solution to an almost artificial scarcity that we've created in some aspects of what we do and
we sort of come to exist more, as opposed to just living to do. I sort of can't articulate it, but I think about how important it is to have art, like a sculpture, a beautiful building, a painting, music. People take all of these things for granted. You don't see them in schools; you see them getting cut from curriculums. And I understand why people go, like, well, I'm not going to pay so my kids learn how to play the clarinet. Are you kidding me? A recorder? What's the point of that? You can't get a job. Yeah, but it's about more than just jobs.
And so if everything fell our way, I would hope we live in a world where not everything is about do, but everything could be about be. And so then a teacher, a painter, an architect, an engineer, a pilot, a comedian, everyone could just find their purpose and meaning in a way that doesn't threaten their livelihood. Because I think if we completely lose that,
then we just become like a worker species that has no flair, no personality, and no creativity. And so I, yeah, I hope everything falls our way in that direction. We've got to keep that grit and keep that joy for sure. Thank you. And on that note, we look forward to having you voice the AI on a future Star Trek episode. And, uh,
Trevor, as a friend and as an amazing humanist, thank you very much for joining us on Possible. Thank you so much. That was so fun to hear from Trevor Noah. Reid, I'd love to hear from you. Like, what was the most surprising thing about the interview? You know Trevor, he's a friend. What surprised you? Actually, what surprised me was the depth of the optimism. In part because, obviously, when you, you know, watch him on The Daily Show, doing interviews
and his comedy routines, it tends to be what's absurd or broken, or shining a light on something in order to make a difference in it. And so you don't see the very broad and deep: what a time to be alive, oh my God, there are so many things possible, it's so important to get to that future, it's so important to do that, but of course it's important to bring humanity along with us and to have human concerns. Including, of course, a bunch of fairly
bold proposals which, you know, if they came to pass, would obviously be a spectacular utopia.
What I loved, which is not quite the same but related, is this conversation about joy, and the importance of joy, and that he's clearly a very serious person. He wants to improve the world. He sees the dangers and discrimination and wants to make the world a better place. But a lot of people like that are so sober that they can't recognize the importance of joy. And I think
one of the places where this could come together is: even if you don't have a four-hour work week or a four-hour work day, you still work the same number of hours, but you're working a better job. You're working a job that brings you more joy. You're not, to your point, doing drudgery in the field. You're not stuck in a terrible checkout job that you don't really care about, because AI has taken some of that away and you've been able to find a new job that's centered more squarely on your purpose, that you find more joy in. And so I just found that, sort of,
in reaching for that better future, we can also include joy. It doesn't just have to be about productivity. And they often go together, because the more joyful you find something, the harder you're going to work and perhaps the more you're going to want to work, which certainly resonates with me. Completely agree. And one of the things that we always love about talking with Trevor is he actually exemplifies that joy, you know, and having that humanity, having that sense of, look, what matters? Look, money is how we create an infrastructure that we all live in.
But his point that it's the joy and meaning of our lives, and how we add to each other's lives, that actually is the real goal was, I thought, the perfect expression of the kind of humanism that is very technologically aware and uses technology to amplify humanity. I think another thing that I really liked about the conversation was we sort of talked about
the jobs issue head on as it relates to AI. So obviously that's a big criticism. And Trevor shared that great quote from Sweden where they said, "We're not going to make sure your job's okay. We're going to make sure you're okay." And I think he made a great point hilariously, you know, that 10 years ago the sort of cultural elite in the United States weren't worried about the coal miners' jobs going away, weren't worried about
factory automation, but all of a sudden, when AI is coming for the journalists or the white-collar workers, there's real concern. And while being hilarious about it, he made a great point. It's like, let's look at our own classism as we think about impeding technology for the next generation. Why do we care so much more about these white-collar jobs than we did the blue-collar jobs? And obviously, I think you and I both agree that, again, let's make sure the people are okay,
not the jobs. And that's the only way you can, you know, go forward and make progress. Yeah, exactly. You know, was there a piece of information or a Trevor Noah perspective? He ranged so much more broadly than, obviously, just the media side.
But was there anything, in addition to kind of his quotes and his metaphors, that really resonated with you? I just think he's such an insightful guy. So even in thinking about his rapid-fire answers, he picked the story of Henry Sugar, which I remember reading when I was 10.
It's such a simple story about what we should all aspire to in life. And Trevor got it right that we all think about the means to the end. And it's like, no, no, no, no. What's the end? What are we here for? This is so dorky. But over the past 24 hours, I've been ruminating on the famed Sheryl Crow quote. It's not about getting what you want. It's about wanting what you got. And
And I think Trevor sort of embodied that. He's like, slow down, everyone. Why are we talking about all these things? Like, let's stroll it back. Let's talk about what's really important and let's center that in our lives. I've watched all of his specials, but I hope one day to be in the audience.
You and me both. Reid, one thing I want to ask you about is he did, you know, he did talk about capitalism. Obviously, you are someone who believes in capitalism as a way to create the greatest good. Like, did you agree with Trevor, disagree? What would you add to the discussion? Well, the nuance that I brought up a little bit in the discussion is that, you know, I am 100 percent good if we have a better idea than capitalism.
But the vast majority of critics, they just go, "Capitalism bad." And you're like, "Well, that's like saying cars are bad or airplanes are bad or an industrialized economy is bad," and so on. And you're like, these things generate all of this really amazing stuff that is a central part of the lives of the vast majority of people who engage with them. And
those people don't want to give them up, for very good reasons, and capitalism is part of how they do it. Capitalism is part of how we've gotten to the massive jump in kind of GDP per person and in prosperity. And so if there's a better idea, I'm game for it. But just yelling, whining, being negative, just negative on this stuff,
doesn't appeal to me at all, because I think it's just destructive and adds nothing positive, nothing useful to the conversation whatsoever. So that's the reason I kind of pushed back a little bit, saying I'm cautious about being anti-capitalist. Now, anyone who's got a brain and eyes and a heart can see that what we've done with capitalism over the centuries is tune it. Like we go, well, this child labor thing, let's fix that.
You know, oh, externalities are impacting the environment, well, let's start fixing that. Right, let's do more. So it's really important. So, for example, you say, well, you're going to have criticism, that's great. What would be your whole-cloth new system, and why wouldn't that be a very dangerous shift, given that we need a lot of stability and stable infrastructure? Or what's the tune you would do? And so the question is to say, well, yeah,
if we took the meaning of life to only be money, your bank balance, your quarterly profit, et cetera, et cetera, which some people do, that's obviously very demeaning to what we can be as humanity, to what joy could be. And whichever way you do that, whether it's joy, spirituality, meaningfulness, you know, whatever kind of particular lens you have on this elevation of being human,
operating within the technology of the capitalist system and the technology of the money system is a very good thing. And so it's important to say, okay, yes, there are problems, and great, what might we do to make it better? Yeah, I think as with most things, and with what I appreciated about Trevor, it's all about nuance. And when we talk about capitalism, the people who try to impede it or impede progress are missing something.
We would prefer to make the pie as big as possible,
and then worry about divvying it up later. And I think for me, the problem in the United States at least, and in several places around the world, is we haven't fixed the divvying it up yet. We haven't created the right social safety nets. We haven't made sure that every child gets a great education or that everyone has healthcare, in the United States, you know, a country that should be rich enough to provide everyone that. So I definitely agree with you that throwing stones at capitalism without providing solutions is just silly. And
let's work together to figure out what those tweaks are, what those tunes are, like exactly what you're talking about, to use our hearts to, you know, fix it, to build toward the possible future that we want. So hopefully episodes of this pod are going to teach us something about how we can get to a better capitalism.
Possible is produced by Wonder Media Network, hosted by me, Reid Hoffman, and Aria Finger. Our showrunner is Sean Young. Possible is produced by Edie Allard and Sarah Schleid. Jenny Kaplan is our executive producer and editor. Special thanks to Chelsea Williamson, Jill Fritzo, Stephen Fortelms, Jennifer Sandler, Madi Salehi, Surya Yalamanchili, Saida Satyava, Ian Ellis, Greg Beato, Ben Rellis, and the team at CityVox.