Sam Altman is out as CEO of OpenAI. A superstar CEO on one side, a disgruntled board on the other. 747 of 770 employees sent a scathing open letter to the board. Five days after he was unexpectedly fired, Sam Altman is back. Does this even count as a firing? This was a brutal... I guess I'm not really supposed to talk about this right now. ♪
This is What Now? with Trevor Noah.
Hey, what's going on?
Nice to meet you. How you doing? Good. Absolute pleasure, man. Mine too. Thanks for taking the time. Thank you. At what I feel like is a crazy time, right?
Feels like the craziest time I have yet lived through. Yeah, I mean, you're at the center of it all. So I wonder what that feels like. Because I'm just an avid watcher of everything in this space and in this world. And I feel like you're somebody who's been affected by it all. I mean, just right now, we get the news, Sam Altman was on the shortlist for Time Magazine Person of the Year. Thrilled not to get that. Thrilled not to get it? Yeah, of course. Why? Why?
I have had more attention this year than I would have liked to have my entire life. And that is a big one. And I'm happy for Taylor Swift. Okay. Okay. So you don't like the attention? You don't want the attention? No, it's been brutal. I mean, it's like fun in some ways and it's like useful in some ways. But from like a personal life, like quality of life trade-off? Yeah. Yeah, definitely not. But, you know, this is it now. Like this is what I signed up for. Right. It's the infamy now.
Yeah. Do people recognize you in the streets? That's the kind of trade-off that's really bad. Yeah. I just feel like you never, I'm sure it happens to you too, but I never get to be anonymous anymore. Yeah, but people don't ask me about the future. They don't ask if you're going to destroy the world. Yeah, I believe that. Exactly. There's a slight difference. People might want a selfie from me. That's the extent of it. Yeah, a lot of selfies. Well, congratulations. You are Time Magazine's CEO of the Year. Yeah. That's probably one of the strangest moments, right? Because I guess Time Magazine is making this decision now.
A few weeks ago, you might not have been CEO of the year. I don't know if they would have still been able to give you the award. I guess it was for your work before. Yeah, I don't know. I don't know how it works. How does it feel to be back as CEO? I'm still, like, recompiling reality, to be honest. Yeah? I mean, it feels great in some sense, because one of the things I learned through this whole thing is how much I love the company and the mission and the people. Right. And...
you know, I had a couple of experiences in that whole thing where I went through like all of the, like the full range of human emotion it felt like in short periods of time. Um, but a very clarifying moment for me was, uh, the like, so it all happened on like a Friday afternoon at Friday at noon. And then the next, the next morning, Saturday morning, a couple of the board members called me and said, you know, would you like to talk about coming back? And I had really complicated feelings about that. Um,
But it was very clarifying at the end of it to be like, yes, I do. Like, I really, I love this place and what we're doing. And I think this is like important to the world and like the thing I care most about.
It feels like in the world of tech, hiring and firing is something that everybody has to get used to. I mean, I know back in the day you were at Y Combinator, right? And you were fired from that position. And everyone has a story in – wait, wait, what is – I don't want to debate that. No, no, no, no. Tell me, tell me. These are things – you know, you get the research and then you go from there. Oh, I mean, I had, like, decided a year earlier that I wanted to just come do OpenAI. Right. And it was, like, a complicated transition to get here. But I had been working on both OpenAI and YC, and very much decided that I wanted to go do OpenAI. Okay. And I've never regretted that one. All right. So then you've never been fired, and this is a tough place to be in as a person. Not–
Does this even count as a firing? Like if you get fired and then you need to be hired back. Oh, no, what I was going to say is not only... Like this was a brutal... I guess I'm not really supposed to talk about this right now. This was a very painful thing. Well, I... It felt to me personally just as a human like super unfair the way it was handled. Yeah, yeah. I can imagine. You know, a lot of people will talk about...
getting fired from their jobs. It became a trend, I guess, during COVID especially. People would talk about getting an email or a mass video that would go out, and then thousands of employees would be let go. You seldom think it would be possible for that to happen to a CEO of a company. And then, I think even more so, you don't think of it happening to a CEO who many people have talked about
like the Steve Jobs of this generation and the future. And you don't say that about yourself, by the way. Certainly not. No, I think a lot of people say that about you, you know, because, I mean, I was thinking about this and I was going, I think calling you the Steve Jobs of this generation is unfair. In my opinion, I think you're the Prometheus of this generation. No, you really are. You really are. It seems like to me you have stolen fire from the gods, right?
And you are at the forefront of this movement and this time that we're now living through, where once AI was only the stuff of sci-fi and legend. You know, you are now the face at the forefront of what could change civilization forever. Do you think it'll change everything going forward? I do. I mean, I could totally be wrong about what I'm about to say. But my sense is we will build something that
Everybody, almost everybody agrees is AGI. The definition is hard, but we will build something that people will look at and say, all right, you all did that. That's artificial general intelligence. Yeah, yeah, yeah. Like, you know, a human level or beyond human level system. Before you go into the details on that, like, what would you say is the biggest difference between what people think AI is and what artificial general intelligence is?
We're getting close enough that the way people define it is important and there are differences in it. So for some people, they mean a system that can do some significant fraction of current human work. Of course, we'll find new jobs. We'll find new things to do. But for other people, they mean something more like a system that can help discover new scientific knowledge. And those are obviously very different milestones, have very different impact on the world. But
The reason I don't like the term anymore, even though I'm so stuck with it, I can't stop myself from using it. You don't like which term? AGI. Okay. All that I think it really means to most people now is like really smart AI. But it's become super fuzzy in what it is other than that. And I think largely just because we're getting closer. But the point I was going to try and make was we're going to make AGI, whatever you want to call that. And then at least in the short and medium term, it's going to change the world much less than people think.
Much less than people think. Yeah, in the short term. I think society has a lot of inertia. The economy has a lot of inertia. The way people live their lives has a lot of inertia. Yeah. And this is probably healthy. This is probably good for us to manage this transition. But we all kind of do things in certain ways and we're used to it. And society, as a superorganism, does things in a certain way and is kind of used to it. So –
And watching what happened with GPT-4, as an example, I think was instructive. People had this, like, real freak-out moment when we first launched it. Yeah. They said, wow, I didn't think this was going to happen. Here it is. And then they went on with their lives. And it definitely changed things. People definitely use it. It's a better technology to have in the world than not. And of course, you know, GPT-4 is not very good, and 5, 6, 7, whatever, are going to be way, way better. But
GPT-4 in ChatGPT's interface, I think, was the moment where a lot of people went from not taking it seriously to taking it very seriously. And yet life goes on. Is that something you think is good for us as humanity and society? Is life supposed to just go on? I think. Or, as one of the fathers of this product, one of the parents of this idea, do you wish that we all stopped and took a moment to, I guess, take stock of where we are?
I think the resilience of humans individually and humanity as a whole is fantastic. Okay. And I'm very happy that we have this ability to absorb and adapt to new technology, to changes and have it become just like, you know, part of the world. It really is wonderful. The...
I think COVID was a recent example where we watched this. Yeah. You know, like the world kind of just adapted pretty quickly and then it felt pretty normal pretty quickly. I mean, another example in a sort of non-serious way but instructive was when all of that UFO stuff came out. This was a couple of years ago now. Yeah. A lot of my friends would say things like, hmm,
Maybe those are real UFOs or real aliens or whatever. People who are real skeptics. And yet they just kind of, like, went to work and played with their kids the next day. Yeah, because, I mean, what are you going to do? What are you going to do? What are you going to do? If they're flying by, they're flying by. What are you going to do? So do I wish that we had taken more time to take stock? We are doing that as a world. And I think that's great. I'm a huge believer that iterative deployment of these technologies is really important. We want...
We don't want to go build AGI off in secret in a lab, have no one know it's coming, and then drop it on the world all at once and have people have to say like, huh,
Here we are. You think we have to get used to it gradually and sort of grow with it as a technology? Yeah. And so this conversation now that society, that our leaders, our institutions are having, where people actually use the technology, have a feel for it, what it does, what it can't do, where the risks are, where the benefits are, I think that's awesome. And I think, like, maybe in some sense the best thing we ever did for our mission, so far, was to adopt the strategy of iterative deployment. Like, we could have built this in secret.
and then built it up for years longer and then just deployed it. And that would have been bad.
It's interesting. Today we walked into the OpenAI buildings. It's like a little bit of a fortress. It feels like the home of the future. I saw a post of yours. Did you come in as a guest today? Not anymore. I'm back now. I did one day during the middle of it. All right. I saw you had a post where you came in as a guest. I was like, damn, that's a weird one. It's like coming home, but then it's not home, but then it is home. It felt like it should have been somehow...
a very strange moment to, like, put on a guest badge here. Yeah. But everyone was, like, so tired, so exhausted, on so much adrenaline. Yeah. It really did not feel momentous in the way that I guess I could say I had hoped it would. It should have been, like, a funny, you know, moment to reflect on and tell stories about. There were moments that day that were like that. Like, one of my proudest moments of that day is I was
very tired, very distracted. And, you know, we thought the board was going to put me back as CEO that day, but in case they didn't, I got interviewed for L3, which is like our lowest level software engineering job by one of our like best people. And, you know, he gave me a yes. That was like a very proud moment. Okay. Okay. So you still got the skills. But the badge was not as poignant as I would have hoped. Right.
I'd love to know what you think you've done right as CEO to have the level of support that we've publicly seen from the people who work at OpenAI. When the story broke, and I won't ask you for the details because I know you can't comment about the internal investigation stuff. I think I won't. Yeah, I mean that stuff. But what I mean is I know you can sort of speak about just the feelings and what's been happening in the company as a whole. It's rare that we'll see a situation like
unfold the way it did with OpenAI. You know, you have this company and this idea that for one minute doesn't exist for most people on the globe. The next minute, you release ChatGPT, this simple prompt, just a little chat box.
that changes the world. I think you go to 1 million users in the fastest time of any tech product. I think five days. Yeah, five days. And then it shoots to 100 million people. And it very quickly, I know on an anecdotal level, for me, it went from nobody in the world knew what this thing was. I was explaining it to people, trying to get them to understand it. I have to show them like poetry and simple things they would get. And then people are telling me about it. And now it just becomes this ubiquitous idea where people are trying to come to grips with what it is and what it means.
But on the other side, you have this company that's trying to in some way, shape or form harness and shape the future. And the people are behind you. You know, we see the story. Sam Altman is out.
No longer CEO. And then the rumors start swirling everywhere. I mean, I don't know if you saw some of them. They were crazy. One of the craziest things I saw, and it was wild and funny, was someone said, I have it from good sources that Sam was fired for trying to have sex with the AI. That's what someone... I mean, I don't even know how I'm supposed to react to that.
I saw that and I was like... I guess given the moment, I should officially deny that, which did not happen. Yeah, and I don't think it could happen because I don't think people understand the combination of the two things. But what got me was how...
the salaciousness of the event seemed to bring OpenAI into a different spotlight and a different moment. And one of the big things was the support you had from your team. Like, people coming out and saying, "We're with Sam no matter what happens." And that doesn't normally happen in companies. CEOs and their employees are generally in some way, shape or form disconnected. But it feels like this is more than just a team.
What I'm about to say is not false modesty at all. There's plenty of places I'd willingly take a lot of credit. I think this one, though, was not about me other than me as sort of like a figurehead representation. But I think one thing that we have done well here is a mission that people really believe in the importance of. And it was a... I think what happened there was like...
People realized that the mission and the organization and the team, that we have all worked so hard on and made such progress on but have so much more to do, like, that was under real threat. And I think that was what got the reaction. It was really not about me personally, although hopefully people like me and think I do a good job. It was about the shared loyalty we all feel and the sense of duty to completing the mission,
and wanting to maximize our chances at the ability to do that. At the top level, what do you think the mission is? Is it to get to artificial general intelligence? Get the benefits of AGI distributed as broadly as possible and successfully confront all of the safety challenges along the way. Okay, that's an interesting second line. And I would love to chat to you about that later, getting into the safety of it all.
When you look at OpenAI as an organization, the very genesis of OpenAI was really strange. And you'll correct me at any point if I'm wrong, but it seems like it was started...
Very much with safety in mind, you know, where you brought this team of people together and you said, we want to start an organization, a company, a collective, that is trying to create the most ethical AI possible, that will benefit society. And you see that even in, I guess, the profits, the way the company defines how its investors can receive profit, and so on.
But even that changed at some point in OpenAI. Do you think you can withstand the forces of capitalism? I mean, there's so much money in this. Do you think that you can truly maintain a world where money doesn't define what you're trying to do and why you're trying to do it? It has to be some factor. Like, just if you think about the costs of training these systems alone, we have to find some ways to...
play on the field of capitalism, for lack of a better phrase. Okay. But I don't think it will ever be our primary motivator. And by the way, I like capitalism. I think it has huge flaws, but relative to any other system the world has tried, I think it is still the best thing we've come up with. But that doesn't mean we shouldn't strive to do better. And I think we will find ways to raise the enormous, like, record-setting amounts of capital that we will need to be able to continue to advance the forefront of this technology. That was, like, one of our learnings early on: this stuff is just way more expensive than we ever thought. Like, we knew...
We kind of knew we had this idea of scaling systems, but we just didn't know how far it was going to go. You've always been a big fan of scaling. That's something I've read about you. Even one of your mentors, and I think one of the people you invest with now in fusion power, they said, whenever you bring an issue to Sam...
The first thing he thinks about is how can we fix this? How can we solve it? And the second thing he says immediately is how do we scale the solutions? I don't remember. I'm terrible with names. But I know it was somebody you work with. Interesting. No, it is right, but I haven't heard someone say that about me before. Oh, yeah, yeah. But it is – I think that it's been sort of like one of my life observations across many different –
like, facets of companies and also just fields, that scale often yields surprising results. So, like, scaling up these AI models led to very surprising results. Scaling up the fusion generator makes it much better, in all of these obvious but some non-obvious ways too. Scaling up companies has non-obvious benefits. Scaling up groups of companies, like Y Combinator, has non-obvious benefits. And I think there's just something about this that is...
not taken seriously enough. And in our own case, you know, in the early days, we knew scale was going to be important. If we had been smarter or more courageous thinkers or whatever, we would have like swung bigger out of the gate. But it's like really hard to say, I want to go build a $10 billion bigger computer. So we didn't. Right. And we learned it more slowly than we should have, but we did.
But now we see how much scale we're going to need. And again, I think capitalism is cool. I have nothing against it as a system. Well, no, that's not true. I have a lot of things against it as a system. But I have no pushback that it's better than anything else we have yet discovered. Have you asked ChatGPT if it could design a system? I have not.
maybe not to design a new system, but, like, you know, I've asked a lot of questions about, like, how AI and capitalism are going to intersect and what that means. One of the things that we... So we were right about the most important of our initial assumptions: that AI was going to happen, that deep learning was going to work. Which a lot of people laughed at, by the way. Totally. Oh man, we got ruthlessly laughed at. Yeah. But even some of our thoughts about how to get there, we were right about. But we were wrong about
a lot of the details, which of course happens with science and that's fine. You know, we had a very different approach for how we thought we were going to build this before the language model stuff started to work. We also, I think, had a very different conception of what it was going to be like to build an AGI. And we didn't understand this idea that it was going to be like iterative tools that got better and better that you kind of just talk to like a human. And so, and so our thinking was very confused about
Well, when you build an AGI, what happens? And we sort of thought about it as there was this moment before AGI and then this moment of AGI. And, you know, then you needed to like give that over to some other system and some other governance. I now think it can be, and I'm really happy about this because I think it's much easier to navigate. I think it can be a...
I don't want to say like just another tool because it's different in all these ways, but in some other sense, we have made a new tool for humanity. We've added something to the tool chest. People are going to use that to do all sorts of incredible things. But
People remain the architects of the future, not one AGI in the sky. It's: you can do things you couldn't do before. I can do things I couldn't do before. We'll be able to do a lot more. And in that sense, I can imagine a world where part of the way we fulfill our mission is we just make really great tools that massively impact human ability and everything else. And I'm pretty excited about that. Like, I love that we offer free ChatGPT with no ads.
Because I personally really think ads have been a problem for the internet. But we just like put this tool out there. That's the downside of capitalism, right? Yeah, yeah. One of them. I think there's much bigger ones personally. But we put this tool out there and people get to use it for free. And we're not like trying to turn them into the product. We're not trying to make them use it more. And I think that shows like an interesting path that we can do more on.
So let's do this. In our time together in this conversation, there's so many things I would like us to get to, hopefully. We won't be able to answer all questions, obviously, but there are a few ideas, a few headings and a few spaces I wanted us to live within. I guess the first and most timely is...
What happens now for the future of the company? Where do you see it going? You know, one of the things I found particularly interesting was how the new board was composed, you know, for OpenAI. You know, where previously you had, like, women on the board, now you don't. You know, where previously you had people who had, like, no financial incentive on the board, now you do. And I wonder if...
if you worry that that guardrail that you were part of implementing is now gone. You know, do you have a board that's now not focused on protecting people or, you know, defining a safer future, as opposed to making money and getting this thing to be as good or as big as it can be? Well, I think our previous governance structure and board
didn't work in some important sense. So I'm all for figuring out how to improve that, and I'll support the board in their work to do that. Obviously the board needs to grow and diversify, and that'll be something that I think happens quickly. And voices of people who are going to advocate for people who are traditionally not advocated for, and be really thoughtful about not only AI safety, but just the lessons we can take from the past about how to make these very complex systems that interact with society in all of these ways as good as possible, which is both mitigating the bad and sharing the upside, that all needs to be represented. So I'm excited to have a second chance at getting all these things right. And we clearly got them wrong before. But yeah, like,
diversifying the board, making sure that we represent all of the major classes of stakeholders that need to be represented here, figuring out how we make this a more democratic thing, continuing to push for governments to make some decisions governing this technology, which I know is imperfect, but I think better than any other method of doing this that we can think of so far. Engaging with our user base more to, like, let them help set the limits on how this works. That's all super important. So that'll be one major thing going forward: the board, expanding the board, and governance. Yeah. Again, I know our current board is small, but I think they're so committed to all the things you were just talking about. Then there's another big class of problems.
If you'd asked me a week ago, I would have said stabilizing the company was my top thing. But internally, at least, I feel pretty good. We did not lose a single customer. We did not lose a single employee. We continued to grow. That's pretty amazing. We continued to ship new products. Our key partnerships feel strengthened, not hampered, by this. And things are on pace there. And the research and product plan for the first half of next year, I think, feels better and more focused than ever. But clearly there's, like, a lot of external stabilization we still have to do. And then beyond that, we're really confronting the possibility that we just, like,
we have not been planning ambitiously enough for success. You know, we've had, like, ChatGPT Plus. If you want to subscribe to ChatGPT Plus right now, you are not able to. You've not been able to. We just ran out. Too many people. Yeah. And so given how good we think the future systems we create are going to be, and how much people seem to want to use these, we have been, like, behind the airplane all year long. We'd like to finally get caught up.
I found myself constantly thinking about you as a person, you know, when the whole board saga was taking place. And whenever there's a storm, I'm always interested in what's happening in the eye of the storm. Yeah. You know, and I wondered, like, where were you when this all broke? Like, what were you doing? What was going on in your world on a personal level? Yeah.
The reason I laughed is, the thing people say about me is I'm, like, good at sitting in the eye of the hurricane while it turns around me and staying super calm. And this time, it turns out, not so much. This was the experience of being in the eye of the storm and having it not be calm. I was in Las Vegas at F1. Oh, okay. Yeah. You an F1 fan? I am, yeah. All right. Who's your team? Do you have one? Honestly, I like...
I mean, Verstappen is so good it's hard to say, but I feel like that's the answer everyone would say. I still think he's just unbelievable. It depends on when they joined the sport. Like, I was a Schumacher fan, because that's when I started watching. Well, I mean, Nigel Mansell, then, like, Ayrton Senna, you know what I mean? But yeah, okay. No, Verstappen. He's precise with it. I see why.
And, just like, it almost gets, like, boring watching him win so often, but it's incredible. So I was, like, so excited for that. I got in late on a Thursday. That first night, they forgot to weld down a manhole cover. So someone drove over it in the first lap of practice, it blew up, like, one of Ferrari's engines, and stopped the practice. So,
I didn't get to watch it. I never got to watch any race that whole weekend. I was in my hotel room, took this call. I had no idea what it was going to be, and got fired by the board. And it felt like a dream. I was confused. It was chaotic. It did not feel real. Obviously I was, like, upset, and it was painful, but confusion was just, like, the dominant emotion at that point. It was like,
It was just in a fog, in a haze. I was like... I didn't understand what was happening. It happened in this, like, unprecedentedly, in my opinion, crazy way. And then in the next, like...
Half hour, I got so many messages that iMessage, like, broke on my phone. Wow. And... Who is this from? Employees? Everyone. Everyone. My phone was just, like, unusable because it was just notifications nonstop. And iMessage, like, hit this thing where it stopped working for a while. Then messages got delivered late. Then it marked everything as read. So I couldn't even, like, tell...
Like, you know, it was just chaotic. And I was talking to the team here, trying to figure out what was going on. Like, Microsoft is calling, everybody else is calling. And it was just, like, really unsettling and didn't feel real. And then I kind of got a little bit collected and was like, you know what? I can go on, and I really want to go work on AGI somehow. If I can't do that at OpenAI, I'm still going to do it.
And I was thinking about the best way to do that. Greg quit. Some other people quit. Started just getting like tons of messages from people saying like, we want to come work with you however it's going to be. And at that point, going back to OpenAI was not on my mind at all. Yeah, I can imagine. I was just like thinking about whatever the future was going to be. But I kind of didn't have a sense of like what a...
industry event this was, because I, like, wasn't really reading the news. All I could tell was I was getting, like, crazy numbers of messages. Right, because you're actually in the storm. Yeah. And I was just trying to, like, you know, be supportive of OpenAI, figure out what I want to do next, try to understand what was happening. And then I flew back to California, met with some people, and kind of was just, like, very focused on going forward at that point. But, you know, also, like, wishing the best for OpenAI.
And then I stayed up, like, most of that first night, couldn't really sleep. Also, there were just, like, tons of conversations happening. And then it was sort of like a crazy weekend from there. But I'm sure I still have not... Like, I'm still a little bit in shock and a little bit just trying to pick up the pieces. You know, I'm sure as I have time to sit and process this, I'll have a lot more feelings about it. Right. Yeah.
Do you feel like you just had to jump straight back into everything? Because to your point, you're on this mission. You can see in your eyes you're very driven. And the world has now tipped over a precipice that it can never return from. So you're moving towards something. All of a sudden, it doesn't seem like you'll be able to achieve it in the sphere that you're in. But as you say, Microsoft steps in. Satya Nadella says, hey, come and work with us. We'll rebuild this team.
If there's one thing people say about Sam Altman, if they've worked with him, it's that he is tenacious. He's unrelenting. He does not believe in letting life stop you if you have a goal and if you believe in something. And it seems like you are moving towards that. You said nothing publicly about OpenAI. You weren't disparaging in any way. But it feels like it took a toll on you. For sure.
I mean, I don't think it's anything I won't, like, bounce back from, but I think it'd be impossible to go through this and not have it take a toll on you. That'd be really strange. Did it feel like you were losing a piece of yourself? Yeah. I mean, like, this... We started OpenAI at the very end of 2015. The first day of work was really in 2016. I was working on this at YC for a while, but I've been full-time on this since early 2019. And it has, like...
AGI and my family are the two main things I care about. So losing one of those is... And again, maybe in some sense I should say, oh, you know, I got to work on AGI and I care more about the mission. But of course I also care about this org, these people, our users, our shareholders, everything we built up here. So, yeah, I mean, it was just unbelievably painful.
The only comparable set of life experience I had, and that one was, of course, much worse, was when my dad died. Wow. And that was like a very sudden thing. But the sense of like confusion and loss...
And, you know, you get like, in that case, I felt like I had like a little bit of time to just really like feel it all. But then there was so much to do. Like it was like so unexpected that I just ended up having to pick up the pieces of his life for a little while. And it wasn't until like a week after that, that I really got a moment to just like catch my breath and be like, holy shit, like I can't believe this happened. So, yeah, that was much worse. But it was there's like echoes of that same thing here. I can only I can only imagine that.
When you look towards the future of the company and your role in it, how do you now find a balance between moving OpenAI forward
continuously propelling yourselves in the direction you believe, but then also, you know, do you still have an emergency brake? Is there some system within the company where you say, if we feel like we're creating something that's going to adversely affect society, we will step in, we will stop this? Do you have that ability, and is it baked in? Yeah, of course. And we've had it in the past. We've created systems that we've chosen not to deploy. Oh, interesting. And I'm sure we will again in the future.
Or we've created a system and just said, hey, we need much longer to make this safe before we can deploy it. Like with GPT-4, it took us almost eight months after we finished training before we were ready to release it to do all of the alignment and safety testing that we wanted. I remember talking to some of the team about that. And that's not a board decision. That's just the people in here doing their jobs and being committed to the mission. So that will continue on. And one of the things I'm really proud of
about this team is the ability to operate well in chaos, crisis, uncertainty, stress. I give them like an A plus on that. They did such a good job. And as we get closer to more powerful, very powerful systems, I think that ability of the culture and the team we have built is...
the most important element, you know, to like keep your head cool in a crisis and make good, thoughtful decisions. I think the team here really proved that they can do that. And that's super important. There were, I saw this thing where someone was like, you know, the thing we learned
about OpenAI is that Sam can run the company without any job there. And I think that's totally wrong. I think that's not at all what happened. I think what happened is, the right learning, is the company can totally run without me. - It's a culture that's-- - The team is ready, the culture is ready. I think that's, I'm just super proud of that. Really happy to be back and doing it. But I sleep better at night having watched the team
through this given the challenges ahead. And there will be bigger challenges than this that will come up. But I think in some subjective sense, I hope and believe this is the hardest one, because we were so unprepared. And now we kind of realize the stakes, and that in some important sense, we're just not a regular company. Oh, yeah. Far from it. Far from it. Let's talk a little bit about that. Yeah.
ChatGPT, OpenAI, you know, whatever you may end up calling it. Because, I mean, you've got DALL·E, you've got Whisper, you've got all these amazing products. Do you have any name ideas, brand architecture ideas for us? I would love it. I feel like ChatGPT has done it. I feel like it is now ubiquitous. Yeah, it's a horrible name, but it may be too ubiquitous to ever change. You can't change it. You think you can change it at this point? I mean, could we drop it down to just GPT, or just Chat? I don't know. I don't know. Maybe not. Sometimes I feel like a product or a name or an idea...
grows beyond the marketer's dream space and then people just have it. Yeah, no marketer ever would have picked ChatGPT as the name for this, but we may be stuck with it, and that might be all right. Yeah. And it's now, I mean, just the multimodal aspects of it fascinate me. You know, I remember when I first saw DALL·E come out.
You know, and it was just an idea and seeing how it worked and seeing this program that could create a picture from nothing but noise. And I was trying to explain it to people and they were going, but where did it get the picture from? I was like, there was no picture. There was no source image. And they're like, that's not possible. It saw something. And I was like, and it's so hard to explain some of this. Sometimes it's even hard to understand for myself. But when we look at this world that we're currently living in, you know, we talk about them as numbers.
GPT-3.5, GPT-4, GPT-5, 6, 7, whatever it may be. I like to remove the technical term in that way and talk more about the actual use cases of the products. One thing we saw between products, between GPT-3.5 and GPT-4, was what we would call reasoning changes.
On a much higher level. A little bit of it, yeah. Like creativity in some way. The first sparks of it. Yes, yes, yes, exactly. And when I look at this product and this world that you're creating now, you know, with general large language models and now the specialized large language models, I wonder, do you think...
that the use case is going to change dramatically? Do you think that what might right now just be like a little chatbot that people are, like, do you think this will be the way the product remains? Or do you think it will become a world where everything becomes the specialized GPTs? You know, a world where, you know, Trevor has his GPT that's trying to do things for him, or this company has their GPT that's doing things for them. Like, where do you see it? Obviously, it's hard to predict the future, but where do you see it going from where we are right now?
I think it'll be a mix of those. It is hard to predict the future. Probably I'll be wrong here, but I'll try anyway. I think it'll be a mix of the two things that you just said. One, the base model is going to get so good that I have a hard time saying with conviction, here's what it won't be able to do. That's going to take a long time, but I think that's where we're heading. What's a long time on your horizon? Like, how do you measure it? Like, not in the next few years. Okay.
It will get much better every year in the next few years. But, like... I was going to say I'm certain. I think it's highly likely there will still be plenty of things that the model in 2026 can't do. But doesn't the model always surprise you? You know, when I talk to engineers who work in this space, when I talk to anyone who's involved in AI or adjacent to AI, the number one thing people say, the number one word, is surprised. Yeah.
People keep saying that. They go, we were surprised. We thought ChatGPT was learning about this field, and all of a sudden it started speaking a language. Or we thought we were teaching it about this, and all of a sudden it knew how to build bridges or something. For what it's worth, that was the subjective experience of most people here, maybe between, like...
2019 and 2022 or something like that. Okay. But now I think we have learned not to be surprised. And now we trust the exponential, most of us. So GPT-5, or whatever we call it, will be great in a bunch of ways. We will be surprised about specific things it can do and that it can't do, but no one will be surprised that it's awesome. Huh. Like, at this point, I think we've really internalized that in a deep way.
The second thing you touched on, though, is these custom GPTs. And more importantly than that, you also touched on the personal GPTs, like the Trevor GPT. And that, I think, is going to be a giant thing of the next couple of years, where if you want, these models will get to know you, access your personal data, answer things in the way you want, work really effectively in your context. And I think a lot of people are going to want that. Yeah, I mean, I can see a lot of people wanting that. It almost made me wonder if...
the new workforce becomes one where your GPT is almost your resume. Your GPT is almost more valuable than you are in a strange way. Do you know what I mean? It's like a combination of everything you think and everything you've thought and the way you synthesize ideas combined with your own personal GPT becomes... And I mean, this is me just like thinking of a crazy future where you go...
You literally get to a job and they go, what's your GPT? And you say, well, here's mine. You know, we always think of these as agents, these personalized agents. I'm going to have this thing go do things for me. But it'd be interesting, with what you're saying, if instead this is how other people interact with you. Right. Like, this is your impression, your avatar, your echo, whatever. I can see it getting to that. Because, I mean, what are we, if not a culmination or combination of all of our... It's a strange thought, but I could believe it.
I'm constantly fascinated by where it could go and what it could do. You know why? When ChatGPT first blew up, right, in those first few weeks, I will never forget how people quickly realized that the robot revolution, I know it's not robots, but just, you know, for people, they're like, oh, the robot revolution, the machine revolution.
wasn't replacing the jobs that they thought it would. You know, people thought it would replace... Truck drivers. Yeah, truck drivers, etc. And yet, we've come to find that, no, those jobs are actually harder to replace. Yeah.
And it's, in fact, all the jobs that have been, quote unquote, like, thinky jobs. You know, it's like your white collar. Oh, you're a lawyer? Oh, they might not need as many lawyers when you have, you know, GPT-5, 6, 7, whatever you want. You know, you're an engineer. You are, like... Where do you... The human body is really an amazing thing. It really is, right? Yeah. It really is.
Do you see any advancements where you think it could replace the human body? Or are we still in, like, mind land? No, I think we will get robots to work eventually, like humanoid-like robots to work eventually. And, you know, we worked on that. In the early days of OpenAI, we had a robotics program. Oh, I didn't know that. We did. We made a robotic hand that could solve a Rubik's Cube with one hand. It takes a lot of dexterity. I think there's, like, a bunch of different insights
rolled into that. But one is that it's just much easier to make progress in the world of bits than the world of atoms. The robot was hard for all the wrong reasons. It wasn't hard because it was helping us advance hard research problems. It was hard because the robot kept breaking, and it wasn't that accurate, and the simulator was bad. Whereas with a language model, you can do all that virtually. You can make way faster progress. So it, like...
Focusing on the cognitive stuff helped us push on more productive problems faster. But also, in a very important way, I think solving the cognitive tasks is the more important problem. Like, if you make a robot, it can't necessarily figure out how to, like, go help you make a system to do the cognitive tasks. Yeah. But if you make a system that does the cognitive tasks, it can help you figure out how to make a better robot. Oh, yeah. That makes sense. And so I think, like, cognition...
was the core of the thing that we wanted to thrust at. And I think that was the right decision. But I hope we'll get back to robots. Do you have an idea of when you will consider artificial general intelligence achieved? Like, how do we know? Me personally? I do not feel like mission accomplished. Because everyone talks about artificial general intelligence. But then I go, how do we know what that is?
So this comes back to that point earlier where everyone's got a different definition. I'll tell you personally when I'll be thrilled. When we have a system that can help discover novel physics, I'll be very thrilled. But that feels like it's way beyond general intelligence. That seems like you... Do you know what I mean? It's beyond, I think, what most people would count as that. Like maybe... Because this is what I think of sometimes as I go...
How do we define that general intelligence? Are we defining it as brilliance in a certain field? Or are we defining it as... Like a child is artificially generally intelligent. That's for sure. But you have to keep programming it. They come out. They don't speak. They don't know how to walk. They don't know how to... And you're constantly programming this AGI to get to where it needs to go. So...
How will you, like, if you get to the point where you have a four-year-old child version of AGI. If we have a system that can just, like, just figure it out, you know, can just go autonomously with some help from its parents. Yeah. Figure out the world in the way that a four-year-old kid does. Yeah, we can call that an AGI. If we can really address that
truly generalized ability to be confronted with a new problem and just figure it out. Not perfectly. A four-year-old doesn't always figure it out perfectly either. But, you know, then we've clearly got it. Are we able to get there if we don't fundamentally understand...
the brain and the mind? It seems like it. You think we can get there? I think so. Or can we get to a place where... So I'm sure you know about this. One of my favorite stories in the world of AI is... I think it was actually a project that Microsoft was working on. But they had this...
system that was trying to learn how to discern between male and female faces, right? And it was pretty accurate at some point. It was at, like, 99.9% accuracy. However, it kept failing with black people, and black women in particular.
It kept on mischaracterizing them as men. And the researchers kept working, and they were like, what is it? What is it? What is it? At some point, and I tell the story this way, it could be a little bit wrong, but I found it funny. At some point, quote unquote, they sent the AI to, I think it was Kenya. So they sent the AI to Africa. And then they told the research team in Kenya, can you work with this for a while and try and figure it out?
And then, while the AI was running on that side of the world with their data sets and African faces, it became more and more accurate with black women specifically. But at the end of it, they found that the AI never knew the difference between a male face and a female face. All it had been drawing was a correlation with makeup.
And so the AI was going, people who have red lips and rosy cheeks and maybe blue on their eyelids, those are women. And then the other ones are men. And because the researchers said, yes, you're correct, yes, you're correct, it just found, like, a quote unquote cheat code. You know, and you know how this works way beyond what I understand. But it just figured out a cheat code. It's like, oh, I understand what you think a man is and what a woman is. And it gave that to them. And then they realized,
because black women are generally underserved when it comes to makeup and they don't wear makeup, you know, the system just didn't know. But we didn't know that it didn't know. And so I wonder, how will we know that the AGI doesn't know or does know something?
Or will we know that it's just cheating to get there? Like, how do we know? And what is the cost of us not knowing when it's intertwined with so many aspects of our lives? One of the things that I believe we will make progress on is the ability to understand what these systems are doing. So right now, interpretability is like...
made some progress. That's the field of, like, looking at one of these models, and there are different levels you can do this at. You can try to understand what every artificial neuron in a system is doing, or you can look, as the system is thinking step by step, at which of these steps you do or don't agree with. Okay. And there will be even more we'll discover. But the ability to understand what these systems are doing, hopefully have them explain to us
why they're coming to certain conclusions and do it accurately and robustly. I think we're going to make progress there before I think we're going to truly understand how these systems are capable of doing what they do and also how our own brains are capable of doing what they do. So I think we will eventually get to understand that. I'm so curious. I'm sure you are too. I am. But it seems to me that we'll have more progress in
doing what we know works to make these systems better and better and having them help us with the interpretability challenges. And also, I think as these systems get smarter, they will just be fooled less often. So a more sophisticated system might not have made that makeup distinction. It may have learned at a deeper level. And I think we see evidence of stuff like that happening. You know, there's two things you actually make me think of when you say not get fooled that easily. One is the safety side. One is the accuracy side.
We, one of the first things, and I mean, the press ate this up. You remember, they were like, oh, the AI hallucinates and it thinks that it is going to kill me. And it thinks, and people love using the word think, by the way, with large language models, which I find particularly funny. Because I always think like journalists, you know, they should be trying to understand what it's doing before they report it.
But they've done, I think, the general public a disservice in using the word think quite a lot. I mean, I have empathy for it. Like, we need to use familiar terms and we need to anthropomorphize. But I agree with you that it is a disservice. Yeah, because if you're saying it thinks, then people go, well, will it think about killing me? And it's like, no, it's not thinking. You know, it's really just using this magical transformer to
figure out where words most likely fit in relation to each other. What do you think you're doing? What am I doing? Yeah, that's an interesting one. That's why... And this is what I was getting to, right? The ideas that we put together. We talk about hallucinating. Let me start with the first part. Do you think we can get to a place where AI doesn't hallucinate? Well...
I think the better version of that question is: can we get to an AI that doesn't hallucinate when we don't want it to, at a similar rate to humans not doing it? And on that one, I would say yes. But actually, a big part of why people like these systems is that they do novel things. And if it only ever...
Yeah, like hallucination is this sort of feature and bug. Well, that's what I was about to ask. Isn't hallucinating part of being an intelligent being? Totally. If you think about the way... Like if we think about the way an AI researcher does work. Okay. They look at a bunch of data. They come up with ideas. They read a bunch of stuff. And then they start thinking, well, maybe this or this. Maybe I should try this experiment. And now I got this data back. So that like didn't quite work. Now I'll come up with this new idea. But this human ability to...
come up with new hypotheses, new explanations that have never existed before and most of which are wrong, but then have a process and a feedback loop to go figure out which ones might make sense and then do make sense. That's like a key element of human progress. Yeah.
This episode is brought to you by KPMG. The people at KPMG make the difference for their clients. Talented teams leveraging the right technology to uncover insights that illuminate opportunity. KPMG teams together with their clients, working shoulder to shoulder to help grow and transform their enterprise. Are you ready to make the difference together? Visit kpmg.us/transformation to learn more.
How do we prevent the AI from, you know, that garbage-in, garbage-out output, like, that scenario? How do we...
Right now, the AI is working off of information that humans have created in some way, shape, or form. It is learning from what we've considered learnable material. With everything that's popped up now, you know, the OpenAIs, the Anthropics, the LaMDAs, you name them, it feels like we could get to a world where AI is pumping out more information than humans are, and it may not be vetted as much as it should be. How do we then... Is the AI going to get better when it is learning from its own information
in a way that might not be vetted? Like, do you get what I'm saying? Totally. How do we figure that out? So it comes back to this issue of knowing how to behave in different contexts. You want hallucinations in the creative process, but you don't want hallucinations when you're trying to report accurate facts on a situation. And right now you have these systems that can generate these beautiful new images that are hallucinations in some important sense, but good ones.
But then you have a system that when you want it to be only factual, again, it's gotten much better, but it's still a long way to go on there. And it's fine. I think it's good if these systems are being trained on their own generated data, as long as there is a...
a process where the systems are learning what data is good and what is bad. Which, again, it's not enough to say hallucinated or not, because if it's coming up with new scientific ideas, those may start off as hallucinations. Which is valuable. But, you know, what is good, what is bad. And then also that there is enough human oversight of that process that we are all still collectively in control of where these things are going. But with those constraints, I think it's great that
Future systems are going to be trained on generated data. And then you reminded me of something else, which is I've been wondering, I don't know quite how to calculate this, but I would like to know when there's more words generated by, say, GPT-5 or 6 or whatever than all of humanity at a given time. If that feels like an important milestone, actually now that I'm saying that out loud, maybe it doesn't. Generated in what way? Oh, like where the model is producing more words than all of humanity in a given year.
So there's, you know, 8 billion humans or whatever. Yeah, that does seem interesting. They speak however many words per year on average. You can figure out what that is.
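[As an aside, the calculation the two of them gesture at here can be roughed out in a few lines. Every figure below, the population, the words spoken per person per day, and especially the model's fleet-wide token throughput, is a ballpark assumption for illustration, not a number from the conversation.]

```python
# Back-of-the-envelope: a model's yearly word output vs. all of humanity's.
# All inputs are rough assumptions for illustration only.
HUMANS = 8_000_000_000             # approximate world population
WORDS_PER_PERSON_PER_DAY = 16_000  # commonly cited rough estimate of daily spoken words

human_words_per_year = HUMANS * WORDS_PER_PERSON_PER_DAY * 365

# Hypothetical aggregate generation rate across all users of a model,
# using ~0.75 English words per token as a rule of thumb.
MODEL_TOKENS_PER_SECOND = 1_000_000
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

model_words_per_year = MODEL_TOKENS_PER_SECOND * 0.75 * SECONDS_PER_YEAR

print(f"humanity: ~{human_words_per_year:.1e} words/year")
print(f"model:    ~{model_words_per_year:.1e} words/year")
```

[Under these particular assumptions the model is still roughly three orders of magnitude short of humanity's yearly output; the whole answer hinges on the assumed fleet-wide token rate.]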
I mean, yeah. What does it give us, though, is the question on the other side. That's why I was taking it back after I said it. For some reason, it feels like an important milestone to me, but I can't think of... It feels like an important milestone in like a monkey typewriter kind of way because maybe humans are, you know, we're all monkey typewriting the whole time and that's where things... I think it's worthwhile. Yeah, the amount of... I don't want to use the word thinking because I think you're right not to use it, but the amount of like...
Maybe we can just say: the amount of words generated by AI versus all of humanity. Yeah. I'm going to lose you soon, so I want to jump into a few questions that people will kill me if I don't ask you. Okay. So one of the main ones, this is from my side personally.
We always talk about AI learning from the data, right? They're fed data sets. And we talk about this. That's why you need these mega computers that cost billions and billions of dollars so that the computers can learn. How do we...
teach an AI to think better than the humans that have given it data that is clearly flawed? So, for instance, how does an AI learn beyond the limited data that we've put out there? You know, when it comes to race, when it comes to economics, when it comes to ideas. Because we're limited, how do we teach it to not be as limited as we are if we're feeding it data that's limited? We don't know yet, but that's, like, one of our biggest research thrusts in front of us: how do we surpass human data? Mm-hmm.
And I hope that if we can do this again a year from now, I'll be able to tell you. But I don't know yet. We don't know yet. It's really important. However, a thing that I do believe is this is going to be a force to combat injustice in the world in a super important way. I think these systems will be
they won't have the same deep flaws that all humans do. They will be able to be made to be far less racist, far less sexist, far less biased. They'll be a force for economic justice in the world. I think, you know, if you make a great AI tutor or a great AI medical advisor available, that helps the poorest half of the world more than the richest half, even though it helps lift everybody up. So,
I don't have an answer to the scientific question you asked, but I do at this point feel confident that these systems can be great for increasing justice in the world. Of course, we have to do some hard societal work to make them in fact be that. Okay. Maybe that leads then perfectly to a second question, which is,
What are you doing? What is OpenAI doing? Are you even considering doing anything to try and mitigate how much this new technology once again creates the haves and the have-nots? Every new technology that's come out has been amazing for society as a whole, if we call it that. But you can't deny it creates a moment in time where, if you have it, you've got it all. And if you don't, you're out of the game.
I think that we'll learn a lot more as we go, but currently I think one really important thing we do is offer truly free service, which means no ad supported, but just a free service to more than 100 million people who are using it every week.
And it's not fair to say anyone, because in some countries we are still blocked. But we're trying to get closer and closer to a point where anyone can access really high-quality, easy-to-use, free AI. That is important to all of us personally. And I think there are other things that we'd like to do with the technology, like if we can help
cure diseases with AI and make those cures available to the world, that's clearly beneficial. But putting this tool in the hands of as many people as we can and letting them use it to architect the future, that is super important. And I think we can push this much, much further. Okay.
Two more questions. Can I add one more thing to that? Oh, yeah. Yeah, go. It's all your time. Go ahead. Yeah. The other thing that I think is important to that is who gets to make the decisions about what these systems say and not say or do and not do? Like, who gets to set the limits? Yeah. And I...
Like right now, it is basically the people who work at OpenAI deciding and no one would say that's like a fair representation of the world. So figuring out not just how we spread access of this technology, but how we democratize the governance of it, that's like a huge challenge for us in the coming year. Well, that sort of goes to what I was about to ask you. The safety side of it all. You know, we spoke about this right in the beginning of the conversation.
When designing something that can change the world, there always has to be an acknowledgement of the fact that it can change the world in the worst way or for the worst. You know, with each leap of technology, there's been an outsized ability for one person to do more damage. Is it possible, the first part, to make AI completely safe? And then the second part of it is,
What is your nightmare scenario? What is the thing that you think of that would make you press a red button that shuts open AI and all AI down? When you go, you know what, if this can happen, we have to shut it all down. What are you afraid of? And so the first one is, can you make it safe? And the second part is, what is your nightmare scenario? The way I think about... So first of all, I think the insight that you started with, which is...
the number of people that can cause catastrophic harm goes down every decade, or roughly every decade. That seems to me to be a deeply true thing that we as a society have to confront. Second, about making a system safe, I don't think of it as quite a binary thing. Like, we say airplanes are safe, but airplanes do still crash, very infrequently, amazingly infrequently to me. We say that drugs are safe, but the FDA will still certify a drug that can cause some people to die sometimes. And so safety is not like... It's society deciding something is acceptably safe given the risk-reward trade-offs. And that I think we can get to. But it doesn't mean things aren't going to go really wrong. I think things will go really wrong with AI. What we have to prevent...
And I think society actually has a fairly good, messy but good, process for collectively determining what safety thresholds should be. That is a complex negotiation with a lot of stakeholders that we as a society have gotten better and better at over time. But what we have to prevent, and I think what you were touching on there, is the kind of catastrophic risks. So nuclear is the example everyone gives. You know, nuclear war had this very global impact, right?
And so the world treated it differently and has done what I think is a remarkable job the last almost 80 years. And I think there will be things with AI that are like that. Certainly one example people talk about a lot is AI being used to design and manufacture synthetic pathogens that can cause a huge problem. Another thing people talk a lot about is...
security issues, and AI that can just go hack beyond what any human could do, and certainly at any scale. And then there's another category of things that I think are just new, which is if the model gets capable enough that it can design its own way to exfiltrate its weights off of a server, make a lot of copies, and modify its behavior. More of the sci-fi scenario. But I think we do as a world need to
stare that in the face. Maybe not that specific case, but this idea that there is catastrophic, or potentially even existential, risk in a way that, just because we can't precisely define it, doesn't mean we get to ignore it either. And so we're doing a lot of work here to try to forecast and measure what those issues might be, when they might come, and how we would detect them early.
And I think all the people who say you shouldn't talk about this at all, you should just talk about the issues of misinformation and bias and the issues of today, they are wrong. We have to talk about both. We have to be safe at every step of the way. Okay. That's terrifying, as I thought it would be. So then I go to – by the way, are you actually thinking about running for governor? Was that a real thing? No, no, no. I thought about it very briefly in like a –
2017? Okay. Or 16 even, something like that? I thought so. That seemed like a, you know. So like a couple of weeks kind of like vague entertainment of an idea. Okay, okay. I guess my final question for you then is, you know, the what now of it all. What is your dream? If Sam Altman could wave a magic wand and have AI be exactly what you hope it will be,
What will it do for the future? What are all the good sides? What are all the upsides for everybody out there? And I mean on every level. This is like a nice positive thing. Thank you for asking this. I think you should always end on the positives. Look, I think we are heading into the greatest period of abundance that humanity has ever seen. And –
I think the two main drivers of that are AI and energy, but there are going to be others too. But those two things: the ability to come up with any idea, the ability to make it happen, and to do this at mass scale, where the limits to what people can have are going to be sort of what they can imagine and what we can collectively negotiate as a society. I think this is going to be amazing. We were talking earlier, like, what does it mean if every student gets a better educational experience than
the richest student with the best access can get today. What does it mean if we all have better healthcare than the richest person with the best access can get today? What does it mean if people are, generally speaking, freed up to work on whatever they find most personally fulfilling, even if it means they have to be new kinds of job categories? What does it mean if everybody can
You know, presumably you and I both really love our jobs. Yeah. But I don't think that's true for everybody. Yeah, clearly. What does it mean if everybody gets to have a job that they love, and that they have the resources of a large company or a large team at their disposal? So, you know, maybe instead of the 800 people at OpenAI, everybody gets 800 even smarter AI systems that can do all these things, and people just get to create and make all these...
I think this is remarkable. And I think this is a world that we are heading to. And it'll require a lot of work in addition to the technology to make it happen. Like society is going to have to make some changes. But the fact that we are heading into this age of abundance, I'm very happy about. I'll leave you with this from my side.
I'm a huge fan, huge, huge fan of the potential upsides of AI. You know, I work in education in South Africa. My dream has always been for every kid to have access to the best tutor possible. You know what I mean? Literally no child left behind, because they can learn at their pace.
By the way, what's happening with children who are using ChatGPT to learn things? The stories, like, I get emails every day. It's phenomenal. It really is. "I'm, you know, this 14-year-old kid in this whatever country, and I learned all of calculus on my own." It really is phenomenal. And especially as it becomes even more multimodal, when you have video and all of that, it's going to be amazing. I dream about that. To your point, healthcare. I dream about all of it.
The one existential question I don't think we're asking enough, and I hope you will, and maybe you have been asking it though, is how do we redefine the purpose of humankind once AI has effectively supplanted all of these things? Because whether you like it or not, throughout history, you realize our purpose has often defined our progress, right?
There's a time when our purpose was just religion. And so for good and bad, if you think about it, religion was really great at getting people to think and move in a certain direction beyond themselves. And they went like, this is my purpose. I wake up to serve God, whichever God you were thinking of. I wake up to serve God. I wake up to please God. I wake up. And it makes humans…
I think one, it makes them feel like they're moving towards something and two, it gives them a sense of community and belonging. And I feel like as we move into a world where AI removes this, the one thing I hope we don't forget is how many people have tied their complete identities to what they do versus who they are. And once we take that away, when you don't have a clerk, when you don't have a secretary, when you don't have a switchboard operator, when you don't have an assistant, when you don't have a factory worker, when you don't have all of these things,
You know, we've seen what happens in history oftentimes. It's like radicalism pops up. There's a mass backlash. Like, have you thought about that? Is there a way you can intercept that before it happens? How would you describe what our purpose is right now? I think right now our purpose is survival tied to the generation of income in some way, shape, or form, because that's how we've been told survival works, right? You have to make money in order to survive. But we've seen that there have been pockets in time where
that has been redefined. France is a great example, where they had, and I think they still have a version of it, the artists' fund, where they went, we'll pay you as an artist to just make things. Just make France look beautiful. And that was beautiful. I know you're a fan of UBI, for instance. Yeah, we shouldn't go before we talk about that. Well, I don't think people's survival should be tied to their willingness or ability to work. I think that's a waste of human potential. Yeah, I agree with you completely. I think...
Wait, let me ask you this before you go. Why do you think universal basic income is so important? Because you don't waste your time or your money on things you don't believe in, and you spend a lot of time and money on universal basic income. I mean, the last I saw was that there's, like, a $40 million project that you're a part of. $60 million. I don't think universal basic income is a complete solution, of course, to the challenges in front of us. But I do think that, like, eliminating poverty is just inarguably a good thing to do. I think better redistribution of resources will lead to a better society for everyone. But I don't think giving away money is the key part of this. Giving away tools and giving away governance, I think, is more important. People want to be architects of the future. As much as I could say there's been a consistent thread of meaning, or of a mission for humanity, I think it is, you know, survive and thrive, for sure, on an individual basis. But collectively,
we do have an emergent collective desire to make the future better. Yes. Now, we get off track lots of times, but the human story is: let's make the future better. And that is technology. That is governance. That's the way we treat each other. That's going off to explore the stars. That's understanding the way the universe works. It's whatever it is. And I have so much confidence that that is so deep in us. No matter what tools we get, that base desire, that mission of humanity
as a species and as individuals, that's not going to go anywhere. So I'm super optimistic about what the world looks like two generations from now. But what you got at is really important, which is people who
are, you know, already in their careers and actually pretty happy and don't want change. Yeah. And change is coming. One thing we've seen with previous technological revolutions is in about two generations, it seems like society and people can adapt to any amount of job turnover. Right. But not in 10 years, certainly not in five years. Right. And we're going to go face that, I think, to some degree. As we said earlier, I think it'll be slower,
than people think, but still faster than society has had to deal with in the past. And what that's going to mean and how we have to adapt through that, I'm definitely a little afraid of.
But we're going to have to confront it, and I'm confident we'll figure it out. And I'm also confident that if you give our children and grandchildren better tools than we had, they are just going to do things that absolutely astonish us. And I hope they feel horrible about how bad we all had it. Like, I hope the future is just so amazing, with this human spirit and desire to go off and figure it out and express ourselves and design a better and better world, and way beyond the world.
I think that's wonderful. I'm really happy about that. And I think in some sense, we shouldn't make too much of this little thing. You know that scene in Star Wars where one of the bad guys is like, "Don't be too impressed"? Oh, I think it's Vader. It's like, "Don't be too impressed with this technological terror you've created. It's nothing compared to the power of the Force." Yes. I do feel that way about AI in some important sense, which is that we shouldn't be too impressed with this. The human spirit will see us through, and it is much bigger than any one technological revolution. Yeah.
I mean, it's a beautiful message of hope. I hope you're right, because I love the technology. But it will be choppy. No, you know what? And the one thing I would leave with you, Sam Altman, as Time's CEO of the Year, and one of the people of the year, I think you'll continue to be that, especially in this role, because of how much impact OpenAI and AI itself are going to have on us. One thing I would implore you to do is
continue to remember that feeling you had when you were fired, as you're creating a technology that's going to put many people in a similar position. Because I see you have that humanity in you, and I hope as you create, you'll constantly be thinking about that. You know what I did Saturday morning, like early Saturday morning when I couldn't sleep? I wrote down: what can I learn from this that will help me be better when
other people go through a similar thing and blame me like I'm blaming the board right now. And have you figured it out? A lot of, I mean, there's a lot of useful like single lessons, but the empathy I gained out of this whole experience and my like recompilation of values for sure was a blessing in disguise. Like it was at a painful cost, but I'm happy to have had the experience in that sense.
Well, Sam, thank you for the time. Thank you. Really appreciate it. I hope we do chat in a year about all the new advancements. You should definitely come do that. That'll be a fun one. I will. Definitely, man. All right, cool. Thank you. Thank you.
What Now with Trevor Noah is produced by Spotify Studios in partnership with Day Zero Productions, Fullwell 73, and Odyssey's Pineapple Street Studios. The show is executive produced by Trevor Noah, Ben Winston, Jenna Weiss-Berman, and Barry Finkel. Produced by Emmanuel Hapsis and Marina Henke.
Music, mixing and mastering by Hannes Braun. Thank you so much for taking the time and tuning in. Thank you for listening. I hope you enjoyed the conversation. I hope we left you with something. Hopefully we'll see you again next week. Same time, which is whenever you listen. Same place, which is wherever you listened. Next Thursday, all new episode. What now?