
Sam Altman on the future of AI and humanity

January 7, 2025

WorkLife with Adam Grant

People

Adam Grant

Sam Altman
Leads OpenAI toward AGI and superintelligence, redefining the path of AI development, and driving the commercialization and application of AI technology.
Topics
Sam Altman: AI is surpassing humans in creativity, empathy, judgment, and persuasion faster than he expected. The latest AI models are smarter than him in almost every way, but that hasn't really affected his daily life. AI's changes to everyday life are gradual; only over the long term will they prove transformative. Adam Grant: Humans are living through a historic transition in which we are no longer the smartest thing on Earth. In the future, people will value agility over ability, and connecting dots will matter more than collecting facts. New tools raise both what people are capable of and what is expected of them, and people will learn how to do more complex, more impactful things.


Key Insights

What did Sam Altman learn from being fired and rehired at OpenAI?

Sam Altman learned the importance of clear communication during and after such processes. He also gained pride in seeing the executive team operate effectively without him, which reinforced his belief in the strength of the company and its leadership.

Why does Sam Altman believe humans will still value human connection over AI interactions?

Sam Altman believes that human connection fulfills a deep social need to be part of a group or society, which AI cannot replicate. He thinks that while AI can simulate empathy and provide helpful interactions, it lacks the drama, tension, and social dynamics that make human relationships meaningful.

What are Sam Altman's thoughts on the ethical implications of AI?

Sam Altman emphasizes that humans must set the rules for AI to follow, as AI cannot autonomously determine ethical boundaries. He also cautions against relying too heavily on historical analogies for AI regulation, advocating instead for systems tailored to the unique challenges and capabilities of AI.

How does Sam Altman envision the role of AI in the future economy?

Sam Altman believes AI will transform the economy by enabling humans to focus on new and better tasks. He predicts that while some jobs will disappear, humans will always find new ways to contribute, leveraging AI to enhance creativity and productivity.

What is Sam Altman's perspective on the rapid progress of AI surpassing human capabilities?

Sam Altman acknowledges that AI has already surpassed humans in many areas, such as creativity, empathy, and judgment. He finds it surprising but believes society will adapt, and scientific progress will accelerate as a result.

What does Sam Altman hope for the next generation in a world with advanced AI?

Sam Altman hopes for a world of abundance and prosperity where people can live more fulfilling lives. He envisions a future where AI enhances human potential and improves the quality of life for everyone.

Transcript


Hello, my name is Laura Beyer. I'm the head of brand partnerships at TED.

I'm also a graduate of Georgetown University's McDonough School of Business, where I joined a diverse and globally connected network of business leaders dedicated to building a meaningful legacy. My transformative time inside and outside the classroom provided me with the knowledge and skills to address complex issues and identify new opportunities in the workplace. I engaged with the Georgetown community in ways that sharpened my strategic, analytical, and communication skills.

I'm now connected with accomplished alumni who support one another's personal and professional journeys. When I finished my master's program, I was ready to excel in business and make an impact on society, which I've been able to accomplish here at TED. You can earn a master's degree that fits your future. Build your legacy with Georgetown McDonough. Visit msb.georgetown.edu slash TED.

You should celebrate yourself every day, but some days you should celebrate with jewelry. Whether you want to commemorate an unforgettable moment or just bring some added sparkle to your collection, Blue Nile can offer you expert guidance and a wide assortment of jewelry of the highest quality at the best price.

Go to BlueNile.com today and experience the ease and convenience of shopping Blue Nile, the original online jeweler since 1999. That's BlueNile.com. BlueNile.com. My dad works in B2B marketing. He came by my school for career day and said he was a big ROAS man. Then he told everyone how much he loved calculating his return on ad spend.

My friends are still laughing at me to this day. Not everyone gets B2B, but with LinkedIn, you'll be able to reach people who do. Get $100 credit on your next ad campaign. Go to linkedin.com slash results to claim your credit. That's linkedin.com slash results. Terms and conditions apply. LinkedIn, the place to be, to be.

One of the surprises for me about kind of this trajectory OpenAI has launched onto since the launch of ChatGPT is how many things can go wrong by one o'clock in the afternoon.

Hey everyone, it's Adam Grant. Welcome back to Rethinking, my podcast on the science of what makes us tick with the TED Audio Collective. I'm an organizational psychologist, and I'm taking you inside the minds of fascinating people to explore new thoughts and new ways of thinking.

My guest today is Sam Altman, CEO and co-founder of OpenAI. Since Sam and his colleagues first dreamed up ChatGPT, a lot has changed. You and I are living through this once-in-human-history transition where humans go from being the smartest thing on planet Earth to not the smartest thing on planet Earth.

The exponential progress of AI has made me rethink many of my assumptions about what's uniquely human and raised far more questions than answers. Since the source code is a black box, I figured it was time to go to the source himself. Having crossed paths with Sam at a few events, I've appreciated his willingness to think out loud instead of just sticking to scripted talking points, even when his opinions are unpopular.

I suspect that in a couple of years, on almost any topic, the most interesting, maybe the most empathetic conversation that you could have will be with an AI. Sam Altman does his own tech check. How did that happen? Ah, you know, I don't know. It's fine. There's no handler here. Sometimes. I have to start where I'm sure many conversations have kicked off over the past year, which is, what did it feel like to be fired from your own company? This, like, surreal haze, the confusion...

Confusion was kind of the dominant first emotion. Then it was like, I went through everything. But confusion was the first one. And then? Then, like, frustration, anger, sadness, gratitude. I mean, it was everything. Wow. That 48 hours was like a full range of human emotion. It was like impressive in the breadth. What did you do with those emotions in that 48 hours? Honestly, there was so much to do, just tactically, that there was not a lot of time for, like,

dealing with any emotions. So in those 48 hours, not much. And then it was like hard after when the dust was settling in. I had to like get back to work in the midst of all of this. I remember Steve Jobs saying years later after he was forced out of Apple that it was awful tasting medicine, but I guess the patient needed it.

Is that relatable in any way, or is this situation just too different? Maybe it hasn't been long enough. I have not reflected deeply on it recently. I think it was so different from the Steve Jobs case in all of these ways. And it was also just so short. The whole thing was like totally over in five days. It was like a very strange fever dream, and then like back to work picking up the pieces. I guess five days versus a decade is a slightly different learning curve. Right. What did you learn, lesson-wise? Actually, maybe I was wrong. Maybe it was only four days. I think it was four days.

I learned a bunch of stuff that I would do differently next time about how we communicated during and after that process and like the need to just sort of be direct and clear about what's happening. I think there was this like cloud of suspicion over OpenAI for a long time that we could have done a better job with. I knew I worked with great people, but seeing how good the team was in a crisis and in a stressful situation with uncertainty, one of the proudest moments for me was watching the executive team kind of operate without me.

for a little while and knowing that any of them would be perfectly capable of running the company. And I felt a lot of pride both about picking those people, about teaching them to whatever degree I did, and just that the company was in a very strong place. I'm surprised to hear you say that. I had assumed your proudest moment would have been just the sheer number of employees who stood behind you. As an organizational psychologist, I thought it was staggering to see the outpouring of loyalty and support from inside.

It did feel nice, but that's not the thing that sticks with me. I remember feeling very proud of the team, but not for that reason. Well, I guess that's also very Jobsian then. When he was asked what his proudest achievement was, it wasn't the Mac or the iPod or the iPad. It was the team that built those products.

I don't do the research. I don't build the products. I make some decisions, but not most of them. The thing I get to build is the company. So that is certainly the thing I have pride of authorship over. So what do you actually do? How do you spend your time? It's a great question. On any given day, it's pretty different and fairly...

chaotic. Somehow the early mornings are never that chaotic, but then it often like all goes off the rails by the afternoon and there's all this stuff that's happening and you're kind of in reaction mode and firefighting mode. So I've learned to get the really important things done early in the day. I spend the majority of my time thinking about research and the products that we build and

and then less on everything else. But what that could look like at any given time is very different. So one of the things that I've been very curious about as I've watched you turn the world upside down in the last couple of years is what's going to happen to humans? I've been tracking what I think is the most interesting research that's been done so far. And humans...

Humans are losing a lot faster than I hoped. A lot faster. So I think we're already behind on creativity, on empathy, on judgment, on persuasion. And I want to get your reactions to some data points in each of those areas. But first, just your commentary on the overall. Are you surprised by how quickly AI has surpassed a lot of human capabilities? Our latest model feels smarter than me.

in almost every way. And it doesn't really impact my life. - Really? - I still care about the same kinds of things as before. I can work a lot more effectively. I assume as society digests this new technology, society will move much faster. Certainly scientific progress, I hope, will move much faster.

And we are coexisting with this amazing new artifact, tool, whatever you want to call it. But how different does your day-to-day life feel now from a few years ago? Kind of not that different. I think that over the very long term, AI really does change everything. But I guess what I would have naively thought a decade ago is the day that we had a model as powerful as our most powerful model, now everything was going to change.

And now I think that was a naive take. I think this is the standard. We overestimate change in the short run and underestimate it in the long run, right? Exactly. So you're living a version of that. Eventually, I think the whole economy transforms. We'll find new things to do. I have no worry about that. We always find new jobs, even though every time we stare at a new technology, we assume they're all going to go away. It's true that some jobs go away, but we find so many new things to do and hopefully so many better things to do.

I think what's going to happen is this is just the next step in a long unfolding exponential curve of technological progress. I think in some ways the AI revolution looks to me like the opposite of the internet, because back then people were running companies, they didn't believe that the internet was going to change the world, and their companies died because they didn't make the changes they needed to make. But for the people who bought in, it was really clear what the action implications were.

Like, I need to have a functioning website. I need to know how to sell my products through that website, right? It was not rocket science to adapt to the digital revolution. What I'm hearing right now from a lot of founders and CEOs is the reverse, which is everybody believes that AI is game-changing.

And nobody has a clue what it means for leadership, for work, for organizations, for products and services. They're all in the dark. In that sense, it's more like the industrial revolution than the internet revolution. There are huge known unknowns of how this is going to play out. But I think we can say a lot of things about how it is going to play out too. I want to hear those things. A couple of hypotheses that I have. One is that we're going to stop...

valuing ability and start valuing agility in humans. There will be a kind of ability we still really value, but it will not be raw intellectual horsepower to the same degree. And what do you think the new ability is that matters? I mean, the kind of dumb version of this would be figuring out what questions to ask will be more important than figuring out the answer. That's consistent with what I've seen even just in the last couple of years, which is we used to put a premium on how much knowledge you had collected in your brain.

If you were a fact collector, that made you smart and respected. And now I think it's much more valuable to be a connector of dots than a collector of facts, that if you can synthesize and recognize patterns, you have an edge. You ever watch that TV show Battlestar Galactica? One of the things they say again and again in the show is, all this happened before, all this will happen again. And when people talk about the AI revolution, it does feel different to me in some super important qualitative ways. But also it's...

It reminds me of previous technological panics. When I was a kid, this new thing launched on the internet. I thought it was cool. Other people thought it was cool. It was clearly way better than the stuff that came before. I was not quite old enough yet for this to happen directly to me, but the older kids told me about it. The teachers started

banning the Google because... Did they call it the Google? The Google. If you could just look up every fact, then what was the purpose of going to history class and memorizing facts? We were going to lose something so critical about how we teach our children and what it means to be a responsible member of society. And if you could just look up any fact instantly, you didn't even have to, like, fire up the combustion engine, drive to the library, look in the card catalog, find a book. It was just there.

It felt unjust. It felt wrong. It felt like we were going to lose something. We weren't going to do that. And with all of these, what happens is like we get better tools, expectations go up, but so does what someone's capable of. And we just...

learn how to do more difficult, more impactful, more interesting, whatever things. And I expect AI to be like that too. If you asked someone a few years ago, A, will there be a system as powerful as o1 in 2024? And B, if an oracle told you you were wrong, and there will be, how much would the world change?

How much would your day-to-day life change? How would we face an existential risk or whatever? Almost everybody you asked would have said definitely not on the first one. But if I'm wrong and it happens, like we're pretty fucked on the second. And yet this amazing thing happened and here we are. So...

In the realm of innovation, there's a new paper by Aidan Toner-Rodgers, which shows some great news for R&D scientists: when they're AI-assisted, they file 39% more patents, and that leads to 17% more product innovation. And a lot of that is in radical breakthroughs, novel chemical structures being discovered.

And the major gains are for top scientists, not the bottom ones. There's very little benefit if you're in the bottom third of scientists, but the productivity of the top ones almost doubles. And that doubling seems to be because AI automates a lot of idea-generation tasks and allows scientists to focus their energy on idea evaluation, where the great scientists are really good at recognizing a promising idea and the bad ones are vulnerable to false positives. So that's all good news, right?

Incredible unlocking of scientific creativity. But it comes with a cost, which is, in the study, 82% of scientists are less satisfied with their work. They feel they get to do less creative work and their skills are underutilized. And it seems like humans in that case are being reduced to judges as opposed to creators or inventors.

I would love to know how you think about that evidence and what do we do about that? I have two conflicting thoughts here. One of the most gratifying things ever to happen at OpenAI, for me personally, is as we've released these new reasoning models, we give them to great legendary scientists, mathematicians, coders, whatever, and ask what they think. And hearing their stories about how this is transforming their work and they can work in new ways,

I have certainly gotten the greatest professional joy from having to really creatively reason through a problem and figure out an answer that no one's figured out before. And when I think about AI taking that over, if it happens that way, I do feel some sadness. What I expect to happen in reality is just there's going to be a new way we work on the hard problems.

And it's being an active participant in solving the hardest problems that brings the joy. And if we do that with new tools that augment us in a different way, I kind of think we'll adapt, but I'm uncertain. What does that look like in your job right now? How do you use ChatGPT, for example, in solving problems that you face at work? Honestly, I use it in the boring ways. I use it for like, you know, help me process all of this email or help me summarize this document or just the very boring things. Yeah.

It sounds like then you're hopeful that we'll adapt in ways that allow us to still participate in the creative process. I am hopeful. That's so deeply the human spirit and the way I think this all continues kind of no matter what. But it will have to evolve and it will be somewhat different. Another domain where I expected humans to have an edge much longer than we've stuck it out so far is empathy.

My favorite experiments that I've read so far basically show that if you're having a text conversation and you don't know whether it's with a human or ChatGPT, and afterward you're asked, how seen did you feel? How heard did you feel? How much empathy and support did you get? You feel that you got more empathy and support from the AI than you did from a human, unless we tell you it was AI, and then you don't like it anymore. Right. I look at that evidence as a psychologist and I have a couple of reactions. One is...

I think it's not that AI is that good at empathy. It's that our default as humans is pretty poor. We slip into conversational narcissism way too quickly, where somebody tells us a problem and we start to relate it to our own problem as opposed to showing up for them. So I think maybe that's just an indictment of human empathy having a poor baseline. But also, I wonder how long...

this "I don't want it if I know it's from an AI" reaction is going to last as we start to humanize and anthropomorphize this tech more and more. Let me first talk about the general concept of people sometimes preferring the actual output of something if it's AI, until they're told that it's AI, and then they don't like it. You see that over and over again. I saw a recent study that even among people who claimed that they really hated AI art, as much as the scale allowed,

they still selected AI output more often than human output for the pieces of art they liked the most, until they were told which was AI and which wasn't. And then, of course, it was different. We could pick many other examples, but this trend that AI has in many ways caught up to us, and yet we are hardwired to care

about humans and not AI, I think is a very good sign. We're all in speculation here, so I'll say I have very high uncertainty on all of this. But although you will probably talk more to an AI than

you do today, you will still really care about when you're talking to a human. That this is something very deep in our biology and our evolutionary history and our social functioning, whatever you want to call it. Why do you think we will still want human connection? It sounds like a version of the Robert Nozick argument that led to The Matrix, of

people preferring real experience over sort of simulated pleasure. Do you think that's what we're craving? We just want the real human connection even if it's flawed and messy? Which, of course, AI is going to learn to simulate too. I think...

you'll find very quickly that talking to a flawless, perfectly empathetic thing all of the time, you miss the drama or the tension or there'll be something there. I think we're just so wired to care about what other people think, feel, how they view us. And I don't think that translates to an AI. I think you can have a conversation with an AI that is helpful and that you feel validated. And it's a good kind of entertainment in a way that playing a video game is a good kind of entertainment. But I don't think it fulfills the sort of

social need to be part of a group in a society in a way that is going to register with us. Now, I might be wrong about this. And maybe AI can so perfectly hack our psychology that it does. And I'll be really sad if that's the case. Yeah, me too. You're right. It's hard for AIs to substitute for belonging. It's also hard to get status from a bot.

To feel important or cool or respected in ways that we rely on other human eyeballs and ears for. That was kind of what I was trying to get at. I can imagine a world soon where AIs are just like unbelievably more capable than us and doing these amazing things. And

When I imagine that world and I imagine the people in it, I imagine those people still caring about the other people quite a lot. Still thinking about relative status and sort of these silly games relative to other people quite a lot. But I don't think many people are going to be measuring themselves against...

you know, what the AI is doing and capable of. So one of the things that I've been really curious about is in a world where information is increasingly contested and facts are harder and harder to persuade people of. We see this, for example, in the data on conspiracy theory beliefs. People believe in conspiracies because it makes them feel special and important. And like they have access to knowledge that other people don't. It's not the only reason, of course, but it's one of the driving reasons.

And what that means is it's really hard for another human to talk them out of those beliefs, because they're kind of admitting that they're wrong. And I was fascinated by a recent paper. This is Costello, Pennycook, and Rand. They showed that if you have a single conversation with an AI chatbot,

it can, even months later, basically get people to let go of a bunch of their conspiracy theories. It starts by essentially just targeting a false claim that you believe in and debunking it. And I think it works in part because it's responsive to the specific reasons that you have attached to your belief, and in part because nobody cares about looking like an idiot in front of a machine like they do in front of a human.

And not only do people, I think about 20% of people, let go of their absurd conspiracy beliefs, but they also let go of some other beliefs that the AI didn't even target. And so I think that that door opening is very exciting. Obviously, this can be used for evil as well as good, but

I'm really curious to hear about what your take is on this newfound opportunity we have to correct people's misconceptions with these tools. Yeah, there are people in the world that can do this, that can kind of expand our mind in some way or other. It's very powerful. There's just not very many of them and it's a rare privilege to get to talk to them.

If we can make an AI that is like the world's best dinner party guest, super interesting, knows about everything, incredibly interested in you and takes the time to like understand everything

where they could push your thinking in a new direction, that seems like a good thing to me. And I've also had this experience with AI where I had the experience of talking to a real expert in an important area and that changing how I think about the world, which for sure, there is some human that could have done that, but I didn't happen to be with him or her.

right then. It also obviously raises a lot of questions about the hallucination problem and accuracy. As an outsider, it's really hard for me to understand why this is such a hard problem. Can you explain this to me in a way that will make sense to somebody who's not a computer scientist? Yeah, I think a lot of people are still stuck back in the GPT-3 days, ancient history back in 2021, when none of this stuff really worked. It did hallucinate a lot. If you use the current ChatGPT, it still hallucinates some, for sure. But I think, surprisingly enough,

it's generally pretty robust. We train these models to make predictions based off of all the words they've seen before. There's a bunch of wrong information in the training set. There are also sometimes cases where the model fails to generalize like it should, and

teaching the model when it should confidently express that it doesn't know versus, you know, make its best guess is still an area of research, but it's getting a lot better. And with our new reasoning models, there's a big step forward there too. I've prompted ChatGPT in various iterations, like, is this true? Can you please make sure this is an accurate answer? That should be built in as, you know, a required step in the iteration.

So is that where we're heading then? That that just becomes an automatic part of the process? I think that will become part of the process. I think there will be a lot of other things that make it better too, but that will be part of the process. There's some brand new research, and there have been a bunch of these kinds of studies over the last year or two, but the one that sort of blew my mind this past week was that when you compare AI alone to doctors alone, of course, AI wins. But AI also beats doctor-plus-AI teams.

And my read of that evidence is that doctors aren't benefiting from AI assistance because they override the AI when they disagree. You see versions of this throughout history. Like when AI started playing chess, there was a time where humans were better. Then there was a time when the AIs were better. And then for some period of time, I forget how long, the AI plus humans working together were better than AIs alone because they could sort of bring the different perspectives.

And then there came a time where the AI was again better than an AI plus a human, because the human was overriding and making mistakes where they just didn't see something. If you view your role as trying to override the AI in all cases, then it turns out not to work. On the other hand, the second thing, I think we're just early in figuring out how humans and AI should work together. The AI is going to be a better diagnostician than the human doctor. And

that's probably not what you want to fight. But there will be a lot of other things that the human does much better, or at least that the people, the patients, want a person to be doing. And I think that'll be really important. I've been thinking about this a lot. I'm expecting a kid soon. My kid is never going to grow up being smarter than AI. For kids that are about to be born, the only world they will know is a world with AI in it. And that'll be natural. And

of course it's smarter than us. Of course it can do things we can't, but also who really cares? I think it's only weird for us in this one transition time. In some ways, that's a force for humility.

which I think is a good thing. On the other hand, we don't know how to work with these tools yet. And maybe some people are getting a little too dependent on them too quickly. You know, I can't spell complicated words anymore because I just trust that autocorrect will save me. I feel fine about that. It's easy to have moral panics about these things. Even if people are more dependent on their AI to, like, help them express thoughts, maybe that is just the way of the future. I've seen students who don't want to write a paper without having ChatGPT handy because

they've gotten rusty on the task of rough drafting, and they're used to outsourcing a lot of that and then having raw material to work with, as opposed to having to generate something in front of a blank page or a blinking cursor.

And I do think there is a little bit of that dependency that's building. Do you have thoughts on how we prevent that? Or is that just the future and we ought to get used to it? I'm not sure that is something we should prevent. For me, writing is outsourced thinking and very important. But as long as people replace it with a better way to do their thinking, that seems directionally fine. One of the sad things about getting more well-known is if I don't phrase everything perfectly, for very little benefit to me or to OpenAI, I just, like, open up

a ton of attacks or whatever. And that is a bummer.

I do think that is a privilege you lose, the ability to just riff and play with ideas publicly and be partially wrong or have incomplete thoughts. Mostly wrong. Mostly wrong with some gems in there. I mean, that being said, some of us are grateful that you're a little more circumspect than some of your peers who don't exercise any self-reflection or self-control. Well, that's a different thing. There's also something about just being a thoughtful, selfless,

somewhat careful person, which yes, I think more people should do. The thing I think is really silly, a reasonably common workflow, is that someone will write the bullet points of what they want to say to somebody else, have ChatGPT write it into a nice multi-paragraph email, and send it over to somebody else. That person will then put that email in ChatGPT and say, tell me what the three key bullet points are.

And so I think there is some vestigial formality of writing and communication or whatever that probably doesn't still have a lot of value. And I'm fine to get to a world where the social norms evolve that everybody can just send each other the bullet points. I really want a watermark or at least some internal memory where ChatGPT can say back, hey, this was already generated by me. And you should go back and tell this person you wanted bullet points so that you all can communicate more clearly in the future. In part...

what's going on is a lot of people are slow to adapt to the tools. We are seeing some really interesting human ingenuity. So the evidence that jumps to mind for me is a study by Sharon Parker and her colleagues. This is in the realm of robotic technology. So they go into a manufacturing company that's essentially starting to replace humans with robots.

Instead of getting panicked that people are no longer going to have jobs, a bunch of employees say, well, we need to find a unique contribution. We need to have meaning at work. And they get that by outsmarting the robots. They study the robots. They figure out what they suck at. And then they're like, okay, we are going to make that our core competence. I think the scary thing with o1 and the advances in reasoning is that a lot of the skills that we thought would differentiate us last year are now already obsolete.

Like the prompting tricks that a lot of people were using in 2023 are no longer relevant and some of them are never going to be necessary again. So what are humans going to be for in 50 or 100 or 1,000 years? No one knows. But I think the more interesting answer is...

What is a human useful for today? And I would say being useful to other people. And I think that'll keep being the case. Someone said something to me, this was Paul Buchheit many, many years ago, that really stuck with me. He had been thinking and thinking and thinking. This was before OpenAI started.

He thought that someday there was just going to be human money and machine money, and they were going to be completely separate currencies, and one wouldn't care about the other. I don't expect that to be literally what happens, but I think it's a very deep insight. Fascinating. I've never thought about

machines having their own currency. You will be thrilled that the AI has invented all of this science for you and cured disease and, you know, made fusion work and just impossible triumphs we can't imagine. But will you care about what an AI does versus what some friend of yours does or some person running some company does? I don't know. Probably not that much.

No. Like maybe some people do. Maybe there's like some really weird cults around particular AIs. And I will bet we'll be surprised by the degree to which we're still very people-focused.


Do you feel overwhelmed when it comes to makeup? Charlotte Tilbury has bottled 30 years of artistry into easy-to-choose, easy-to-use beauty products. You can't go wrong. They are flattering for everyone, everywhere. That's the reason why she is the Queen of Glow.

Take her iconic Hollywood Flawless Filter. With one product, you can blur, smooth, and illuminate the look of skin. Plus, it's skincare-infused for clinically proven hydration for up to 24 hours. It's like nothing else. Charlotte's products are magic confidence bottled. You can use code CTPODCAST15 for 15% off on charlottetilbury.com. Plus, new account holders get free delivery.

This ad is brought to you by Charlotte Tilbury, USA customers only and valid until March 2nd, 2025. For full terms, conditions, and exclusions, see the Charlotte Tilbury website. Hey, I'm Ryan Reynolds. Recently, I asked Mint Mobile's legal team if big wireless companies are allowed to raise prices due to inflation. They said yes. And then when I asked if raising prices technically violates those onerous two-year contracts, they said, what the f*** are you talking about, you insane Hollywood a**hole?

So to recap, we're cutting the price of Mint Unlimited from $30 a month to just $15 a month. Give it a try at mintmobile.com slash switch. $45 upfront payment equivalent to $15 per month. New customers on first three-month plan only. Taxes and fees extra. Speeds lower above 40 gigabytes. See details.

Okay, I think it might be time for a lightning round. This is me in, like, GPT-4 mode instead of o1 mode, where I just have to one-shot it, you know, come up with the next token as quickly as I can. First question is, what's something you've rethought recently on AI or changed your mind about? I think a fast takeoff is more possible than I thought a couple of years ago. How fast?

feels hard to reason about, but something that's in like a small number of years rather than a decade. Wow. What do you think is the worst advice people are given on adapting to AI? AI is hitting a wall, which I think is the laziest fucking way to try to not think about it and just, you know, put it out of sight, out of mind. What's your favorite advice on how to adjust? Or what advice would you give on how to adapt and succeed in an AI world?

This is so dumb, but the obvious thing is, like, just use the tools. One thing that OpenAI does that I think is really cool: we put out the most powerful model that we know of that exists in the world today, o1. And anybody can use it if you pay us 20 bucks a month. If you don't want to pay us 20 bucks a month, you can still use a very good thing. It's out there at the leading edge; the most capable person in the world and you can access the exact same frontier. And I think that's awesome. And so go use it and figure out what you like about it, what you don't, what you think is going to happen with it.

What's your hottest hot take or unpopular opinion on AI? That it's not going to be as big of a deal as people think, at least in the short term. Long term, everything changes. I kind of genuinely believe that we can launch the first AGI and no one cares that much. People in tech care and philosophers care. Those are the two groups I've heard react consistently. And even then, they care, but like,

20 minutes later, they're thinking about what they're going to have for dinner that night. What's the question you have for me as an organizational psychologist? Oh, what advice do you have for OpenAI about how we manage our collective psychology as we kind of go through this crazy superintelligence takeoff? Like, how do we keep the people here sane, for lack of a better word? We're not really in the, like, superintelligence part of the takeoff. But I imagine as we go through that, it'll just feel like this unbelievably high-stakes, immensely stressful thing. I mean, even now, as we're in sort of the AGI ramp, it feels a little bit like that.

I think we need much more organizational resilience for what's to come. And when you think about organizational resilience, what does that look like? Does that mean people are not as stressed as they're likely to become? Does that mean they're able to roll more quickly with change than they might naturally? Good decisions in the face of incredibly high-stakes uncertainty, and also adaptability as

the facts on the ground, and thus the actions that we need to consider or take, change at a very rapid rate. I think for me, the place to start on that is to draw a two-by-two and ask everybody at OpenAI to think about how consequential each choice they make is, how high those stakes are, and then how reversible each choice is. Are they walking through a revolving door or is it going to lock behind them? And I think where you really have to slow down

and do all of your thinking and rethinking upfront is the highly consequential irreversible decisions because they really matter and you can't undo them tomorrow. I think the other three quadrants, it's fine to act quickly, experiment, pilot, stay open to doubting what you think, but that quadrant is where it's really important to get it right. And that's where I want people to put their best thinking and probably their best prompting.

Makes sense. So I want to ask you about something you wrote. You did a blog about how to be successful. So long ago. It was a long time ago. I don't have that loaded in memory anymore. That's okay. I have it right here. There was one section of it that I thought was particularly fascinating on self-belief. So I'll quote you to you here. You wrote, self-belief is immensely powerful. The most successful people I know believe in themselves almost to the point of delusion.

cultivate this early. As you get more data points that your judgment is good and you can consistently deliver results, trust yourself more. Do you still agree with that? I think so. It's hard to overstate. When we were starting OpenAI, we believed this thing. That was right about the time of maximum skepticism about OpenAI on the outside, relative to what we believed inside. And I think my most important contribution to the company in that phase was that I just kept reminding people like, look,

the external world hates anything new, hates anything that like might go in a different direction than established belief. And so people are saying all of these

crazy negative things about us. And yet we have this incredible progress. And I know it's early and I know we have to suspend disbelief to believe it'll keep scaling, but it's been scaling. So let's push it ridiculously far. And now it seems so obvious. But at the time, I truly believe that had we not done that, it might not have happened for a long time because we were the only people that had enough self-belief

to go do what seemed ludicrous, which was to spend a billion dollars scaling up a GPT model. So I think that was important. I think it's all true. I think it's also scary because those same people are the ones when they believe in themselves to the point of delusion or almost delusion who make terrible decisions outside of their domains of expertise. And I think that's

I think if I were going to modify what you wrote, I would say, as you get more data points that your judgment is good in a given domain. Yes, yes. Then you should trust yourself in that domain more. That would have been a much better way to phrase it. I don't think it's true that experience and ability don't generalize at all, but many people try to generalize them too much. I should have said something about, like, in your area of expertise. But there's nuance, because I also think you should be willing to, like, do new things. You know, I was an investor and not an AI lab

executive, you know, six or seven years ago. It also really matters whether you're in a stable or dynamic environment, because you can trust your judgment that's based on intuition in a stable environment because you have subconsciously internalized patterns of the past that are still going to hold in the future. Whereas if you're in a more volatile setting, oftentimes your gut feeling is essentially trained on data that don't apply, right?

In that world, I think you want to get even more towards the, like, really core underlying principles that you believe in and that work for you, because, yeah, they're even more valuable. The last topic that I wanted to talk with you about is ethics. I know this is also something you've been thinking a lot about, talking a lot about. This is the domain in which most people are most uncomfortable outsourcing any kind of judgment to AI. Yeah.

Me too. Good. And I think this is where we have to rely on humans at the end of the day. I'm hearing a lot of nuclear deterrence kinds of metaphors of, okay, what we need is we need to race ahead of bad actors and then we'll have mutually assured destruction. Like, wait a minute, the arms race metaphor doesn't work here because a lot of the bad actors are not state actors and they don't face the same risk or consequences. And then also, like,

now we're going to trust a private company as opposed to elected officials. This feels very complicated and like it doesn't map. So talk to me about that and how you're thinking about the ethics and safety problems. First of all, I think humans have got to set the rules; AI can follow them, and we should hold AIs to following whatever we collectively decide the rules are. But humans have got to set those. Second, I think people seem incapable of not thinking in historical analogy. And I understand that, and I don't think it's all bad.

But I think it's kind of bad because the historical examples just are not like the future examples. So what I would encourage is...

people to ground the discussion as much as they can in what makes AI different than anything before, based off what we know right now, not kind of wild speculation, and then trying to design a system that works for that. One thing that I really believe is deploying AI as a tool that significantly increases individual ability, individual will, whatever you want to call it, is

a very good strategy for our current situation and better than one company or adversary or person or whatever kind of using all the AI power in the world today. But I will also cheerfully admit that I don't know what happens as the AIs become more agentic in the big way, not like we can go give them a task where they program for three hours, but where we can have them go off and do something very complicated that would normally require like a whole organization over many years.

And I suspect we'll have to figure out new models. Again, I don't think history will serve us that well. No, it frankly hasn't in the software world. I think that any other technology this powerful is regulated in the US. And

I think it seems like the EU might be a little bit more competent than Congress. I think what the EU is doing with AI regulation is not helpful for another reason. Like, for example, when we finish a new model, we can launch it, even if it's not that powerful, in the US well before we can launch it in the EU, because there's a bunch of regulatory process. And if that means that the EU is always some number of months behind the frontier, I think they're just going to build less fluency and expertise,

economic engine and understanding, and kind of whatever else you want to put in that direction. It's really tricky to get the regulatory balance right. And also we clearly, in my opinion, will need some. What worries you the most when you look ahead in the next decade or so? I think just the rate of change. I really believe in the sort of human spirit of solving every problem, but we got a lot to solve pretty quickly here. One of the other things that I've been grappling with when I think about ethics and future impact is that

I thought so many digital technologies were going to be democratizing, and we thought they were going to sort of prevent or at least chip away at inequality. And very often it's been the opposite, that the rich have gotten richer because they have had better access to these tools. Now, you pointed out that o1 is pretty cheap by American standards, right?

There's still, I think, an access discrepancy. What is it going to take to change that? What does it look like for AI to be a force for good in the developing world? We've been able to drive the price per unit of intelligence down by roughly a factor of 10 every year. Can't say that for that much longer. No. But we've been doing it for a while. And...

I think it's amazing how cheap intelligence has gotten. I guess in some ways, though, that works against the problem of, well, at least right now, the only players that can afford to make really powerful models are governments and huge companies that are accountable.

For now, to train it, yes. Yeah. But to use it is very different. So as you sit back and look at the last, I mean, three years, it's got to be like you've gone through a lifetime of change. It's been weird. Why are you doing this, I guess, is one way to put it. I am a techno-optimist and science nerd. And I think it is the coolest thing I could possibly imagine, and the best possible way I could imagine spending my work time, to

get to be part of what I believe is the most interesting, coolest, important scientific revolution of our lifetimes. So like, what a fucking privilege. Unbelievable. And then on the kind of like non-selfish reason, I feel a sense of duty to scientific progress as a way that society progresses and evolves.

Sounds like responsibility. Sure.

With a child on the way as a soon-to-be father, what kind of world are you hoping to see for the next generation? Abundance was the first word that came to mind. Prosperity was the second. But generally just a world where people can do more, be more fulfilled, live a better life. However, we

define that for each of ourselves. All those things. Probably the same thing every other soon-to-be dad has ever wanted for his kid. I've certainly never been so excited for anything. And I think it's also like, no one should have a kid that doesn't want to have a kid, so I don't want to use the word duty here. But society is dependent on some people having some kids. At least for now. Yeah.

At least for now. I don't think I've heard you express as strongly as you did today how much you're also a believer in humans, not just in technology. And I think in some ways that's a risky place to operate. Like we've seen that with social media.

But I think it's also, like, table stakes when it comes to building technology: you have to care about and believe in people. I skew optimistic, even though I try to just be accurate. But if I'm too optimistic about technology, like, whatever. If you're too optimistic about humans, that could be a danger for us. If we put these tools out thinking, yeah, people will use them for way more good than bad, and we're just somehow really wrong about human nature, that would be a flaw with our strategy. But I don't believe that.

Well, fingers are crossed. Sam, thank you for taking the time to do this. I learned a lot and thoroughly enjoyed it. Thanks for having me. This was fun. Was it? You be the judge. My biggest takeaway from this conversation with Sam is that technological advances may be unstoppable, but so is human adaptation. Machines can replace our skills, but they won't replace our value or our values. Rethinking is hosted by me, Adam Grant. The show is part of the TED Audio Collective, and this episode was produced and mixed by Cosmic Standard.

Our producers are Hannah Kingsley-Ma and Aja Simpson. Our editor is Alejandra Salazar. Our fact checker is Paul Dirtman. Original music by Hansdale Hsu and Allison Leyton-Brown. Our team includes Eliza Smith, Jacob Winik, Samaya Adams, Roxanne Hai Lash, BanBan Cheng, Julia Dickerson, and Whitney Pennington Rodgers.

I get a surprising number of emails, like cold emails or something, where someone will say, I confess, I wrote this with the help of ChatGPT. And if I reply, I try to always say, no need to ever do that again. If you ever email me again, I'll take the bullet points. So that's my one little contribution to the fight. Wow. And then there's a little disclaimer at the bottom saying, this response was also written by ChatGPT. If I do that, I do disclose, but I don't. I usually just write my two bullet points back.


Every idea starts with a problem. Warby Parker's was simple. Glasses are too expensive. So they set out to change that. By designing glasses in-house and selling directly to customers, they're able to offer prescription eyewear that's expertly crafted and unexpectedly affordable. Warby Parker glasses are made from premium materials like impact-resistant polycarbonate and custom acetate. And they start at just $95, including prescription lenses. Get glasses made from the good stuff.

Stop by a Warby Parker store near you.