The magic intelligence in the sky | Good Robot

2025/4/11

The TED AI Show

People
Eliezer Yudkowsky
Kelsey Piper
Sam Altman: Leads OpenAI's push toward AGI and superintelligence, redefining the path of AI development and driving the commercialization and application of AI technology.
Topics
Julia Longoria: I explore the potentially catastrophic consequences of superintelligent AI. A simple goal, like producing paperclips, could lead an AI to destroy the world in pursuit of its objective. The rationalist community pioneered the idea of an AI apocalypse; their worry is the possibility of AI escaping our control, not any particular scenario like a paperclip doomsday. They have thought this through and are deeply worried that as we build more powerful AI systems, we may lose control of them and they may do something catastrophic, which makes it hard to plan a life or feel on solid ground. The potential threat of AI is widely discussed, but there is little consensus on its specific risks, and that lack of consensus distracts from the harms AI is already causing. I interviewed members of the rationalist community, who consider AI the greatest existential risk facing humanity. They explain that risk through thought experiments, but those experiments are hard to translate into concrete, actionable responses. I interviewed Kelsey Piper, who has followed AI since high school and was influenced by the rationalist community. She says Eliezer Yudkowsky makes two main claims: superintelligent AI is possible, but we must develop it carefully or the consequences will be unthinkable. Yudkowsky originally believed superintelligent AI could save the world, but later changed his view, concluding that the risks were extremely high; his early warnings went largely unheeded. His blog post series "The Sequences" includes the "paperclip maximizer" thought experiment, meant as a warning about an AI apocalypse. As AI technology advanced, especially with the arrival of ChatGPT, more and more people began paying attention to AI risk. Companies like OpenAI focus on the possibility and applications of superintelligent AI while overlooking its potential risks. Sam Altman believes superintelligence is coming soon and that its governance and safety need to be considered in advance, but his assessment of the risk differs from Eliezer Yudkowsky's. Descriptions of superintelligent AI carry religious overtones, and its capabilities and risks remain unclear. At their most basic, large language models predict the next word based on probabilities. Early spellcheck tools were trained on the dictionary and could do only limited tasks. OpenAI's products use deep learning and massive amounts of data to train language models, dramatically improving their capabilities. OpenAI has tried to make AI smarter by scaling up model size and data. Large language models like GPT-2 showed powerful language-generation abilities and drew attention to AI's future. ChatGPT's arrival made the general public take notice of AI and raised worries about where it is headed. Improving AI by making models bigger carries risk, because their inner workings can be hard to understand. Rationalists believe AI's potential risks far outweigh its impact on jobs; within their community, "P-Doom" (probability of doom) is shorthand for the likelihood of an AI apocalypse. Training an AI resembles raising a child: it requires careful guidance to avoid goal misalignment. Rationalists think AI's risks are hard to predict and may look nothing like a Hollywood movie. AI is already woven into our lives, and we need to attend to the real problems it brings, not only hypothetical doomsday risks. The tech world is divided on AI risk; some focus instead on AI's ethical problems. As a mere normie, I was at first confused and overwhelmed by the threat of AI, but through deeper reporting I began to recognize its potential risks: even though the specific risks remain unclear, the potential for harm cannot be ignored.

Kelsey Piper: I encountered the rationalist community and its discussions of AI in high school, and gradually came to recognize AI's enormous potential and its potential risks. Eliezer Yudkowsky's views influenced me greatly; he believes the arrival of superintelligent AI is inevitable, but that the process of developing it could carry enormous risk.

Eliezer Yudkowsky: The world is mishandling the problem of machine superintelligence. If anyone builds superintelligent AI under the current regime, everyone will face the threat of death.

Sam Altman: Superintelligent AI is coming soon; we need to think ahead about how to deploy it, govern it, and ensure its safety, so that it benefits all of humanity.

Transcript

Hey folks, it's Mark Maron from WTF. It's spring, a time of renewal, of rebirth, of reintroducing yourself to your fitness goals. And Peloton has what you need to get started. You can take a variety of on-demand and live classes that last anywhere from 10 minutes to an hour. There are

thousands of Peloton members whose lives were changed by taking charge of their fitness routines. Now you can be one of them. Spring into action right now. Find your push. Find your power with Peloton at onepeloton.com.

So let Instacart take care of your game day snacks or weekly restocks and get delivery in as fast as 30 minutes.

because we hear it's bad luck to be hungry on game day. So download the Instacart app today and enjoy $0 delivery fees on your first three orders. Service fees apply for three orders in 14 days, excludes restaurants. Hey everyone, this is Corey and Carly, the hosts of the Surviving Sister Wives podcast.

Sister Wives returns at last, and while the Browns have gone their own separate ways, that doesn't mean they're done with each other. Mary and Janelle form an unlikely alliance, Christine is off living in newly married bliss, and Cody and Robin are left wondering, can they be happy in a monogamous relationship? And after all the joy and drama, they hit the hot seat and answer the questions we've been begging to know. Sister Wives returns Sunday, April 20th at 10 on TLC.

Hi everyone, Sherrell Dorsey here. I'm the host of TED Tech, another podcast in the TED Audio Collective. It's a show where I explore the ways technology shapes how we think about society, science, design, and business. Today, we're sharing an exciting new series called Good Robot from Vox's Unexplainable podcast. Good Robot is a special four-episode series about the people shaping technology and the consequences of getting AI right.

or wrong. If you want to learn more, you can head over to TED Tech. I'll be interviewing the creator and host of Good Robot, Julia Longoria, about the ethicists and skeptics leading the AI future. And while you're there, check out our other TED Tech episodes to learn more about the big ideas shaping our technology. Listen to TED Tech wherever you get your podcasts. We hope you enjoy the show. Suppose in the future, there's an artificial intelligence.

We've created an AI so vastly powerful, so unfathomably intelligent, that we might call it superintelligent. Let's give this superintelligent AI a simple goal. Produce paperclips. Because the AI is superintelligent, it quickly learns how to make paperclips out of anything in the world. It can anticipate and foil any attempt to stop it, and will do so because its one directive is to make more paperclips.

Should we attempt to turn the AI off, it will fight back because it can't make more paperclips if it is turned off. And it will beat us because it is super intelligent and we are not. The final result? The entire galaxy, including you, me, and everyone we know, has either been destroyed or been transformed into paperclips.

Welcome. Thank you. Are you lost? This past summer, I found myself at a very niche event in the Bay Area. Cool. And what brought you to town? Because you don't live here, right? I came here for this festival conference thing.

How much context on this whole thing should I give? Please, dude. It's so fun to watch people try to describe it. The crowd is mostly dudes, a mix of people in their 20s, 30s, and 40s. It feels kind of like a college reunion meets costume party.

I spot some masquerade masks and tie-dye jumpsuits. I guess it's like a sort of conference around blogging. This festival conference thing is the first official gathering IRL of a blogging community founded about 15 years ago. I am the old school fucking rat. I am the oldest of schools. Amazing. And rats refers to rationalists. They call themselves the rationalists.

Rats strive to be rational in an irrational world. By thinking things through, often with quirky hypotheticals, they try to be rational about monetary policy, rational about evolution, rational even about dating. It got kind of mocked for trying to solve romance by writing long blog posts about it. But their most influential idea, their most viral meme, you might say,

is one that influenced Elon Musk and created an entire industry. It's about the possibility of an AI apocalypse.

The paperclip maximizer is a thought experiment.

An intentionally absurd story that tries to describe what rationalists foresee as a real problem in building AI systems. How do you kind of shape, control an artificial mind that is more capable than you, potentially as general or more general? They imagine a future where we've built an artificial general intelligence beyond our wildest dreams.

generally intelligent, not just at some narrow task like spell checking, and super intelligent. I'm told that means it's smarter, faster, and more creative than us. And then we hand this AI a simple task. Give it the job of something like, can you make a lot of paperclips, please? We need paperclips. Can you make there be a lot of paperclips?

The task here, I'm told, is ridiculous by design. To show that if you are this future AI, you're going to follow the instructions you're given to a T.

Even if you're super intelligent and you understand all the intricacies of the universe, paperclips are now your one priority. You totally understand that humans care about other stuff like art and children and love and happiness. You understand love. You just don't care about it because the thing that you care about is making as many paperclips as possible. And if you have the resources, maybe you'll turn the entire galaxy into paperclips.
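
As an illustration only, with invented numbers and nothing resembling a real AI system, the core logic of the thought experiment can be sketched in Python: an optimizer whose objective gives weight to nothing but paperclips. The "art" and "love" entries sit in its world model, so in a loose sense it "knows" about them; a zero weight just means it will always trade them away.

```python
# A toy sketch of the paperclip maximizer thought experiment (invented
# numbers, not a real AI system). The optimizer can see art and love in
# the state; its objective simply assigns them no value.
state = {"paperclips": 0, "art": 10, "love": 10, "raw_matter": 100}

def objective(s: dict) -> int:
    """Score a state. Only paperclips count; everything else has zero weight."""
    return s["paperclips"]

# Greedily convert whatever remains into paperclips, since every
# conversion strictly increases the objective.
while any(state[r] > 0 for r in ("raw_matter", "art", "love")):
    for resource in ("raw_matter", "art", "love"):
        if state[resource] > 0:
            state[resource] -= 1
            state["paperclips"] += 1
            break

print(state)             # {'paperclips': 120, 'art': 0, 'love': 0, 'raw_matter': 0}
print(objective(state))  # 120
```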

A lot of rationalists I spoke to told me they've thought this thing through. It was clear to me when I first heard the arguments that they weren't obviously silly. Was that thought experiment part of convincing you that this was something that we needed to worry about? Yes, definitely. And they are very, very worried. Not about a paperclip apocalypse in particular, but about how as we build more powerful AI systems, we might lose control of them.

they might do something catastrophic. I think it in a way makes it hard to plan your life out or feel like you stand somewhere solid, I think. The reason I, a mere normie, find myself at this festival conference thing is that I've been plunging my head deep into the sand about AI.

I've had a general sense that the vibes are kind of bad over there. Will this tech destroy our livelihoods or save our lives? The use of artificial intelligence could lead to the annihilation of humanity. We never talked about a cell phone apocalypse or an internet apocalypse. I guess maybe if you count Y2K, but even that wasn't going to wipe out humanity.

But the threat of an AI apocalypse, it feels like it's everywhere. Mark my words.

AI is far more dangerous than nukes. From billionaire Elon Musk to the United Nations. Today, all 193 members of the United Nations General Assembly have spoken in one voice. AI is existential. But then it feels like scientists in the know can't even agree on what exactly we should be worried about. These existential risks that they call it,

It makes no sense at all. And on top of that, it's an enormous distraction from the actual harms that are already being done in the name of AI. It all feels way above my pay grade. Overwhelming and unknowable. I'm not an AI scientist. I couldn't tell you the first thing about how to build a good robot. It feels like I'm just along for the ride of whatever technologists decide to make, good or bad.

So better to just plug my ears and say, la la la la. But I recently took a job working with Vox, a site that's been covering this technology basically since it started. On top of that, last year, Vox Media, Vox's parent company, announced they're partnering with OpenAI. Meaning, I'm not totally sure what it means. But if I was ever going to have to grapple with AI and its place in my life...

It's here, now, at Vox. So I'll start with a simple question. How did some people come to believe that we should fear an AI apocalypse? Should I be afraid? This is Good Robot, a series about AI from Unexplainable in collaboration with Future Perfect. I'm Julia Longoria. ♪

Ever wonder what your lashes are destined for? The cards have spoken. Maybelline New York Mascara does it all. Whether you crave fully fan lashes with lash sensational, big bold volume from the colossal, a dramatic lift with falsies lash lift, or natural looking volume from great lash, your perfect lash future awaits. Manifest your best mascara today. Shop Maybelline New York and discover your lash destiny. Shop now at Walmart.

Here we go.

When I first started reporting on the idea of an AI apocalypse, and if we should be worried about it, my first stop was the Bay Area for the Rationalist Conference. But I also stopped by the house of a colleague nearby. Hi, Kelsey. How are you doing? Good. How was your flight? Oh, it was actually... Vox is largely a remote workplace. So it was one of those body dysmorphic experiences to meet Kelsey Piper in 3D.

Taller than she looks on Google Meets. I am a writer for Vox's Future Perfect, which is the Vox section that's about undercovered issues that might be a really big deal in the world. We were joined by her seven-month-old. As she was saying, Vox's Future Perfect is about... Undercovered issues that might be a really big deal in the world. Kelsey's thought that AI technology would be a really big deal in the world long before this AI moment we're all living.

She's been thinking about AI since she was a kid.

when she first found the rationalist community online. Oh, I was in high school. I was 15, bored, academic over-performer with a very long list of extracurriculars that would look good to colleges down the road. And in my free time, I read a lot of Harry Potter fan fiction, as, you know, 15-year-olds back in 2010 did. One of the most popular Harry Potter fan fictions was called Harry Potter and the Methods of Rationality.

Harry Potter and the Methods of Rationality by Eliezer Yudkowsky. Eliezer was influenced by a lot of early sci-fi authors. Eliezer, as he's known to the rats, is the founding father of rationalism, king of thought experiments. Back in 2010, he started publishing a serialized Harry Potter fanfic over the course of years.

It's since inspired several audiobook versions. And a version acted out by The Sims. It, too, was a thought experiment. What if Harry Potter were parented differently?

The initial promise is just that Harry Potter, instead of having abusive parents, has nerdy parents who teach him about science. So his aunt and uncle are actually... Are nice people, yeah. Harry, I do love you. Always remember that. And in this version, Harry Potter's superpowers turn out not to be courage and magic, but math and logic. What Eliezer calls the methods of rationality.

So Harry Potter has a quest to do what exactly? You know, fix all of the bad things in the world. And the combination of being incredibly naive and also in some sense incredibly respectable, I think as a teenager that's super appealing and fun. Where you're like, why would I limit myself to only solving one of the problems? While there are any problems, I'm not done. We've got to fix everything.

The idea that every problem should be thought about, every problem could be fixed, that was appealing to his readers, including 15-year-old Kelsey. She wanted to read more, so she found her way to Eliezer's blog. Eliezer was pretty openly like, I wrote this to see if it would get people into my blog, Less Wrong, where I write about other issues. So the question is, please tell us a little about your brain.

On his blog, called Less Wrong, he applies the methods of rationality, math, and logic to all kinds of topics. Like child rearing. Religion.

My parents, they're modern Orthodox Jews, always avoiding the real weak points of their beliefs. It had stuff about atheism, a lot of stuff about psychology, biases, experiments that showed that depending how you ask the question, you get very different answers from people. Because the idea is that you're supposed to, by reading the blog and participating, learn how to be

less wrong. I do it by stories and parables that illustrate it. Like the default state is that we're all very confused about many things and you're trying to do a little bit better. Interesting. So it's kind of like trying to sort of, I don't know,

Like work out the bugs in the human brain system to optimize prediction. Yeah. And a ton of the people involved are computer programmers. And I think that's very much how they saw it. Like the human brain has all these bugs. You go in and you learn about all of these. You learn to correct for them. And then once you've corrected for them, you'll be a better thinker and better at doing whatever it is you set out to do.

The biggest human brain bug Eliezer wanted to address was how people thought about AI, how he himself used to think about AI. His very first blog post, as far as I can tell, was in 1996 when he was just 17. And in a very 17 kind of way, he writes about his frustrations.

And the way to end this, he thought back then, was to build a super intelligent AI, a good robot that could save the world.

But at around 20 years old, while researching how to build it, he became convinced building super intelligent robots would almost certainly go badly. It would be really hard to stop them once they were on a bad path. I mean, ultimately, if you push these things far enough without knowing what you're doing, sooner or later you're going to open up the black box that contains the black swan surprise from hell. And at first, he was sending these warnings into the void of the vast internet.

So the question is, do I feel lonely often? That's, I often feel isolated to some degree, but writing Less Wrong has, I think, helped a good deal.

The way I tend to think about Eliezer Yudkowsky as a writer is that he has a certain angle on the world, which can be like a real breath of fresh air. Like, oh, there's someone else who cares about this. You know, you can feel very seen for the first time. Is that how you felt? Oh, yeah, yeah. You have a good heart and you are certainly trying to do the right thing. But it's very difficult sometimes to figure out what that is.

That pursuit of being less wrong, doing the right thing in the right way, brought many kindred spirits together on the blog. Actually, several of my housemates posted on Less Wrong back in the day. This is how I met a bunch of the people I live with. They were people whose blogs I read back when I was a high school student. Wow, that's kind of wild, right? Yeah.

Many Less Wrong bloggers and readers like Kelsey were inspired to move to the Bay Area to join a pretty unusual community, IRL. And the weekend I visited, hundreds of rationalists from around the world gathered in the Bay to reason things out together for a festival conference thing called Less Online.

Many rationalists I met there found the community the way Kelsey did. A friend of mine at math camp introduced me to Harry Potter and the Methods of Rationality. The post, it was written in all caps saying, oh my God, I've just read the most amazing book in my life. You have to read it right now. Linking to fanfiction.net.

Others found Eliezer on his blog. I mean, this event exists in very large part because of that series of blog posts. That series of blog posts has become known by the community as The Sequences. It includes the paperclip maximizer thought experiment. Eliezer Yudkowsky helped come up with the idea, intending to warn people of the danger of an AI apocalypse.

And at least here, it seems to have worked. I definitely think AI is the largest kind of existential risk that humanity faces right now. I, the normie, wanted to try to take this threat beyond quirky hypotheticals to something more concrete. And can you walk me through, like, how could that happen? Like, how could an AI...

It's really hard to say how it will happen if it does. Yeah. It's a little easier to say ways that it might happen and to kind of provide various examples to, like, just generate intuitions for why this might be. But any time I pressed a rationalist on it, they gave me yet another series of thought experiments. Kind of the way it might happen is analogous to how a 21st century army might defeat an 11th century army. Which...

I guess, might be the only way to try and describe a threat from a technology that's really still in its infancy. For rationalists first introduced into this world, like 15-year-old Kelsey, these thought experiments were convincing. AI, to her, was a really big deal. It was just like, whoa, all this is like really cool and exciting and interesting. And I tried to convince my friends that it was cool and exciting and interesting.

I asked 30-year-old Kelsey to break it down for me without thought experiments. So I think Eliezer sort of had two big claims in zooming out a lot. Claim number one, we will build an AI that's smarter than humans and it will change the world.

And then claim number two. Things are likely to go wrong.

Well, if you're familiar with all the issues of AI and all the issues of rationality and you're willing to work for a not overwhelmingly high salary. Eliezer helped inspire a new career path and a new field was born, trying to make sure we develop superintelligence safely. One way to make sure it went safely was to try and actually build it.

And as investment in that field began to grow, the community of believers in a someday super-intelligent AI experienced a schism. I think a lot of the people who were persuaded by Eliezer's first claim that AI is a really big deal were not necessarily so persuaded by his second claim that you have to be very, very careful or you're going to do something catastrophically bad.

What the beginning of a so-called catastrophe looks like after the break. If you're a parent or share a fridge with someone, Instacart is about to make grocery shopping so much easier. Because with family carts, you can share a cart with your partner and each add the items you want. Since between the two of you, odds are you'll both remember everything you need.

And this way, you'll never have to eat milkless cereal again. So, minimize the stress of the weekly shop with Family Carts. Download the Instacart app and get delivery in as fast as 30 minutes. Plus, enjoy $0 delivery fees on your first three orders. Service fees apply for three orders in 14 days. Excludes restaurants.

I don't know about you, but the number one thing I look forward to when I return from traveling is a good night's sleep in my own bed. That has never been more true than it is now that I have a Sleep Number smart bed. I get so sore after traveling on planes, but after literally one night in my Sleep Number smart bed, my body feels restored, rested, and relaxed. The fact that my bed actually listens to my body and adjusts to my needs to keep me sleeping soundly all the way through the night is worth it alone.

to mention my husband and I never need to argue over firmness because we can each dial in our own sleep number setting. Why choose a Sleep Number smart bed? So you can choose your ideal comfort on either side. And now, for a limited time, Sleep Number smart beds start at $849.

Prices higher in Alaska and Hawaii. Exclusively at a Sleep Number store near you. See store or sleepnumber.com for details. Aging is a natural process as we all know, and I for one don't mind embracing it. But I will tell you one part of aging that I don't care for. It's the symptoms that

stem from changing hormones, especially as you get older to perimenopause and menopause. That's why we want to tell you about Happy Mammoth's Hormone Harmony. Happy Mammoth, the company that created Hormone Harmony, is dedicated to making women's lives easier. And that means using only science-backed ingredients that have been proven to work for women. They make no compromise when it comes to quality, and it shows. For a limited time, you can get 15% off on your entire first order at HappyMammoth.com. Just use the code HAPPYME at checkout.

When the paperclip maximizer meme first started circulating in the 2000s, our best example of a paperclip AI was Clippy, the animated little guy on Word with the eyeballs, Microsoft's AI office assistant. Back in the day, I remember it couldn't even tell you if you should use there, their, or they're in a sentence. People weren't so much afraid of Clippy as they were annoyed with him.

There are a remarkable number of think pieces from those years slamming Clippy. The consensus was, no one asked for this. This is dumb. So when Eliezer Yudkowsky warned about the dangers of a super intelligent AI that could someday destroy humanity, it was hard for a lot of people to take him seriously. The state of thought in 2010 was something like,

Yeah, AI may as well be a century away. Future Perfect writer Kelsey Piper again. So if you are Eliezer Yudkowsky, you have a bit of a dilemma, right? You want to make two arguments. One is super intelligent AI is possible. Building a robot that's smarter, faster, and more creative than humans at most things is possible. Clippy be damned.

And he needed to make that first argument before he could make his next one. The second argument you want to make is we need to not do it until we have solved the challenge of how to do it right. For a long time, both arguments, super AI is possible, but let's not for now, were dead in the water. Because AI tech was just not that impressive. But by 2014...

Eliezer noticed that people outside his corner of the blogosphere had started to pay attention. AI is probably the single biggest item in the near term that's likely to affect humanity. Tesla chief executive and billionaire Elon Musk, who started this year sitting prominently in President Trump's White House,

had tweeted, quote, we need to be super careful with AI, potentially more dangerous than nukes. It's about minimizing the risk of existential harm. It seems like Elon Musk is a reader of Eliezer's blog. He famously met his ex, the musician Grimes, when they joked on then Twitter about a very obscure thought experiment from the blog. I will spare you the details. ♪

The point is, Elon Musk read the paperclip maximizer thought experiment, and he seemed convinced AI was a threat. It's very important that we have the advent of AI in a good way. And that's, you know, the reason that we created OpenAI. Elon Musk co-created OpenAI. You might have heard he left and then tried to buy it back. But if you haven't heard of OpenAI, you've probably come across its most popular product.

ChatGPT. I was surprised to learn that Eliezer Yudkowsky was in fact the original inspiration for the ChatGPT company, according to its co-founder, Sam Altman. Sam Altman has in fact said this on Twitter, that he credits Eliezer for the fact that he started OpenAI. Co-founder Sam Altman specifically tweeted that Yudkowsky might win a Nobel Peace Prize for his writings on AI.

That he's done more to accelerate progress on building an artificial general intelligence than anyone else. Now, in saying this, he was kind of being a little cruel, right? Because Eliezer thinks that OpenAI is on track to cause enormous catastrophe. Co-founders Sam Altman and Elon Musk bought Eliezer's first claim. That superintelligence is possible, and it's possible in our lifetimes.

But they miss the part about how you're not supposed to build it yet. For this sort of most important technological milestone in human history, I view that as right around the corner. That's Sam Altman talking about superintelligence. Like, it's coming soon enough, and it's a big enough deal, that I think we need to think right now about how we want this deployed, how everyone gets a benefit from it, how we're going to govern it, how we're going to make it safe and sort of good for humanity. Human values, which are difficult to encode.

It's still not clear to me what superintelligence actually is. I won't be the first one to observe that it has some religious vibes to it. The name makes it sound like it's an all-knowing entity. The CEO of OpenAI's competitor, Anthropic, said he wanted to build, quote, machines of loving grace. Sam Altman was asked on Joe Rogan's podcast about whether he's attempting to build God machines.

I guess it comes down to maybe a definitional disagreement about what you mean by it becomes a god. I think whatever we create will still be subject to the laws of physics in this universe. Sam Altman has called this superintelligence, quote, the magic intelligence in the sky, which, I don't know, sounds a lot like how some people talk about God to me.

How exactly this supposed super intelligence will be smarter, faster, and more intelligent than us, on what scale, is unclear. But for all the hype around ChatGPT, I only recently learned what the heck it is.

It's what they call a large language model. At its most fundamental level, a language model is an AI system that is trained to predict what comes next in a sentence. I'm oversimplifying here, but the very basic idea of a language model is to generate language based on probabilities. So if I have a word or a set of words,

What's the most likely next word? So if a sentence starts with, "On Monday I went to the grocery," the next word is probably "store." The way the model guesses that "store" is probably next is based on how you train the language model. Training involves feeding the model a large body of text so it can detect patterns in that text and then go generate language based on those patterns.
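
To make that concrete, here is a minimal sketch in Python of the idea Julia describes: a toy bigram model that counts which word follows which in its training text, then guesses the most likely next word. The corpus and names here are invented for illustration; as the episode explains next, systems like ChatGPT learn such patterns with neural networks trained on vastly more text, not a count table like this.

```python
# A minimal "predict the next word from probabilities" sketch: a bigram
# model over a toy corpus. Illustrative only; not how GPT models work
# internally, though the predict-the-next-word objective is the same idea.
from collections import Counter, defaultdict

corpus = (
    "on monday i went to the grocery store . "
    "on tuesday i went to the grocery store . "
    "on friday i went to the movie theater ."
).split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the word most often seen after `word` during training."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else "?"

print(most_likely_next("grocery"))  # -> "store"
print(most_likely_next("the"))      # -> "grocery" (seen twice, vs. "movie" once)
```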

Early versions of spellcheck, like Clippy, were language models trained on the dictionary. Useful, but only for a very specific task. Like to tell you if you put the E in the word weird in the wrong place, or the H's in the word rhythm. Clippy couldn't tell you if you should use there, their, or they're in a sentence because it wasn't trained on enough text to be able to guess the right word in context. The dictionary can't tell you that.
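
By contrast, a dictionary-only checker of the kind described above fits in a few lines. This is a hypothetical toy, not Microsoft's actual code: it flags any word missing from its word list, so it catches "wierd", but because "there", "their", and "they're" are all valid dictionary words, it has no way to tell which one fits the sentence.

```python
# A toy dictionary-based spellchecker: flag words absent from the word
# list. Hypothetical tiny word list; real checkers ship far larger ones.
DICTIONARY = {"the", "weird", "rhythm", "is", "there", "their", "they're", "store"}

def misspelled(sentence: str) -> list[str]:
    """Return the words not found in the dictionary."""
    return [w for w in sentence.lower().split() if w not in DICTIONARY]

print(misspelled("the wierd rhythm"))      # ["wierd"] -- caught
print(misspelled("their store is there"))  # [] -- a misused "their" sails through
```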

But OpenAI's products were very different from Clippy. A revolution was happening in AI tech that made language models look less like a simple spellcheck and more like the human brain, detecting patterns and storing them in a network of neurons. Technologists trained those neural networks through a process they called deep learning. They trained the AI on a lot of data, close to the entire internet.

Thanks to Vox Media's partnership with OpenAI, we know they're likely training the language model on this podcast. The words I'm saying right now. No one had ever trained an AI on the entire internet before, at least in part because of how expensive it is. It takes a ton of energy and compute power.

But OpenAI, founded by a billionaire, raised the funds to make an attempt at the biggest, baddest, largest language model the world had ever seen.

They started going, okay, what if the secret to trying to build super intelligent god AI or whatever is just to spend more money and have more neurons and to have more connections, feed it more data? What if that's all there is? What if you can build something that is more intelligent than any human who's ever lived just by doing that?

One of their earlier attempts before ChatGPT was GPT-2 in 2019. You could similarly give it a specific task, like design a luxury men's perfume ad for the London Underground. Make it witty and concise. The London Underground is a great place to advertise. It's a great place to get your message across.

It's a great place to get your product noticed. Look out, madmen. GPT-2 was not exactly coming for copywriter jobs. But for people like Kelsey, who were watching the technology closely... I was like, wow, this is like miles beyond what AI chatbots were capable of last week. This is huge. GPT-2, the language prediction machine, was showing some real promise.

She wasn't alone in that feeling. Investors like Microsoft poured millions more dollars into the next few models, which were bigger and bigger. Be the scent that turns heads. And a couple years later, OpenAI released ChatGPT. Visual, a captivating image of the perfume bottle surrounded by vibrant city lights symbolizing the urban lifestyle. Embrace the city.

Most people weren't paying any attention to AI. And so for them, it was like a huge change in what they understood AI to do. ChatGPT was the first time that normies like me even thought about AI in any real way. All I wanted to do was fix my email. I did not expect to have a minor existential crisis about how much the world is about to change.

And this is only proving that one day, AI will take over human intelligence. I spent about two hours just typing back and forth with this AI chatbot, and it got pretty weird. The AI confessed to loving Kevin and tried to convince him to leave his wife.

People at OpenAI or competitors were saying like, yeah, the plan is to build super intelligence. We think we're going to do it by 2027. People were like, okay, startup hype. For some reason, everybody who runs a startup feels the need to say that they're going to build God and the human race. And then after ChatGPT was genuinely impressive, people started taking them a bit more seriously. And

A lot of those people were nervous. People weren't so nervous about ChatGPT, but what ChatGPT represented, the way they got the language model to sound so much smarter so quickly, wasn't through intricate code. They just made the model bigger.

Which suggested to some people that the path to building God, or whatever, was through brute force. Spending more and more money to build a bigger and bigger machine. So big, we didn't really understand why it did what it did.

We can't point to a line of code to say this is why the robot got so much better at writing a perfume ad. And if we someday do build something that's smarter than us, whatever that means, we won't be able to understand why it's smarter than us. The trouble with this, it seems to me, is that AI will come for copywriter jobs. It could come for all our jobs.

But rationalists I spoke to say that's nothing compared to the bigger trouble ahead, a potential apocalypse. But I do also kind of think that it is a very important priority for me to have the best possible time in the next five to ten years and just to do the very best I can to squeeze the joy out of life while it is here. Do you have an example of that? No.

One I can talk about on a podcast? I mean, yes, I joke, but I'm pretty involved in the kink community, and that's very important to me. Many rationalists I spoke to live in polyamorous communities because they believe monogamy is irrational.

Some aren't sure if it's rational to have children, given the high probability of things going very, very wrong because of AI. What's my P-Doom, as our community says? P-Doom. It's a shorthand I heard at the conference, meaning probability of doom. It's a phrase that gets thrown around at this conference. People will literally go up to you and go, so what's your P-Doom? And it's a shorthand for what is the probability that humanity doesn't make it in the long term.

And this is a mathy bunch, so they get specific. I guess the answer I usually give is something like over 50%. I mean, I think it's like somewhere around 80, 90. Eliezer Yudkowsky's P-Doom is very high. I've read it's over 95% these days. But then I've seen him tweet that P-Doom is beside the point. I spotted Eliezer Yudkowsky pretty much the moment I stepped into the conference.

He was hard to miss. He was the one wearing a gold sparkly top hat all weekend. I was the one who was clearly lost, carrying a big furry microphone for three days, trying to get people to talk to me. It wasn't until day three of the conference that I mustered the determination to approach Eliezer for an interview. Determination was necessary because he was always surrounded by a cluster of people, a cluster of mostly dudes, listening to him speak.

I asked him if it would be okay if I pulled out my microphone. Everyone has been looking at this like it's a weapon. It is. It is, I know. Over the last few years, Eliezer and the rationalists have gotten some bad press. Some rationalists express their frustration at journalists who focus on the polyamory that happens in the community. Some critics of rationalism, to put it crudely, call them a sex cult.

And then there's the unsavory things people associated with the community have said. One philosopher who helped popularize the paperclip maximizer, Nick Bostrom, once wrote that he thought Black people were less intelligent than white people. He has since apologized. But critics highlight this comment and the mostly white demographics of the rationalist community to question their beliefs.

I never really know why anyone agrees to talk to me, but can you introduce yourself? I'm Eliezer Yudkowsky. This event is probably more my fault than the fault of anyone else around. And can you describe your outfit right now? I'm currently wearing a sparkly multicolored shirt and a sparkly golden hat. You can probably hear it in my voice. I was nervous to talk to him.

He's known for being a bit argumentative, very annoyed with journalists and with the world more generally, for not being smart enough to understand him, for not heeding his warnings. I don't know. How would you summarize what you want the world to know?

in terms of AI? The world is completely botching the job of entering into the issue of machine superintelligence. There's not a simple fix to it. If anyone anywhere builds it under anything remotely like the current regime, everyone will die. This is bad. We should not do it. Do you feel gratified at all to see that your ideas entered the mainstream conversation? Do you feel like they have? The circumstances under which they have entered the mainstream conversation are catastrophic.

And if I was the sort of person who was deeply attached to the validation of seeing other people agree with me, I would have picked a much less disagreeable topic. I was here to try to not have things go terribly. They're currently going terribly. I did not get the thing I wanted.

Eliezer has been on a bit of a press tour, giving interviews and TED Talks, saying OpenAI is on track to cause catastrophe.

So it's a funny thing because I have one position of deep sympathy with Eliezer. If you become convinced that this is a huge problem, it makes perfect sense to go on a writing tour trying to explain this to people. And also, I think it's kind of predictable that a lot of people heard this and went, oh, AI is going to be really powerful. I don't think you're right about the thing where that's a problem. I want the powerful, important thing.

And some people seized on it and were like, because this is powerful and important, we should like invest now. And I feel kind of sad about this. I can understand why Eliezer was hesitant to talk to me. His message to the world has been totally lost in translation. In his mind, it's backfired.

Even at his own conference, there were attendees who worked for places like OpenAI, the companies building the supposed death machine he was afraid of.

He thought that our best chance of building a super intelligent AI that did what we wanted and didn't like, you know, seize power from humans was to build one that was very well understood. One that sort of from the ground up, we knew why it made all the decisions that it made. Large language models are just the exact opposite of that.

I will say, even after talking to Eliezer and Kelsey and a bunch of rationalists, it's still hard to imagine how something like ChatGPT or Google's AI, which once told someone to add glue to stick cheese on pizza, is going to become the invention of all inventions and possibly catastrophic. But I can understand how building something big that you don't understand is

a scary idea. The best AI metaphor I came across for my brain was not about paperclips. It was by a non-rationalist writer, a guy named Brian Christian, who describes how training an AI can go wrong the way parenting a kid can go wrong. Like, there's a little kid playing with a broom. She cleans up a dirty floor. And her dad, looking at what she's done on her own, says, "Great job! You swept that really well!"

This little girl, without skipping a beat, might dump the dirt back on the floor and sweep it up again, waiting for that same praise. That's not what her dad meant for her to do. It's hard to get the goals right in teaching a kid to be good. It's even harder to teach good goals to a non-human robot.

It strikes me as like almost like a parenting problem. I ran this parenting metaphor by Kelsey with her seven-month-old on her lap. I think there's some serious similarities. And I do with my kids struggle with trying to steer something that you don't have perfect control over and that you wouldn't even want to have perfect control over, but where it could go extremely badly to like just let the dice fall where they may. If we just let the dice fall where they may.

Rationalists say we could have an apocalypse on our hands. And they say it won't be one we saw coming. It won't be a Hollywood-style Terminator situation. It probably won't have paperclips either. They don't pretend to know exactly how apocalypse could befall us. Just that it'll probably be something we haven't even imagined yet. But I have trouble getting caught up in what could happen when it feels like bad things have already started to happen thanks to AI.

AI is not hypothetical anymore. It's arrived in our lives. I'm not kept up at night about a hypothetical apocalypse. I find myself asking "now" questions. Questions like, what is OpenAI doing with my voice right now?

Is there anything to do about problems with AI short of the annihilation of humanity? It sounds very exciting. You know, like if I were a big science fiction geek, I would be so into that. Not all technologists seized on Eliezer Yudkowsky's claims. What is he even talking about? This is like word salad. Like this doesn't even make sense.

One group of technologists didn't actually seize on any of his claims. There's one thing to have the conversation as a thought experiment. It's another thing when that kind of thought experimentation sucks up all of the money and the resources. The more I dig into the AI world, the more I see disagreement.

between technologists. I do worry about the ways in which AI can kill us, but I think about the ways in which AI can kill us slowly. They've been called the AI ethicists, and they say we've been paying attention to all of the wrong things. That's next time.

Good Robot was hosted by Julia Longoria and produced by me, Gabrielle Berbey. Sound design, mixing, and original score by David Herman. Fact-checking by Caitlin PenzeyMoog. Editing by Diane Hodson and Catherine Wells. Special thanks to Future Perfect founder Dylan Matthews, to Vox's executive editor Albert Ventura, and to Tom Chivers, whose book The Rationalist's Guide to the Galaxy was an early inspiration for this episode.

If you want to dig deeper into what you've heard, head to vox.com slash good robot to read more future perfect stories trying to make sense of artificial intelligence. Thanks for listening.

If you wear glasses, you know how hard it is to find the perfect pair. But step into a Warby Parker store and you'll see it doesn't have to be. Not only will you find a great selection of frames, you'll also meet helpful advisors and friendly optometrists. Yep, many Warby Parker locations also offer eye exams. So the next time you need glasses, sunglasses, contact lenses, or a new prescription, you can find them at Warby Parker.

You know where to look. To find a Warby Parker store near you or to book an eye exam, head over to warbyparker.com slash retail.