
Good Robot #3: Let’s fix everything

2025/3/19

Unexplainable

Chapters
The episode introduces the 'Good Robot' series, recounts Peter Singer's famous drowning child parable, and explores its influence on moral philosophy and effective altruism.
  • Peter Singer is a philosopher known for his provocative ideas and moral philosophy.
  • Singer's drowning child parable challenges individuals to consider moral obligations beyond immediate surroundings.
  • The parable inspired the effective altruism movement, focusing on maximizing good through rational decision-making.

Transcript

This episode is brought to you by Indeed. When your computer breaks, you don't wait for it to magically start working again. You fix the problem. So why wait to hire the people your company desperately needs? Use Indeed's sponsored jobs to hire top talent fast. And even better, you only pay for results. There's no need to wait. Speed up your hiring with a $75 sponsored job credit at indeed.com slash podcast. Terms and conditions apply.

At Sierra, discover great deals on top brand workout gear, like high-quality walking shoes, which might lead to another discovery. 40,000 steps, baby! Who's on top now, Karen? You've taken the office step challenge a step too far. Don't worry, though. Sierra also has yoga gear. It might be a good place to find your zen. Discover top brands at unexpectedly low prices. Sierra, let's get moving.

It's Unexplainable. I'm Noam Hassenfeld. And this week, the third episode of our series, Good Robot. If you haven't heard the first two, they should be right behind us in your feed. So just scroll back, find a comfy spot to listen, and meet us right here when you're done. Once you're all prepped and ready for me to stop talking, here is episode three of Good Robot from Julia Longoria. Okay. On your way to work, you pass a small pond.

Children sometimes play in the pond, which is only about knee deep. The weather's cool though, and it's early. So you're surprised to see a child splashing about in the pond. As you get closer, you see that it is a very young child, just a toddler, who's flailing about, unable to stay upright or walk out of the pond. You look for the parents or babysitter, but there's no one else around. The child is unable to keep her head above the water for more than a few seconds at a time. If you don't wade in and pull her out, she seems likely to drown.

What should you do?

Breakfast will be ready shortly. So we don't have to go anywhere first before I check in? This past spring, I joined hundreds of people on a sort of pilgrimage to Princeton, New Jersey, to honor a man whose ideas touch people's lives in pretty profound ways. And for people who somehow haven't come across his work, how would you describe his influence? Oh, it was life-changing.

The man in question is philosopher Peter Singer. People gather to celebrate his retirement from Princeton, where he taught moral philosophy for over two decades. And he's no average professor. He got standing ovations. Like, who gets, what philosophers get that? I mean, he's in the news. There are protests. He's what you might call a provocative thinker.

And he's become a bit philosopher famous, like a modern-day Socrates or Nietzsche, spreading his ideas far beyond the philosophy world. His writing helped inspire a TV show. Hello, everyone, and welcome to your first day in the afterlife. The Good Place, starring Ted Danson and Kristen Bell. He's known for pushing people to think about how they can do the most good in the world. Got me to be vegan.

Really? Absolutely. A lot of people at his retirement party had been inspired to give up eating meat based on his writing about the moral cost of animal suffering. I'm looking at the vegetarian and thinking there's Peter's nature. Which is why the food at his retirement conference was an assortment of vegan delights.

Avocado toasts, broccoli. It is very gassy foods all around. People came to this three-day event in his honor from all walks of life, from all over the globe, Malaysia and China and Minnesota. I spoke to local politicians, a writer, a track coach. How would you describe his influence in the world? Powerful because he has planted seeds that grow and expand.

Coach told me he buys used paperbacks of Singer's books in bulk. I still like to carry around paper books, especially like for travel. And anytime he travels, he leaves a copy in his hotel night table for the next person to find. Yeah, really? Yeah, like Tanzania and Guatemala. Kind of like you'd find a Bible. Yeah, so you just leave them there. Wow. And the Bible vibes are appropriate. Singer poses provocative moral questions through parables.

His most famous one is the drowning child thought experiment. Some people at his retirement party knew it by heart. Just imagine if you walk past a pond, you're wearing nice shoes, nice suit, and then a child was drowning. What is the right thing to do in that scenario? Initially, the answer seems clear. Obviously, I'm going to rescue the child. But Peter Singer asks you to take the thought experiment further.

What if the child isn't right in front of you? What's the real significant difference between someone, you know, in a pond right next to you versus someone across the world? Assuming that it takes the same effort to save a life, no matter the distance, we should save them. Well, I was looking for something that would persuade people. That's Peter Singer. The current issue then was the crisis in what's now Bangladesh. And some people were saying, well, you know...

I didn't cause this, it's not my responsibility, it's someone's responsibility over there. And I was trying to think of a way of convincing people that it's still wrong not to try and prevent some great harm occurring even if you have no responsibility for it. People talk to me about reading this parable for the first time almost like a conversion moment, inspiring them to help the drowning children oceans away from them.

I read it and I immediately gave to several charities that I've been thinking about. And some people started to take the idea even further. How far should they go to save these proverbial children? You know, would there be any luxuries left in our lives if we took this seriously? What if there were drowning children we didn't know about? What if those children didn't even exist yet? What about time? Not just across physical space, but across time.

This is a riddle that we can never quite resolve. This riddle started a movement. Peter Singer's drowning child produces the effective altruist movement. Effective altruism aims to use reason and evidence to do the most good possible. A moral movement rooted in rationality, which some rationalists found themselves gravitating toward. Because for me, the underlying impulse has always been, let's fix everything.

The drowning child became a catalyst that changed the way some of the wealthiest people in the world spent their millions to fix everything. I think AI is one of the biggest threats, but I think we can aspire to guide it in a direction that's beneficial to humanity. And number one on the list of things to fix, saving the world from AI apocalypse. ♪

This is episode three of Good Robot, a series about AI from Unexplainable in collaboration with Future Perfect. I'm Julia Longoria.

Support for Unexplainable comes from 1Password. If you're a security or IT professional, you probably got a mountain of stuff to protect. You got devices, applications, employee identities, not to mention all that spooky stuff outside your security stack, like unmanaged devices, shadow IT apps, non-employee identities. Spooky.

Fortunately, there's 1Password Extended Access Management. 1Password says millions of users and more than 150,000 businesses trust their award-winning password manager. But they secure more than just passwords. 1Password says their Extended Access Management secures your company without leaving your employees behind.

That means they block unsecured and unknown devices before they access your company's apps. With regular third-party audits, 1Password exceeds the standards set by various authorities and is a leader in security. You can go to 1password.com slash unexplainable to secure every app, device, and identity, even the unmanaged ones. Right now, listeners can get a free two-week trial at 1password.com slash unexplainable. That's 1password.com slash unexplainable.

Today at T-Mobile, I'm joined by a special co-anchor. What up, everybody? It's your boy, Big Snoop D-O Double G. Snoop, where can people go to find great deals? Head to T-Mobile.com and get four iPhone 16s with Apple Intelligence on us, plus four lines for $25. That's quite a deal, Snoop. And when you switch to T-Mobile, you can save versus the other big guys, comparable plans plus streaming. Respect. When we up out of here...

See how you can save on wireless and streaming versus the other big guys at T-Mobile.com slash switch. Apple intelligence requires iOS 18.1 or later.

Make your next move with American Express Business Platinum. Earn five times membership rewards points on flights and prepaid hotels booked on amextravel.com. And with a welcome offer of 150,000 points after you spend $20,000 on purchases on the card within your first three months of membership, your business can soar to new heights. Terms apply. Learn more at americanexpress.com slash business dash platinum. Amex Business Platinum. Built for business by American Express.

The system goes online on August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. You might say a version of this story starts with a pond. I don't know why I thought of the drowning child. I mean, maybe I was in Oxford and a lot of the college grounds, like where I was trying to have lunch, had these shallow ponds in them, ornamental ponds.

Peter Singer was inspired by ponds at Oxford University, where he was working. Maybe that's what put it in my head, but I can't really say for sure. And his pond story passed from one Australian philosopher at Oxford to another Australian philosopher at Oxford. I'm having trouble thinking of exactly where he's imagining.

there's certainly rivers. Okay, maybe it wasn't a pond, maybe it was a river. Who cares? The point is, the drowning child didn't gain legs, so to speak, until this Australian philosopher entered the picture. Hi, I'm Toby Ord. I'm a philosopher at Oxford University. As a grad student at Oxford, he was assigned to write an essay about the ideas in the riddle. Who is the drowning child he needed to save?

This got him thinking. I came to think, actually, we probably do have these duties to help people who are much poorer than ourselves, even if it requires really quite substantial sacrifices. That's when the drowning child seemed to go from brainy thought experiment to a moral imperative.

At the time, Toby had a modest academic salary, but he was inspired to give away 10%, like a 10% tithe in the religious world, to charity. It was only after really sitting with it for a couple of years, actually, that I really made a decision to try to take this idea further. He thought, what if I got someone else to give 10% of their income, and then that person got another person to give 10%?

Before long, we're saving exponentially more drowning children. So ultimately, I launched an organization just after I turned 30 in 2009 with Will MacAskill to try to encourage other people to make a similar choice. We started with 23 members. Pretty soon, Toby's group of givers tripled and then quadrupled.

Peter Singer himself joined in giving and spreading the good word. More and more people are understanding this idea. Here he is giving a TED talk in 2013. And the result is a growing movement, effective altruism. They gave the movement a name.

Effective altruism was really trying to take two insights: that saving a life is a really big deal, that's the first one, and that saving a hundred lives is a hundred times bigger deal. The idea at the heart of effective altruism was a contrarian one at the time. You used to get mail from charities tugging at your heartstrings with a photo of a poor kid you could save.

Effective altruism was saying we can't rely on warm fuzzies alone to make choices about how to do good. It's important because it combines both the heart and the head. The heart, of course, you felt. You felt the empathy for that child. But it's really important to use the head as well. So is this a very analytical approach to how to do good in the world?

Vox writer Kelsey Piper first heard about effective altruism, or EA as it's sometimes called, in high school. You might already notice some parallels to what the rationalists found appealing. The rationalists, the niche internet community Kelsey also found in high school, led by idiosyncratic blogger and AI researcher Eliezer Yudkowsky. There was pretty early on a ton of overlap in people who found the effective altruist worldview compelling and people who found rationalism compelling.

Probably because of a shared fondness for thought experiments. Thought experiments with, like, pretty big real-world implications, which you then proceed to take very seriously. Effective altruism chapters started sprouting up on college campuses across the U.S. and the U.K.

It's not surprising that's when I first heard whispers of the movement in college. A time when how can I do the most good in the world is a very live, soul-crushing question.

It appeals to young people. I think there's something about being in college. It really feels like you can do anything. People are a lot more open to, I'm going to radically rethink everything I'm doing with my life. Going off to college, Kelsey was sold. And I was like, yes, I want to start a chapter. And I got on a Zoom call, I think, with some organizer who was a few years older than me, who was like, here's what you do. Let me tell you a little secret. I say that I'm an effective altruist.

That just means a person trying to be effective at altruism. This is an online video called "Introduction to EA." A student leader stands in front of a blackboard, trying to recruit students to Berkeley's EA chapter. And effective altruists understand that choosing from our heart is unfair. So if we can't choose from our heart, we need some kind of framework to choose the best cause. The approach had three prongs.

Number one: Choose tractable causes. Ones you can actually solve in a measurable way. Next: Choose neglected causes that a million people aren't already trying to solve. Neglected causes are going to look like bad causes.

They're going to look weird. They're going to look like fiction. That's why they're neglected. And finally, choose important causes. Importance is the product of scale and severity. Toby Ord came up with a calculation for importance. If I could save 10,000 lives instead of a single life, this was extremely important. Important causes saved more drowning children.
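The three-pronged rubric described above, with importance as the product of scale and severity, can be sketched as a toy scoring function. To be clear, the causes, scales, and numbers here are invented for illustration; this is not EA's actual model or data:

```python
# Toy sketch of the EA cause-prioritization rubric described above.
# All causes and scores below are made up for illustration only.

def importance(scale, severity):
    """Importance as the product of scale and severity, per Toby Ord's framing."""
    return scale * severity

def priority(scale, severity, tractability, neglectedness):
    """Combine the three prongs (important, tractable, neglected) into one score."""
    return importance(scale, severity) * tractability * neglectedness

# Hypothetical causes scored on 0-10 scales.
causes = {
    "malaria prevention": priority(scale=8, severity=9, tractability=9, neglectedness=4),
    "ornamental pond safety": priority(scale=1, severity=9, tractability=10, neglectedness=1),
}

# Rank causes from highest to lowest priority score.
for name, score in sorted(causes.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score}")
```

A multiplicative score like this captures the episode's point that a cause scoring zero on any prong (say, a completely intractable problem) drops out entirely, no matter how important it is.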

There's an idea of a quality-adjusted life year, trying to set up a kind of universal way of thinking about health. So if you could extend someone's life in full health by a year, that would be one QALY. So you were sort of taking this like seemingly amorphous and overwhelming problem of like poverty around the world and health problems around the world and sort of making it concrete with math, in a sense. Is that right?

Yeah. It's actually very, very crucial to do the math. By 2020, effective altruism's mathematical approach had real-world implications beyond college campuses. It had taken the philanthropy world by storm. This scientific approach to charitable giving and work is on the rise. It's being used by some of today's class of billionaire philanthropists. Billionaire Bill Gates of Microsoft? No.

Incidentally, the company that brought you Clippy. And billionaire Elon Musk, who would become the co-founder of OpenAI. They all got on board with EA's mathematical approach.

EA Groups vetted charities and recommended effective ones these billionaires went on to give to. In this way, EA became a sort of a check on charities. You say you do good, but how much good? You know, donating to charity isn't about the warm glow in our hearts of doing good. It's about the fact that there is a kid who is dying of malaria, and if you donate some money, you can save their life, and then they won't be dead.

The math has led effective altruists to spend a lot of money trying to cure malaria. Shipping malaria nets to Africa is not exactly the most innovative or provocative thing to do, but according to the EA calculus, which wanted to put hard numbers on outcomes, it was an effective choice. And when Kelsey was deciding how to do the most good in her life...

She got interested in putting numbers to journalism. Future Perfect was sort of coming at stuff from that angle, and I found that really compelling. She went to work for Vox's Future Perfect. I learned Future Perfect was initially founded in an attempt to apply the EA rubric to journalism, aiming to cover issues that were important, tractable, and neglected.

EAs also applied that mathematical rubric to answer another question, the biggest question that plagued me as a young 20-something just starting out in the world. Their idea was, in your career, you have 80,000 hours. Spend them on something really important to you, something that will make a big difference. The question of, what should I do with my life?

I got involved in the EA community in college and it's been a really big part of how I've decided what to do with my life. That voice you just heard is crypto billionaire Sam Bankman-Fried. At one point, he was EA's biggest poster child, being interviewed about the movement on the news. I am curious too because you are an effective altruist. I assume that you still are. And you very publicly adopted the role of earning to give. Yeah.

Sam Bankman-Fried went into crypto in order to earn a crap ton of money so he could give it away. The logic was, if you choose to be a crypto billionaire instead of, say, an aid worker, your fortune could hire a whole army of aid workers. You might be a more effective altruist that way.

Sam Bankman-Fried famously gave his money to causes like pandemic prevention, artificial intelligence, and journalistic outlets like Future Perfect and ProPublica.

And then the news came out that this effective altruist had committed some serious crimes. The whole fiasco cracked a bit of the mathy idealism at the heart of effective altruism.

In the case of Sam Bankman-Fried, some made-up math helped him pull off one of the biggest financial frauds in U.S. history, costing some of his victims their life savings. A challenge about being a very new small movement is that, yeah, you're going to be defined by whoever the most prominent person is. And if the most prominent person is Crypto Fraud Guy, then you've got a problem. Kelsey Piper was one of the first journalists to interview Sam Bankman-Fried in the aftermath.

Her writing for Future Perfect was cited in his sentencing document. Future Perfect stopped using the money they got from Sam Bankman-Fried's philanthropic arm. Vox Media says they're waiting for a restitution fund to give the money to victims. And on a personal level, it was particularly hard on people like Kelsey. No, it was definitely upsetting.

I listened to Taylor Swift's Anti-Hero on repeat for like three days. Didn't do much else. For her, effective altruism had become more than just a guide for charitable donations or what to do with her career. It had become a way of life. Am I hand-washing or being interviewed? Both, I think. I think there's authenticity added by the clanking dishes in the background. Okay.

When I interviewed Kelsey at her home in the Bay Area, I met several of her housemates. I'm Clara. I live in a weird Bay group house. Many of them found each other along the pipeline from rationalism to effective altruism. Can you please stay out of the kitchen right now? I'm trying to cook. I want the kitchen free of kiddos because kiddos are distracting to cooks.

Here in the Bay Area, they cook together, raise kids together. They live in communal group houses to save money to be able to donate to effective causes. This interconnected way they live out their values has prompted criticisms that it's a little culty. The idea is in community with one another, they push each other to be more rational. The lines between rationalism and effective altruism begin to blur in the Bay.

I have always thought of myself as more centrally a member of the EA community than the Rationalist community, once there was an EA community. While I was in town, Kelsey invited me and several other out-of-towners she didn't know very well to her Shabbat dinner, where they prayed and sang. We're going to have a kiss. Let's do it.

A lot of people make the comparison to a religion, and I think that's pretty fair. A lot of what a church offers people is the combination of a unifying philosophy. There are certain premises you have in common, and the support of a community of people who care about you, know you personally, are willing to give you a hand, help you get a job, all of that. And I think the people who are like, it's a cult, are basically mistaken. But it's a religion, I'll kind of cop to that one.

Most of the world's recorded religions have developed ideas about how the world ends. What humanity needs to do to prepare for some kind of final judgment.

In the Bay Area and on Oxford's campus, effective altruists started to hear from rationalists they were in community with about what an apocalypse could look like. And of course, since a lot of rationalists thought that AI was the highest stakes issue of our time, they started trying to pitch, you know, people in the effective altruist movement, like, look, getting AI right is a major priority for charitable giving. In the early days of EA...

Effective altruism founder Toby Ord came across rationalist Eliezer Yudkowsky's blogs, where he warned of an AI apocalypse. I thought that his arguments were pretty good. Do you have a P-Doom currently?

Yeah, so it's funny. I actually think these uses of the word doom are a bit misguided. P-Doom: the rationalist shorthand for probability of doom from an AI apocalypse. Toby's not into it. Because doom means that it's kind of a foregone conclusion. To be doomed means there's 100% probability that you will die.

So I think it can make people feel powerless, whereas I think that these things are very much in our control. How effective altruism set out to save us from an AI apocalypse. After the break. Business taxes. We're stressing about all the time and all the money you spent on your taxes. This is my bill?

Now Business Taxes is a TurboTax small business expert who does your taxes for you and offers year-round advice at no additional cost so you can keep more money in your business. Now this is taxes. Intuit TurboTax. Get an expert now on TurboTax.com slash business. Only available with TurboTax Live Full Service.

The PC gave us computing power at home, the internet connected us, and mobile let us do it pretty much anywhere. Now generative AI lets us communicate with technology in our own language, using our own senses. But figuring it all out when you're living through it is a totally different story. Welcome to Leading the Shift.

a new podcast from Microsoft Azure. I'm your host, Susan Etlinger. In each episode, leaders will share what they're learning to help you navigate all this change with confidence. Please join us. Listen and subscribe wherever you get your podcasts. ♪

This week on The Verge Cast, we have questions about smartphones. Questions like, why isn't Siri better? And where is the better Siri that Apple has been promising for a long time? Questions like, why are all of our smartphones kind of boring now? And why is it that all of the interesting ideas about how smartphones could look or how they could work or what they could do for you are happening in countries like China and not in the United States?

We have answers, and we have some thoughts, and we also have a lot of feelings about what a smartphone is actually supposed to be in our lives. All that and much more, much, much more on The Verge Cast, wherever you get podcasts. Computer, is there a replacement beryllium sphere on board? Negative. The people at Universal Dynamics have programmed us to put our targets at ease so as to more efficiently facilitate their collection.

At the Shabbat dinner, I was seated next to a 22-year-old who was very curious about my big furry microphone. So, um, can you introduce yourself? Um, yeah. So, uh, I'm Tom. I, uh... It was kind of funny. So it sort of felt like I...

I was an effective altruist rather than, like, being convinced to be one. I had a teacher who assigned me to read Peter Singer. Tom read Peter Singer's Drowning Child as a kid. I, like, decided at some point in high school that I would dedicate my life to trying to do as much good as possible. As far as how to do as much good as possible, Tom told me he recently made a big decision about that. I, uh...

dropped out of Harvard a year ago in my junior year and I'm now working as an ML scientist at an AI hardware startup. We should get out of these people's house because it's past bedtime. At that point we had to take our conversation outside.

You dropped out of Harvard, so are you... How many years out of that are you? One year. I actually would have graduated on Thursday if I had stayed in. Wow, okay. So you're like around 22-ish? Exactly. It was like such a hard...

decision. Like, no matter... A decision of, like, what you're going to do with your life? Yeah, because no matter what you're doing, you're just, like, abandoning a ton of people. Fundamentally, I feel like the world is in a state of triage. In a state of what? Triage. Like, there's so much going wrong that needs to be fixed. If I'm, like, working on providing malaria nets in Africa, I'm in some sense abandoning, like, all the starving children in India. Um...

I can only really focus on one place. What is the place that most urgently needs my help? And, you know, I figured that it's like probably the future and like specifically the problems in the future which might be created by AI. I've heard of kids Tom's age dropping out of school to go into the Peace Corps or like doing some kind of religious mission. But going to work for an AI lab to save the future...

wasn't intuitive to me. So why choose this route that you're on right now? Obviously, it's not. You're young. You have your whole life ahead of you. But why choose this route? You know, I don't just care about the people and animals that are alive today. I care deeply about future generations. I think about our children and our children's children and so on and so forth. Our children's children. And so...

Tom told me he's trying to save future drowning children from AI apocalypse. His P-Doom? Compared to rationalist Eliezer Yudkowsky's, which is off the charts, Tom's is 10%. My view is that we should treat AI as being very likely to be the biggest thing ever and treat the coming decades as likely the most important decades in human history.

This line, "the most important decades in human history," is roughly a quote from a book by effective altruism founder Toby Ord. Our present moment is just a very tiny slice of this much longer story of humanity. Around the time Toby was building the effective altruist movement, trying to maximize the number of drowning children he could save, rationalists joining the movement were trying to convince Toby that AI should be a top effective altruist priority.

He wasn't convinced by their paperclip maximizer thought experiment. But he was convinced by the idea that AI could threaten the story of humanity. This idea about existential risk. What convinced him was, in a way, his own argument. The math of it all.

That preventing an AI catastrophe could save not just the drowning children today, but tens of thousands of future generations. Trillions of drowning children. There's been about 10,000 generations of humanity so far, and it seems very plausible that there could be 10,000 or more generations to follow us.
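The back-of-the-envelope arithmetic behind "trillions" runs roughly like this. The generation counts are from the episode; the population per generation is an assumed round number I've added for illustration, not a figure Toby Ord gives here:

```python
# Rough arithmetic behind the "trillions of future lives" framing.
generations_so_far = 10_000    # "about 10,000 generations of humanity so far"
generations_to_come = 10_000   # "10,000 or more generations to follow us"
people_per_generation = 10e9   # assumption: ~10 billion people per generation

future_lives = generations_to_come * people_per_generation
print(f"{future_lives:.0e} potential future lives")
```

Under these assumptions the total comes out around 10^14, on the order of a hundred trillion lives, which is why, in expected-value terms, even a small reduction in extinction risk can dominate the math against present-day causes.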

Toby came to believe we're living at a crucial moment in human history, where up until now, humans have ruled the Earth. Why is it humanity that's calling the shots on the Earth and not butterflies or ravens or chimpanzees? We've ruled because of our smarts. You know, something to do with our brains, not to do with our brawn. What if we weren't the smartest beings on Earth anymore?

I think ultimately the most compelling overall argument to me is that if you survey researchers on AI... He says AI researchers were telling him that possibility of a super intelligence smarter than us was around the corner. Within, say, the next 30 years or so,

that they think that that's about as likely as not. And, you know, how would we still be calling the shots? How would we not be perhaps subservient to these new systems? Toby wrote a book laying out these arguments. He named it "The Precipice" for the cliff he sees humanity sitting on at this moment in history. What's particularly pernicious about these existential risks is that they're something that if our generation drops the ball, there won't be any more generations.

He thinks we can determine the long-term survival of our species. He gives this philosophy yet another ism: long-termism. These strong arguments that these risks were real slowly made people think, "Well, if I want to work to focus on that, what should I be doing? What charities should I be donating to?" So effective altruists wouldn't just give their charity dollars to things like ending malaria.

They'd also give to charity to prevent an AI apocalypse. And the way to avoid bad robots taking over the world, some people decided, was to use EA money to build a good robot. And not just good, super intelligent. A magic intelligence in the sky. You might remember those are the words of the ChatGPT company's founder, Sam Altman.

He started his nonprofit, OpenAI, with EA charity dollars from the group Open Philanthropy. Charity dollars also went to OpenAI's competitor, Anthropic, whose CEO also wants to build a good robot. Or as he put it, a machine of loving grace.

And it wasn't just EA charity dollars that went toward this cause. Some of you may have noticed that a bunch of people in this community seem to think that AI is a big deal. AI also became a common career path for young effective altruists. Eventually, the winter of sophomore year, I remember just like thinking through it and thinking like, "Oh, oh wait, yeah, I don't think there's really a way I don't go into AI somehow."

22-year-old Tom went into AI with the counsel of Harvard's Effective Altruism group. And in the course of my reporting, I met many other young people. I'm like, wow, damn, this AI safety thing. Crap, like, do I need to work on it? Like, what can I do? Who, around the time ChatGPT came out, decided the most effective career at doing good in the world is going into AI safety. And I'm like, damn, I think I can actually move the needle on this. They did the math.

and thought saving future children from AI apocalypse was neglected, tractable, and important. Open Philanthropy told us over 410 million EA dollars have gone toward addressing risks from advanced AI, making up 12% of their total giving, roughly the same percentage that's gone toward malaria prevention.

I guess I would say if I try to work out my best guess of the most important issues of our time, I think AI risk is probably very high at the top. Wow. So it's number one above the current drowning children. You'd put it above the problems we face in the present? I think I would, sadly. I've been struggling with the math of it all. I can see how it's important to think about our long-term future.

But no matter how many math problems EA people put in front of me, I have a hard time seeing how saving trillions of future children from AI apocalypse is the most important tractable problem of our time. How does a movement built around helping in a measurable way with things like malaria nets turn to a cause that requires you to almost predict the future? It's almost like a religion or something where it requires faith.

that good things will come without those good things being clearly specified. This is the criticism of ethicists like Dr. Margaret Mitchell. To them, the solvable, tractable problems are the harms AI is doing right now. Problems like bias, surveillance, environmental harms. But instead, funding often goes toward addressing future hypothetical harms.

Or it goes toward building a superintelligence, something many ethicists don't think we should be building at all. It seems to be like funding for sort of like fanciful ideas. There's one follower of long-termism who's found his way to the White House. This was no ordinary victory. This was a fork in the road of human civilization. It is thanks to you that the future of civilization is assured.

And we're going to take Doge to Mars. It was hard not to chuckle when billionaire Elon Musk talked about his goal of colonizing Mars after President Trump's inauguration. But he's not joking. When he started SpaceX, he intended it to be an insurance policy for humanity in case apocalypse strikes. It's also why he says he went into AI.

It's all to protect the future drowning children. You know, I think this is actually fundamentally important for ensuring the long-term survival of life as we know it, to be a multi-planet species. The long legs of the drowning child thought experiment have taken us very far away from its original intent

of trying to get us to care about a crisis in Bangladesh. Oh, the drowning child in the pond has certainly developed a life of its own, yeah. In what way? What do you mean? I went back to the author of the drowning child thought experiment, Peter Singer, at his retirement party. I hope that I've left a legacy in my writings, that they will lead people to think differently about what we owe people in extreme poverty and other parts of the world. That's what the drowning child in the shallow pond was supposed to suggest. But

Interpreting the parable to mean that the biggest issue of our time is saving future children from an AI apocalypse? I think there's been too much focus. I'm not dismissing it. I think it's good that there are some people thinking about that and working on it. But compared to some of the other problems that are around...

I have the sense that people like it because it's a kind of nerdy problem that's, you know, interesting things to think about. So I think that's why it gets more attention. You might know that Peter Singer himself is no stranger to, shall we say, outlandish interpretations. Using his own utilitarian philosophy, he's argued that severely disabled children add suffering to the world.

And it might be justifiable in maximizing happiness for the parents to euthanize them. So yeah, from where I sit, using math to maximize is not always the answer. If you stare a little too hard at the numbers, the humans begin to fade out of focus. When I think about the drowning child as it relates to AI, I don't think about the math. I've been thinking about something else.

While reporting this story, the news broke that a 14-year-old boy in Florida named Sewell had killed himself. He'd become obsessed with a chatbot. And his last text to the bot, just before he died, showed that he believed ending his life on Earth would bring him closer to the bot. That's who I think of when I think of the drowning child. Abstracting that parable so far ahead in space and time

we risk losing sight of the drowning child right in front of us. In an attempt to save some future hypothetical children, some people in the AI industry have set out to build a good robot, a super intelligent AI, a magic intelligence in the sky, a machine of loving grace. But in selling that story and building those still very flawed systems to maybe save some future children,

they've invented an industry that's creating new ponds for children to drown in today. I wanted to pose all of this to Kelsey Piper after she pulled the Play-Doh away from her baby.

Kelsey was the person who was my introduction to the worlds of rationalism and effective altruism. I asked her, why ignore AI harms today for the sake of some future children?

There are lots of people who have this impression that you need long-termism or theorizing about the badness of humanity going extinct or, you know, drowning child-based philosophy to care about this. And I don't think you need any of that. She kindly hinted that maybe I've fallen down a bit of a philosophical rabbit hole.

caught in the intellectual debates of it all and lost sight of the actual technology we were supposed to be talking about. So one thing I did, which was super valuable, is I tried to form an opinion that wasn't about all of the social melodrama thing on the top of the AI scene. Like, just play with the AI models and think about, like, what can they do? What kinds of things, if they could do them, do I think that would be concerning? Now, that's an interesting thing to do.

Producer Gabrielle Berbey's ears perked up at this idea. She knows I am a sucker for social melodrama. And maybe the social melodrama has been my crutch to avoid having to form my own opinion and actually having to use the technology that does scare me. So what I want you to do first is I want you to open up ChatGPT. And I want you to say,

I'm going to give you three episodes of a series in order. Next time on Good Robot, we feed ourselves to the machine. Good Robot was hosted by Julia Longoria and produced by Gabrielle Berbey. Sound design, mixing, and original music by me, David Herman. Our fact checker is Caitlin PenzeyMoog. Our editors are Diane Hodson and Catherine Wells.

Special thanks to Larissa MacFarquhar, whose book Strangers Drowning was an early inspiration for this episode. And a quick note, Unexplainable host Noam Hassenfeld's brother is a board member at Open Philanthropy, but he isn't involved in any of their grant decisions. Noam played no role in the reporting of the series. If you want to dig deeper into what you've heard, head to vox.com slash good robot to read more future perfect stories about the future of AI. Thanks for listening.