The legal, political and emotional implications of describing the Israel-Hamas conflict as genocide.
From WNYC in New York, this is On The Media. I'm Brooke Gladstone. And I'm Micah Loewinger. Also on this week's show, do we have to pay attention to the latest Silicon Valley soap opera? For many, the phrase "OpenAI has fired Sam Altman" is like the phrase "Zendaya is Meechee" or "BibMe is now part of Chegg." A collection of words that carries no emotional or semantic weight.
Plus, the niche philosophy that's shaped the hearts and minds of AI researchers. People who are followers of EA, they play different kinds of games. There's one that we encountered. It's a variant of chess called Bughouse. It's all coming up after this.
On the Media is brought to you by ZBiotics. Tired of wasting a day on the couch because of a few drinks the night before? ZBiotics pre-alcohol probiotic is here to help. ZBiotics is the world's first genetically engineered probiotic, invented by scientists to help you feel like your normal self the morning after drinking.
ZBiotics breaks down the byproduct of alcohol, which is responsible for rough mornings after. Go to zbiotics.com slash OTM to get 15% off your first order when you use OTM at checkout. ZBiotics is backed with a 100% money-back guarantee, so if you're unsatisfied for any reason, they'll refund your money, no questions asked.
That's zbiotics.com slash OTM and use the code OTM at checkout for 15% off. This episode is brought to you by Progressive Insurance.
What if comparing car insurance rates was as easy as putting on your favorite podcast? With Progressive, it is. Just visit the Progressive website to quote with all the coverages you want. You'll see Progressive's direct rate. Then their tool will provide options from other companies so you can compare. All you need to do is choose the rate and coverage you like. Quote today at Progressive.com to join the over 28 million drivers who trust Progressive.
Progressive Casualty Insurance Company and Affiliates. Comparison rates not available in all states or situations. Prices vary based on how you buy. ♪
I'm Maria Konnikova. And I'm Nate Silver. And our new podcast, Risky Business, is a show about making better decisions. We're both journalists who moonlight as poker players, and that's the lens we're going to use to approach this entire show. We're going to be discussing everything from high-stakes poker to personal questions. Like whether I should call a plumber or fix my shower myself. And of course, we'll be talking about the election, too. Listen to Risky Business wherever you get your podcasts.
From WNYC in New York, this is On The Media. I'm Brooke Gladstone. And I'm Micah Loewinger. At the time of recording, the hostage swap ceasefire in Israel and Palestine has ended. Because, says Israel, Hamas violated the truce by firing rockets. Because, says Hamas, Israel began moving troops after refusing to release more prisoners. Because, says Israel, Hamas was still withholding women after promising not to.
And so it goes. — Once again, explosions are rocking Gaza and giant plumes of smoke are rising over that skyline. — And the growing outrage over a soaring death toll. Almost 15,000 killed, mostly civilians, in less than two months, according to the Hamas-controlled Ministry of Health in Gaza. — Meanwhile, the war for public opinion rages unabated as government officials, celebrities, academics and advocates reach for the heaviest words they can muster.
Hamas will pay for its crimes against humanity. Hamas will pay for slaughtering our people. War crimes do not constitute, and are not, an appropriate response to other war crimes. Pope Francis is facing backlash over his remarks at the Vatican. And he said the conflict had gone beyond war to become what he calls terrorism. The city of Richmond has become the first city in the country...
to adopt a resolution condemning the nation of Israel, accusing it of ethnic cleansing in Gaza during its war with Hamas. The conversations in the press and social media have slowly become dominated by one word:
The crime of all crimes. That's a man confronting CNN's Sara Sidner reporting in the West Bank on October 20th, a time when genocide, to describe the civilian deaths in Gaza, was still a rare word on the network's airwaves.
But as the death toll grew, so too did the protests, in part aimed at pushing the term into media coverage. New York Times Magazine's award-winning writer Jazmine Hughes resigned after signing an open letter condemning Israel's genocide in Gaza.
The move constituted a violation of newsroom policy. The director of the New York office of the UN High Commissioner for Human Rights resigning over the UN's response to the situation in Gaza, which he described as a quote, textbook case of genocide. The G word, yes, genocide. And it's that word, that term, that explosive accusation that we can't shy away from examining.
And look, it's not a word we should take lightly either. It's one of the worst crimes known to humankind. It's what the Jewish people were subjected to in the Holocaust.
The memory of the Holocaust, as MSNBC's Mehdi Hasan acknowledged last month, may help explain why members of the news media were initially reluctant to apply the word genocide to Israel's actions. Now the word is everywhere. In opinion pages, primetime chat shows, and protest chants, and not just to describe the destruction in Gaza.
Hamas' attack on October 7 is probably a genocide. You can't ignore Hamas' genocidal tendencies, especially towards the state of Israel. They've been very open and very clear about it since they were formed in 1988; their charter calls for the obliteration of the state of Israel. Hamas wants to kill Jews and drink their blood out of a boot. Louisiana Republican John Kennedy speaking Wednesday on the Senate floor. President Biden needs to show the world
that the United States of America will continue to stand with Israel until Hamas' genocidal agenda is abandoned. Genocide Joe has got to go! Genocide Joe has got to go!
"Protesters here in DC and New York, across the country, they've settled on a nickname for the president. They've been calling him 'Genocide Joe.' Do you have a response from the White House?" "We're not worried about nicknames and bumper stickers." John Kirby, a spokesperson for the National Security Council, speaking at the White House in November. "But this word genocide is getting thrown around in a pretty inappropriate way by lots of different folks."
Beyond its cultural and political implications, genocide has legal ones, with strict criteria applied by international courts. Ernesto Verdeja is the executive director of the Institute for the Study of Genocide and an associate professor of peace studies and global politics at the University of Notre Dame.
He says to understand whether genocide applies to violence in Palestine or Israel, we need to begin with its origins. The term genocide itself actually has its origins in the 1940s, when a jurist named Raphael Lemkin coined the term to try to really get at a particular type of mass atrocity, which is the targeting of civilian groups for their destruction.
And genocide gets developed in the United Nations Convention on the Prevention and Punishment of the Crime of Genocide in the early 1950s. And how does the United Nations define it exactly? As any of the following acts committed with the intent to destroy, in whole or in part, a national, ethnic, racial, or religious group. Civilians have to be targeted based on one of those characteristics.
And there are a whole set of different actions that can qualify. Killing members of the group, causing serious bodily or mental harm to members of the group, inflicting on the group conditions of life calculated to bring about its physical destruction,
as well as imposing measures intended to prevent births within the group. So basically preventing the reproduction of the group over time. And then finally, the forcible transfer of children from one group to another. The idea being here that over time, this destroys the integrity of the group as such. The challenge is the intention part. How do you prove intention?
It's not enough to say that they carried out mass killings, that they created conditions bringing about the destruction of the group, such as famine, et cetera, but rather that through those actions, they intended to bring about a genocidal outcome. I will add that for many scholars who are social scientists like me, and many other analysts, we actually find that definition of intention to be too constraining, too restrictive, and in ongoing atrocities, too hard to prove.
Well, can you give an example of something that you feel should have met the high bar of genocide but could not clear it because of this definition? So one example is what many scholars consider to be the genocide in Darfur, Sudan.
There was a special investigative committee put together by the United Nations, a group of very high-profile and very respected lawyers, who investigated the violence carried out against the civilian populations in Darfur, committed by the government and their proxy forces. And they found that there wasn't satisfactory evidence of the intent to destroy the civilian groups as such.
However, many social scientists essentially find that what happened in Sudan against the Darfur populations was effectively genocide. And the U.S. government effectively took that position too, right? It wasn't just the wonks. Correct. But again, one would have to prove that the leadership of Sudan intended to carry out the destruction of the civilian population in whole or in part. And their argument was that there was no evident plan.
If I may add one point to that, genocide is not simply a thing that one identifies. One day it's not genocide and one day it is genocide. Instead, genocide tends to emerge under conditions of enormous insecurity when perpetrators feel that their prior policies have been unsatisfactory to achieve their goals. So they amp up the violence, they expand the targeting, they continue to dehumanize opponent civilian populations, and they treat them as an existential threat.
Genocide is a process. It emerges often over time. And the reason why that matters is because it's rare to find a pre-existing plan of extermination that is then later implemented. Instead, it kind of happens in stages. I just want to ask you outright, do you think that the Israel Defense Forces' military action in Gaza could be legally deemed genocide? I believe Israel is on the brink of genocide, what they're doing in Gaza.
Proving it in a court of law would be difficult, because it's difficult to see whether what we're seeing carried out right now reflects something like an increasingly coordinated plan to destroy a part of the Gazan Palestinian civilian population as such. Again, if we were to use the legal definition here.
I think there's no doubt that the IDF, the Israel Defense Forces, have carried out widespread war crimes and crimes against humanity. I think it is very difficult to take seriously the IDF's claim that it is considering proportionality. Wait, what do you mean by that? They're willing to accept a high number of civilian deaths and mutilations of Gazans in order to achieve narrow military aims. A lot of it hinges on the indiscriminate use of bombing and airstrikes, the fact that they put Gaza under siege,
restricting access to food, to water, to electricity, medical aid, that they've used weapons which are banned under international law. Like white phosphorus? Like white phosphorus, exactly. When Israel bombs densely populated parts of Gaza, they often say, well, Hamas uses civilians as human shields. This has been their justification.
That's correct, but that is not a sufficient justification to kill very large numbers of civilians. You've said that it would be hard to prove intention, but members of the news media and other genocide experts, including a prolific scholar of the Holocaust, Omer Bartov, have pointed to a series of public statements from Prime Minister Benjamin Netanyahu, members of his cabinet, and members of the Israeli parliament.
On October 28th, Netanyahu referred to the Old Testament, Deuteronomy, when he said, you may remember what Amalek did to you. That's right. You must remember what Amalek did to you. Referring to a line in the Bible justifying the slaughtering of men and women, infants and sucklings.
On October 9th, the Israeli defense minister said, we are fighting human animals and we are acting accordingly. And the next day, as Bartov has written about, the Israeli army's Coordinator of Government Activities in the Territories, a major general, addressed the population of Gaza in Arabic, saying, human animals must be treated as such. There will be no electricity and no water. There will be only destruction. You wanted hell. You will get hell.
It's my understanding that what would make this hard to prove as genocide in international court is whether this all qualifies as intent. Why, in your mind, would this not qualify as intent? Let me kind of take a step back here. I want to emphasize that for the vast majority of scholars working on this topic and following this case—
The debate is actually quite narrow. The debate is over whether genocide is happening or whether it's about to happen. It's very, very narrow. It's an issue of kind of parsing out these different statements as starting to coalesce into a behavior that seems like a concerted set of plans of the destruction of the civilian population in whole or in part.
Now, obviously, some people are outliers and they'll say the IDF is not carrying out genocide, that's not even on the table, this is offensive, et cetera. But behaviorally, and also in a lot of the language coming out of the leadership and members of government, it's clearly pointing in that direction. The threat of genocide has been invoked by both Israel and Hamas. For instance, if we go back to October 7th, the day that Hamas killed some 1,200 Israelis, many of them civilians, that was seen as a heinous action stemming from Hamas's stated goals to commit genocide on Israeli Jews. So under the framework that you've laid out, how should we define that day? The October 7th assaults are war crimes and crimes against humanity. I think that would be recognized as such in a court.
Hamas has often used genocidal discourse, especially its leaders. They don't draw distinctions between the state of Israel and Jews. Often it's talking about Jewish people as essentially enemies, as threats. It's a kind of genocidal discourse. Hamas has used genocidal discourse since before the attacks on October 7th.
At least some scholars refer to the attacks themselves as genocidal massacres. That's language that goes back to the 1980s, roughly, from an earlier pioneering scholar of genocide studies. The idea is that these are massacres that have a kind of logic and intentionality of genocide built into them. But Hamas, for instance, is not capable of carrying out a sustained, widespread genocidal campaign against Israel. That doesn't let Hamas leaders off the hook for using genocidal discourse. It just means it doesn't have the capacity to do so. What Israel sees as the existential threat of Hamas is not simply that it has vowed to continue these attacks. It's that the group seems to want Iran, Hezbollah, and other Iranian-backed groups to kind of jump in and add firepower to this war, and that this is considered to be part of the looming threat that Hamas poses.
There's two ways of thinking about this, right? One is to say, well, because of that logic, that's a justification for the continued...
attacks in Gaza on civilian populations, if that's what it takes to destroy Hamas, then that's what you have to do. Another way to read it, though, I think is a little bit different. And that is to say, well, if Israel continues to carry this out, groups like Hamas and other extremist groups will continue to be emboldened and will continue to be supported. And effectively, what you have is a spiral of violence. It's not going to solve the security problem. This is what we, as political scientists, actually often refer to as a security dilemma.
A problem is something that has a solution to it. A dilemma is something where an action may seem rational on its own terms, but the outcome itself is highly irrational. So the rational thing to do is arm yourself more and use more violence to destroy the enemy. Well, the enemy is using the exact same logic, right? And it's being supported by external actors, whether it's the US or Iran or whomever.
It's a dilemma because you end up with an outcome of increased insecurity, more and more killing, more and more maiming, more and more suffering. So I tend to see this kind of logic not as something that serves as a justification for massive violence, but rather as a dead end. And this is precisely why we need another logic to replace it.
The Anti-Defamation League has stated that charges of genocide or ethnic cleansing or cultural genocide do not apply to the massive death tolls in Gaza, and that even invoking those terms cheapens them and does a disservice to the memory of the Holocaust, the Armenian genocide, the Rwandan genocide. What do you make of that position? I would say that it does not cheapen those histories. We are obligated to remember the Holocaust and the Armenian genocide and the Rwandan genocide and all of these other awful cases. But we also should not create hierarchies of victims. We should not say that, well, some civilians should be remembered and other civilian deaths and catastrophes and genocides don't have the same moral status. I think that's wrong. I don't think it cheapens it. If we take seriously ethics and morality, if we take seriously the idea of human rights as universal rights, we have to call out these atrocities wherever they happen and whoever commits them.
I do want to emphasize, and this is an important point, there's an enormous amount of antisemitism floating around in these debates, right? There's absolutely no doubt about that. And this is part of what makes the political discourse so fraught, and this is why the media are so important. It's necessary to have spaces where we can disentangle these different types of charges and these different types of claims, so we can really condemn antisemitism when it's happening, but also realize that Gazan civilian deaths need to be recognized, right?
The Anti-Defamation League's perspective on this is that a disproportionate amount of scrutiny is placed on Israel as opposed to other humanitarian crises that are taking place. And if I'm reading into it, that's what makes it antisemitic: that the actions of a Jewish state are held to a different standard.
Well, I think the actions of a Jewish state should be held to the standards of international law, right? I think it's correct that there's an enormous amount of attention placed on Israel. I think that's absolutely right. I would make two further points. One is that condemning the behavior of the Israeli state
is not categorically condemning all Jews. And we know that because over this past calendar year, we've seen massive mobilization against the Israeli state by Israeli Jews, right? So I would push back on this idea that condemning the behavior of the Israeli state and the Israeli armed forces somehow is, by definition, categorically a condemnation of Jews. I think that's wrong.
But more generally, we don't pay enough attention to atrocities committed in other countries. Who among us still talks about the treatment of the Rohingya in Myanmar? Well, only people who focus on that. Who among us focuses on the treatment of people in Tigray, in Ethiopia? We don't talk about it. We don't talk about Yemen anymore. We don't talk about Central African Republic anymore.
I think it's absolutely right. We should be focusing on these other cases. Powerful leaders in the West don't focus on them because they're not of geostrategic interest. That's what it comes down to.
What role does the media play here? On November 9th, there was an open letter published, signed by some 1,500 members of the media, advocating for news professionals to start using terms like genocide. The way we see it now in the press, it's quoted from experts like you, from advocates. It is not a normative statement. It is not used plainly in news copy. Should the press use it? I think the press should absolutely play a central role in talking about what the limitations are with a legal definition. Part of the reason why we see so many arguments and debates, and you see people not using the term sometimes or in other times using the term,
is because the legal definition is so deficient. And you have different groups talking at cross purposes. You have legal specialists using the legal definition of genocide, which is very different from what activists and advocates might use, which is different from what policy analysts might use, which is different from what scholars might use, et cetera, on and on and on. We end up getting caught in a particular logic of debate, which is: I call something genocide, and if you don't agree with me, if you say it's not really genocide, it's this, this, and this, then automatically you become a genocide denier. Or if you use the term genocide, then somehow you are an enabler of terrorism. You've called it a Rorschach test. Perhaps I should have said a litmus test in some respects. It's a Rorschach test in the sense that you see in it whatever you want.
And it's a litmus test in the sense that if someone doesn't agree with one's own evaluation or assessment, then they clearly are a defender of terrible atrocities. I feel that the focus on whether what is happening is genocide or not draws attention away from how to think through ending the violence and addressing the root causes of the violence. Ernesto, thank you very much. Thank you very much for having me. It's been a pleasure. Ernesto Verdeja is an associate professor of peace studies and global politics at the University of Notre Dame.
Coming up, what does the boardroom drama over Sam Altman and OpenAI actually mean for the future of the human race? Anyone? Anyone? This is On The Media. This episode is brought to you by Progressive. Most of you aren't just listening right now. You're driving, cleaning, and even exercising. But what if you could be saving money by switching to Progressive?
Drivers who save by switching save nearly $750 on average, and auto customers qualify for an average of seven discounts. Multitask right now. Quote today at Progressive.com. Progressive Casualty Insurance Company and Affiliates. National average 12-month savings of $744 by new customers surveyed who saved with Progressive between June 2022 and May 2023. Potential savings will vary. Discounts not available in all states and situations. ♪
This is On The Media. I'm Micah Loewinger. And I'm Brooke Gladstone.
Over the Thanksgiving weekend, phones of innocent bystanders started to buzz with mystifying push notifications about drama at the nonprofit-slash-startup called OpenAI. Breaking news, Sam Altman is out as CEO of OpenAI, the company just announcing a leadership transition, and it appears Altman was fired. Tensions had been building, in part over disagreements about the potential dangers of artificial intelligence. The board did not believe Altman was communicating honestly with them. Staff at OpenAI threatened to quit if they didn't bring him back. By Monday, Microsoft announced that they'd hired him. The CEO of OpenAI was ousted and now he's back in.
For those working in or closely watching the AI industry... The news on Friday afternoon that Sam Altman had been fired as CEO of OpenAI and his subsequent reinstatement late Tuesday night marked a world-historically shocking turn of events. Max Read is the journalist behind the Read Max newsletter, reading from his column "The Interested Normie's Guide to the OpenAI Drama."
reverberations from which will be felt for centuries, a turning point on the order of Brutus assassinating Caesar or Lowtax exiling moot from Something Awful. But for many others, the phrase "OpenAI has fired Sam Altman" is like the phrase "Zendaya is Meechee" or "BibMe is now part of Chegg" or "He was in the Amazon with my mom when she was researching spiders right before she died."
That is to say, a collection of words that carries no emotional or semantic weight. An example sentence that exists to demonstrate syntactical rules. Language that rolls off the brain like water from a duck's back.
So you open with a distinction between two groups of people. One group is working in or following AI closely, and they'll have found the aforementioned drama to be a world-historically shocking turn of events, while the second group of people, the majority who prefer to get on with their lives, would find it utterly mystifying or altogether uninteresting. So what questions about all this do you think that second group might be asking? I think they might start with who is Sam Altman and what is OpenAI exactly? This is a company that people who are following it, or even sort of vaguely familiar with the tech industry, will have heard a lot about. But it's still sort of unclear to me how much it's conquered the news feeds of regular people.
You observed that it's still capturing page-one attention from The New York Times, even while there are two wars going on, at least. Yeah. I write a newsletter that's largely about tech and the future. And so I'm obviously in the first camp of people, for whom, you know, the news that Sam Altman had been fired is huge.
And then I talked to some friends over the weekend who are themselves tech-interested, but not particularly up on things. And I hadn't quite realized how in a bubble I was, and how baffled they were that they had been getting push alerts on their phones from news organizations about this. And that when they went to the New York Times website, the top story was about drama at what is essentially a nonprofit, a CEO getting ousted by his board, which probably happens 10 times a day in the country.
That happened to me, too. I got a pile of alerts. You say that while the ruckus at OpenAI is juicy and kind of hilarious, it's not important. But it featured in the coverage of the actors' and writers' strikes, and people are worried about their jobs going away. So why shouldn't we care? Well, at the end of the day, the same guy's still there. They're still making the same products. Nothing's really changed but a few boardroom seats.
But even beyond that, OpenAI is just one of several different AI companies working on what is fundamentally pretty similar technology. And OpenAI may be the leading company in the sector, and it has the most advanced model and certainly the one that is most familiar to consumers.
But it's not clear to me that this company requires that level of microscopic focus. I mean, I think it's important to remember this is not a company that has ever run a profit. People think what it does is going to be very important for businesses in the future. But that also is resting on a set of assumptions: about the ability of the researchers who are developing these models and these applications to advance the models at the same pace they have in the past, and that the models will reach levels of accuracy that allow them to be used pretty seamlessly in business applications. It's not that I think all those predictions can't possibly come true, that they're wrong. It just is, to me, a very rickety set of assumptions to build a kind of top-of-the-fold, most-important-story-of-the-weekend narrative out of.
You were saying that part of it is that, you know, this company is yet to make a profit. But as someone who studies tech, you know that there are applications that can have world-changing consequences for years before they make a profit, because the venture capital people and others are willing to float them.
Yeah, I mean, OpenAI is pretty unique in Silicon Valley and in any sector, insofar as it was founded as a nonprofit dedicated to developing artificial general intelligence, which is to say, you know, a conscious computer mind.
Conscious computer mind. Yeah, I mean, the term AI has kind of been defined down, hasn't it? So if we imagine the purest possible, the real definition of artificial intelligence, we now call that AGI, artificial general intelligence. The idea that you have a mind inside a computer that can think consciously, that can rationalize, that can communicate. You mean it's self-aware? Yeah, self-aware. I just want to at least get into the ballpark here. This is an instructive line of questioning because nobody has space in the first three paragraphs of a news story to really define exactly what they mean by AGI.
Nevertheless, there is a sense among certain AI researchers, it's not even clear that it's a majority, that we are maybe on the cusp of reaching AGI of some kind.
And the impetus behind OpenAI was to put money and resources behind reaching AGI first, specifically to get to the point, however you might define it, before a bad actor would do so, an enemy government, a greedy private corporation. Right. That was sort of written into the mission of this nonprofit, but it doesn't make sense to me.
That was the argument over the atom bomb. We'll get there first. Did that mean that Germany or Russia or anybody else wouldn't have it?
Yeah, you know, I have to say, I think there's a lot of, speaking of narrative, there's kind of a lot of plot holes in the story of OpenAI. Even if the nonprofit structure didn't make sense from the outside perspective, it was a really good recruiting tool for good AI researchers because they could say to themselves, here's a place where I can do really cutting edge AI research and make some money, but I'm doing it in a good way.
In your newsletter, you offered a diagram of OpenAI's corporate structure that you found hilarious. It starts with a kind of flywheel structure around a nonprofit board, and then below that is the nonprofit, and below that the capped-profit company that's allowed to make some profit but not so much. And then what else is in that diagram?
You know, you have this kind of perpetual motion machine, a capped-profit company that makes money that gets reinvested into a nonprofit. And you can sort of look at that and say, sure, this basically makes sense. And then you turn your head a little bit and there's one other bubble in this org chart. And inside that bubble is the name of one of the biggest, most successful, and richest software companies on the planet, Microsoft.
And you begin to wonder, well, why is this company just hanging out here on the corner, when there's an arrow there that just says minority owner? What does that actually mean?
And my joke was, you know, when this first all came out and it turned out that a faction on the board had ousted Sam Altman, it felt a little bit like they hadn't really been thinking about what that extra bubble on the org chart might want. But then we found out that one thing that Microsoft wanted was Sam Altman, who they promptly offered to hire.
I think it's not an accident that, according to all reporting, the board members who fired Altman called Microsoft only one minute before the meeting where they were planning on firing Altman; they probably had a sense that Microsoft would object. And then Microsoft seemed clearly happy to provide Altman and some of the other employees who had resigned with the leverage of a job offer to push for his reinstatement.
This weird corporate structure and relationship to Microsoft gets at an inherent tension, you say, baked into OpenAI. The simple sort of version of this story, as it's often told in coverage of it, is that there's the Doomer camp, who believe quite strongly that AI and AGI are existentially dangerous, and therefore development of these technologies needs to be slowed. And then on the other side, you have so-called accelerationists, who believe we have some kind of duty, whether it's to humanity or to this future super mind that we might create, to damn the consequences and see what happens.
It's a fair enough description, in a really broad sense, of how people think about this stuff. And OpenAI is this sort of curious place because, as you can see from the mission itself, it could be seen as both a sort of Doomer organization, insofar as it believes that AGI is potentially dangerous enough that we need to make sure it doesn't fall into the wrong hands, and a sort of accelerationist organization, because it is trying to create AGI fast.
You write that the crux of the story is basically that AI safety and ethics was good and all until it got in the way of investors making money. And that's also how some of the commentators and reporters eventually put it. It was ethics versus profits, and profits won. Yeah. I mean, to me, the really interesting aspect of this is that Sam Altman is on the record as saying things about the dangers, even the existential dangers, of AGI. He testified in front of Congress and basically told a bunch of senators that AGI was extremely dangerous and it was really important that it be regulated. I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening. But we try to be very clear-eyed
about what the downside case is and the work that we have to do to mitigate that. One of the fascinating things to me about the Doomer story is that it tends to be really good marketing for AI. If you go around telling people that you're developing a technology that could kill everyone, obviously that's going to scare some people, but a lot of people are also going to say, wow, that must be a really powerful technology. I bet we can make a lot of money off of that.
That narrative is really useful right up until the minute that people who genuinely believe in that narrative, or in some version of it, are actually in charge and can say, hold on, you're going too fast. It's become clear that the Doomer narrative, as it's put forth by people like Altman, is mostly a marketing tactic and not necessarily the deeply held belief of somebody who's afraid. In this story...
What are the stakes for us, the normie people that you wrote your column for? I would say if you have some Microsoft stock in your retirement account, then this story might mean something to you. Otherwise, I think you can safely ignore it. How should we cover the stakes of AI? I have a lot of sympathy for editors and reporters who are challenged to choose between a juicy boardroom Game of Thrones story and the kind of
drier regulatory questions and concerns. But I do see public-interest journalism about the real-world effects of how artificial intelligence models get used out there.
To name a specific example: obviously, many of these models that we see are filled with all kinds of biases, because they're based on text and images that are scraped from the internet without a huge amount of attention paid to what sorts of bias they might reflect, which means that maybe the AI associates certain races, certain class positions, certain genders with certain negative qualities. And then an insurance company buys access to one of these models as a way of trying to determine what premiums should be.
That to me is a really good example of like an actual describable risk AI represents right here. And I'm very strongly in favor of democratic control over how AI is developed and how it works, whether that means regulation, whether that means more sort of public interest in the space.
I worry a lot that the sort of fervor over doomsday existential risk is a distraction from the more prosaic, more bureaucratic risks of this kind of thing. To the extent that you can ensure that your coverage of OpenAI and your coverage of Sam Altman's position within or without the company relates to those kinds of questions and concerns, then you might at least give readers some sense of how they should be interested. That's the immediate concern. That's where we should be starting.
Max, thank you very much. Thank you for having me. Max Read is the author of the Read Max newsletter and the column "The Interested Normie's Guide to the OpenAI Drama." Coming up, Sam Altman's firing highlighted the existence of a niche Silicon Valley group of true believers. This is On The Media.
This is On The Media. I'm Micah Loewinger. And I'm Brooke Gladstone. For those of you who actually could and did follow the musical chairs at OpenAI, you've most likely heard of this term: effective altruism. That's what many people believe led to Sam Altman's ouster. Some of the leadership adheres to a belief system called effective altruism. It's a pseudo-philosophical movement that has gained real traction in Silicon Valley. You may have even heard it when, just a few months ago, its most notorious advocate, the world's reigning crypto king, went down. Effective altruism took a hit when Sam Bankman-Fried, the most high-profile effective altruist and one of the movement's biggest donors, fell from grace. Sam Bankman-Fried is facing seven different counts related to money laundering and fraud. He very well could spend the rest of his life in prison.
Effective altruism, long before its adoption by the denizens of Silicon Valley, began its life as a niche philosophy brewed up in the halls of Oxford University in the aughts. It posited that we had a moral obligation to donate as much money as possible to evidence-based charities working to ameliorate global poverty.
But it took wings in Silicon Valley, where under its banner, the debate over how to protect humanity from the unbounded powers of artificial general intelligence, still in its infancy, took flight. Deepa Seetharaman covers artificial intelligence for The Wall Street Journal. Some of the loudest voices in that room are believers in effective altruism. Mm-hmm.
Altman was not a believer. Right. He called it an incredibly flawed movement with very weird emergent behavior. Like what?
People who are followers of EA, which is what they call it, tend to live in private group homes where they can engage in all these debates about AI and global health. They also play different kinds of games. There's one that we encountered. It's a variant of chess called Bughouse.
They have their own internal lingo and patois around how they're talking about these big global AI issues. A lot of people on the outside find it kind of alienating.
You did say followers tend to live in these group homes. It's a niche movement, to be sure. But the followers live all over the world in all sorts of academic communities. It doesn't strike me that it's like some kind of weird religion.
That's right. But in the Bay Area, particularly when you're talking about within the AI community, it is clubby and it is a little bit more insular than the broader movement.
You mentioned religion. I mean, I've heard it described as a religion, a value system and a structure. There are whole streets in Berkeley where most of the houses are full of people who work in AI and also belong to this movement. It's fair to call this the Silicon Valley variant. Yeah, I think that's fair. So,
EffectiveAltruism.org is one of the main websites promoting the philosophy, and it's posted a list of issues they want to prioritize, among them preventing the next pandemic. What else?
Climate change, animal welfare. But at the top of some of these lists are the existential risks posed by artificial intelligence. Because what you've seen over the last 12 months is a jump in capabilities for AI systems that a lot of these people didn't see coming.
I have met people that really believe there's an 80 to 90 percent chance that humanity is in peril within the decade. People who don't want to have kids because they think humanity is going to be so at risk that there's almost no point.
Effective altruists in the AI field believe that carefully crafted artificial intelligence systems imbued with the correct human values could lead to a golden age. So what are they most worried about?
What happens if we develop a system so smart that it decides to take control? That's one very extreme example. Another is that the destruction of humanity is almost a byproduct of an instruction we give it. So you might say to a very smart AI system, let's solve climate change.
And then the AI system will realize, hey, the main problem is humans. Let's just wipe them out.
There are also a lot of other concerns around how humans might use these systems. Maybe this gets in the hands of a terrorist organization or a rogue government who uses it to wreak havoc on the overall planet. I want to say, though, that there are a lot of critiques of these concerns, mainly that the current AI systems we have can't reliably add numbers. How are they going to take over the world?
So AI-induced annihilation, how that might come to pass, isn't really explained.
A lot of this is hypothesizing, and some of it is derived from research. There's a lot of people earnestly trying to understand what the harms could look like. Can you tell me a little bit more about that? It has something to do with what effective altruists call alignment. Right. Alignment basically means: can we get these systems to align with human values, to not just understand the explicit instructions we give them, but understand what they're not supposed to do? Brooke, have you heard of the paperclip maximizer problem? No. So this is something that EA people point to as an extreme example of why alignment is so important. The idea is, if you teach an AI system to build as many paperclips as possible,
this AI system might decide that all of humanity should be destroyed, because maximizing the production of paperclips is the most important task, and anything that gets in the way is a problem. This is sort of like the sorcerer's apprentice, I guess. Right.
The paperclip is sort of a symbol of doom in the AI community. It encompasses all the risks that could come with these AI systems, especially the unexpected ones. These AI systems need to align with human values. How do we technically get these systems to understand human intent?
One question that doesn't get answered through a lot of this research, though, is aligning to whose values. What are the values, beyond humanity should be safe and things should be good? Right. Some of the research into alignment, or superalignment, was led by a scientist on the OpenAI board named Ilya Sutskever. He voted to fire Altman and then changed his mind. His team was working on developing an AI scientist that would conduct research into AI. In other words, into itself. Exactly. And figure out ways to align itself with human values.
This is interesting work that will yield some really important results going forward. But there is a resource disparity at some of these companies between the EA ideas and the non-EA safety ideas.
OpenAI recently hired somebody to look at the role of OpenAI's technology in next year's election, which is a year away. But it is devoting a fifth of its computing resources, which is a big deal, to solving a problem that doesn't quite exist yet, whereas the election will. That, I think, illustrates the tension.
And you say these disagreements over effective altruism have filtered into every nook and cranny of OpenAI. It was a big part of Altman's ouster, when he was fired by the board the Friday before Thanksgiving. He and his co-founder, the company's chief scientist and board member, Ilya Sutskever, had been clashing over the tension between commercializing and safeguarding OpenAI's technology.
Two of the other board members have connections to effective altruism themselves. Tasha McCauley serves on the board of an effective altruism charity called Effective Ventures. And another board member, Helen Toner, who voted to remove Altman, is an executive at a DC think tank backed by Open Philanthropy, which is dedicated to EA causes.
And in October, she published an academic paper that touted the safety practices of one of OpenAI's competitors, Anthropic. She wrote that by delaying the release of its chatbot, quote, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur. Behind the scenes, OpenAI's leadership and employees were starting to get more and more concerned about being painted as, quote, a bunch of effective altruists.
You know, EA is polarizing for a lot of reasons within the AI community. But another way that it alienates people in Silicon Valley is that it seems to slow down or cast suspicion on growth and commercialization and all these things that have underpinned the Silicon Valley ethos for two decades. Move fast and break things. It wasn't just Facebook's slogan. I mean, it was a real religion itself within the valley. But especially in tech, the market has shown itself to be
a very poor guardian of human values and needs. Look at everything from the toxicity of social media to global warming. All of these things, with some tweaks and forethought, could have been ameliorated. I mean, listen, I think that there's a ton of evidence for what you're saying, but within Silicon Valley, they still buy it? There's the pull of money.
Recently, Marc Andreessen posted what he described as the Techno-Optimist Manifesto. In it, he wrote, and I am quoting here: we believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder. Oh, God. Yeah.
I mean, this is the tension right now, right? You know, there are people on one side that say we should slow everything down and be really careful. Other people say, okay, we should be slow and we should be thinking about all the risks, not just existential risks.
And then there's this group of people that think both of those groups are complete idiots. And they are preventing humanity from embracing the cornucopia that will come with AI.
And I think a lot of the effective altruists, they're kind of caught in the middle of this debate because they become this central force within the AI community that everybody takes issue with. It's incredibly polarizing and it's incredibly powerful. So effective altruism is still extremely niche, even within Silicon Valley circles, right? Mm-hmm.
Why should most people not engaged in this philosophy care about it? If you've turned on the TV in the last year and heard somebody describe the existential risks that could stem from artificial intelligence systems, or the prospect that one day humans could be to machines what animals currently are to humans, you're listening to somebody who is engaged with EA ideas or ideas that are adjacent to the effective altruism community. The fact that we're talking about existential risks at all, that is a credit to the effective altruism community. They're talking to policymakers, getting on air, writing op-eds, and they're being heard. They're in the room.
It is a niche philosophy. Nobody needs to learn how to play Bughouse just yet, but it punches way above its weight in terms of influence in shaping the discourse and rules and regulation around artificial intelligence. Deepa, thank you so much. Thank you, Brooke. This has been such a pleasure. Deepa Seetharaman covers artificial intelligence for The Wall Street Journal.
That's it for this week's show. On the Media is produced by Eloise Blondiau, Molly Rosen, Rebecca Clark-Callender, and Candice Wang, with help from Shaan Merchant. Our technical director is Jennifer Munson. Our engineers were Andrew Nerviano and Brendan Dalton. Katya Rogers is our executive producer. On the Media is a production of WNYC Studios. I'm Brooke Gladstone. And I'm Micah Loewinger.