Who are the Zizians?

2025/6/30

On Point | Podcast

People
Debra Becker
Max Read
Sonia Joseph
Topics
Debra Becker: The Zizians are a cult-like group linked to a string of violent deaths across the United States. The group has ties to the rationalist movement and to AI research, raising questions about its beliefs and influence.

Max Read: Ziz, the group's founder, was initially involved in the rationalist movement but split from it after concluding that the rationalists weren't taking the AI alignment problem seriously enough. Ziz protested a rationalist gathering, was arrested, and disappeared for a time, later resurfacing in connection with a series of violent incidents. Rationalism in Silicon Valley is a self-help movement that gathers people interested in AI and long-term thinking. It is characterized by shedding conventional wisdom and morality, pursuing abstract philosophical games and experiments, and living according to the conclusions drawn from them. The Zizians may have taken veganism to an extreme, believing that killing people to protect animals is justified, or merely collateral damage. Ziz held that each person's brain has two hemispheres containing different personalities, which can be separated and given independent awareness through drugs or sleep deprivation. Ziz believed that only a few people, herself included, are "good" in both hemispheres, meaning they recognize the personhood of animals. The Zizians are a small group; many members are dead or in prison, and there is currently no sign of a new leader or of further criminal activity.


Transcript


Now more than ever, Lowe's knows you don't just want a low price. You want the lowest price. And with our lowest price guarantee, you can count on us for competitive prices on all your home improvement projects. If you find a qualifying lower price somewhere else on the same item, we'll match it. Lowe's. We help. You save. Price match applies to the same item at its current price at qualifying retailers. Exclusions and terms apply. Learn how we'll match price at Lowes.com/lowestpriceguarantee.

Support for this podcast comes from Is Business Broken? A podcast from the Mehrotra Institute at BU Questrom School of Business. A recent episode explores how the current political environment is impacting businesses on the ground. Stick around until the end of this podcast for a preview. WBUR Podcasts, Boston.

This is On Point. I'm Debra Becker, in for Meghna Chakrabarti. A cult-like group known as the Zizians is believed to be behind a string of violent deaths across the United States.

On New Year's Eve in 2022, Rita and Richard Zajko were shot and killed in their home in suburban Philadelphia. According to court documents, a Ring camera captured audio of what sounded like shouting of "Mom," then, "Oh my God, oh my God." The couple's daughter, Michelle, was questioned in the homicide, but never charged. Michelle is linked to the Zizians. She was later arrested with the founder of the group, Ziz.

Earlier, in November of 2022, in California, a landlord named Curtis Lind was stabbed in the chest with a sword.

His friend, Patrick McMillan, told authorities that Lind was stabbed in a dispute over unpaid rent. McMillan says his 80-year-old landlord had been brutally attacked by other tenants of the Vallejo property, whom Lind was in the process of evicting because they hadn't paid rent in years. Now, Lind survived that attack, but he was stabbed again in January of this year, this time fatally.

Also in January, a Border Patrol agent in Vermont was killed in a shootout during a traffic stop involving two people. Authorities say the agent was killed in the line of duty yesterday on Interstate 91 in Coventry. As you see here, it's about 20 miles south of the Canadian border. The FBI says one suspect in the shooting was also killed and a second suspect, a U.S. citizen, was injured and taken into custody.

The people allegedly involved in all of this violence have ties to a transgender woman blogger who goes by the name of Ziz. The group appears to have been nicknamed the Zizians by an anonymous blogger. The Zizians have roots in Silicon Valley AI research and a community known as the Rationalists. So, who are the Zizians? What do they believe? And how are they connected to Rationalists in the world of AI research?

Max Read joins us now to answer some of those questions. He's a journalist and author of the Substack newsletter, Read Max. Max, welcome back to On Point. Hi, Debra. How are you? I'm okay. Let's start with the Zizians, the group, and we should say they may not call themselves that, and the group's founder, known as Ziz. Tell us a little more.

So Ziz comes from Alaska. She went to school in Fairbanks and moved to the Bay Area in 2016. And she wanted to find a job in the tech industry and become involved in what's known as AI safety or AI alignment. We've probably all heard these terms more recently because of how big the AI industry has become, but at the time it was a much more sort of niche concern,

the main idea of which was that as we were creating more and more powerful AI systems, we might one day create a system so powerful, so intelligent, so conscious, that we have an obligation to ensure that it's aligned with human values, so to speak. This was at the time, and still is, a main concern of what's known as the rationalist movement, or what tends to call itself the rationalist movement. And she became involved with a bunch of people who were interested in similar ideas.

She would go to lectures, attend workshops, talk to people. At some point, she splits away from the rationalists. Essentially, she starts to believe that they're not taking the AI alignment problem seriously enough, and that there are PR issues involved with some of the people leading the movement.

And she first sort of gets on a lot of people's radar when she protests a gathering of rationalists with a bunch of her friends, who we would call the Zizians, though I don't think they would call themselves the Zizians. They're all wearing Guy Fawkes masks in the manner of Anonymous, and handing out flyers instructing people to spurn the rationalist leadership. They get arrested. They post bail. This is just before the pandemic happened.

So the courts are moving extremely slowly, and they kind of disappear, so to speak. There aren't a lot of records of what's happening for the next few years. Then we get to this altercation in Vallejo, where the 80-year-old landlord, who owned a piece of property where he was renting out space for people to park RVs, a lot of artists and programmers,

had finally confronted a group of people who were living on this property and who hadn't paid rent in years, and they end up stabbing him, apparently from behind, with a samurai sword. He shoots one of the people, a woman named Emma Borhanian, who dies. Another, who goes by the name Somni and who actually stabbed Lind, survived. As it turns out, Ziz, whom nobody had really heard from for a while, and whose lawyer had previously said might in fact be dead, was still alive and present at the scene.

The cops, for whatever reason, don't actually arrest Ziz. They drop her off at the hospital because she says she's in pain, and she sort of disappears again, only to pop up a few weeks later in Pennsylvania, where she is arrested for obstruction of justice and disorderly conduct in connection with the homicides of the Zajkos, whose daughter, Michelle, who went by Plum online, is a friend of Ziz.

Ziz gets released again and is largely unheard of again. Then, as we've been hearing, in January, Curtis Lind is stabbed and killed, allegedly by a data scientist named Maximilian Snyder, who, as it turns out, had previously applied for a marriage license with a woman named Teresa Youngblut, who happened to be one of the people involved in this shootout in Vermont. So at this point, there's enough sort of...

evidence of Ziz's involvement in a number of these crimes for her to be arrested, at which point she is, and she remains in jail as of this moment. You need a chart here to try to keep track of everybody. It's very much a corkboard-and-red-thread kind of case. But I guess, what is really sort of the big or main theme here connected to these killings? Like, why?

You know, rationalists, if they are, in fact, supposed to use scientific thought, right, to improve the world and to do good, how do we go from that to Zizians who are clearly using violence? And how is that all sort of interconnected? Do you know?

Yeah, this is a good and sort of difficult question. I mean, it seems worth noting that the rationalist movement, so to speak, what is called rationalism in this context, in the Bay Area and around Silicon Valley, is maybe not identical to the rationalism of, say, René Descartes or, you know, the philosophers who might have called themselves rationalists. It's in some ways a sort of self-help movement and in some ways a kind of

gathering place for people with particular interest in AI and other kinds of big, long-term thinking. And a characteristic feature isn't simply using reason to live a better life or to, you know, figure out what a good politics might be, but to kind of

unburden yourself of the conventional wisdom or morality that is irrational and therefore holding you back from understanding the true best in life. And I think that means, in practice, sort of pursuing these very abstract philosophical games and experiments and then trying to live your life in concert with the conclusions that you draw from those.

So, maybe a less difficult version of this or a less fraught version of this is what eventually came to be called effective altruism, which is the sort of philanthropic idea that the main goal, if you're giving money away, if you're donating to charity, is to save as many lives as possible. And if you crunch the numbers and look at all the different ways that you can do that,

It turns out that buying mosquito netting to prevent malaria in sub-Saharan Africa is, in fact, dollar for dollar, the best way to save lives. And I don't mean to interrupt you, because we won't get into all of this, but I'm still trying to figure out what kind of extremes the Zizians, this sort of subsect of rationalism, have gone to, and why. What would have led them to do that?

I mean, the answer to this is essentially veganism. I mean, there's like three or four different ways you can answer it. Yeah. I mean, think about how strongly a person might come to believe in the crime that is factory farming or killing animals, if you believe that they are sentient and can feel pain. And if you take this idea seriously, if you take it so seriously that you rearrange your life around it, you might begin to believe that you're justified in killing people in order to stop it, or that any lives you take in pursuit of your goal

are collateral damage at best in what is ultimately a righteous cause. I think it's a big jump from vegan to killing people. I don't know. Well, I mean, I wouldn't want to defend the Zizians. Okay, okay. So as you said, Ziz sort of took these ideas to the extreme anyway, or at least defended them that way among members of the so-called Zizian group. But I wonder, were there other things that

suggested perhaps that this went way beyond AI, that these ideas and the philosophies that she was espousing really were about more than artificial intelligence or the fate of humanity. It was really about the nature of human beings, right? Yeah. So Ziz had this idea that...

I mean, you know, we want to preface all this. This is all sort of happening on blogs. Like Ziz is writing these blogs that espouse these long philosophies that I think you or I, if we read them, would say, this sounds crazy. And I'm about to say it, and I suspect you will think to yourself, this sounds crazy. But Ziz's idea is that everybody has sort of...

there are two separate hemispheres in your brain, and these hemispheres contain different persons or personalities, almost like dissociative identity disorder. And these hemispheres can be debucketed and given independent awareness, so that they can each have their own sort of identity and personage. And you could do this by taking hallucinogenic drugs. You could do it by experimenting with sleep deprivation, essentially.

And Ziz has this kind of moral hierarchy of people, where some people are good only in one brain. And by good, she means they recognize the personhood of animals. Some people are, in fact, good in both brains. And it's extremely rare to be good in both brains. And Ziz is one of very few people who is good in both brains. Of course. So as you said, Ziz is in prison. Is it still a group? Is it still a force? Are the Zizians still with us?

As far as we know, not really. I mean, this was always a relatively small group of people who were ever involved, and many of them are now, frankly, dead or in prison. You know, one of the interesting things about trying to track this story is that, for a group most of whose activity was online, the members and Ziz herself have managed

to keep quite low profiles. So it's actually a little bit hard to know exactly where every single member is at any given moment, what they're up to, what they're doing. But it doesn't seem like there has been a replacement leader, let's say, or that other members of the Zizian group have decided to carry on with more criminal activity in the wake of Ziz's imprisonment. Okay, we're talking about a group known as the Zizians. When we come back, we'll talk more about

the Zizians' connection to Silicon Valley and to artificial intelligence. I'm Debra Becker. This is On Point. Support for this podcast comes from Is Business Broken? A podcast from the Mehrotra Institute at BU Questrom School of Business. In a recent episode, the head of the Massachusetts Business Roundtable weighs in on tariffs, DOGE, and more. There's a philosophy there that there is waste and some of this stuff is wasteful.

And we need to address that. That's more difficult to argue with than taking the chainsaw to it. Follow wherever you get your podcasts and stick around until the end of this podcast for a preview of the episode.

Invest in yourself and your team at the Massachusetts Conference for Women on December 3rd. Join WBUR and make connections, discover new opportunities, and leave with skills and strategies to benefit you and your whole team. Don't wait. Register now and be inspired with speakers like Simone Biles, Mel Robbins, Diana Nyad, and more.

You won't find these insights or connections anywhere else. Join us at the Massachusetts Conference for Women, December 3rd. Get your tickets at maconferenceforwomen.org.

The Zizians group has roots in Silicon Valley and its AI research community, and in another group that calls itself the Rationalists. So we want to take a minute now to take a closer look at who the Rationalists are. One of the leading thinkers in rationalism is Eliezer Yudkowsky, who's known for his warnings about the dangers of AI. On the Robinson Erhardt podcast last month, Yudkowsky explained that he's worried that AI will kill us all.

AI companies keep pushing and pushing on their AIs to get smarter and smarter. They get to something eventually that is smarter than us, that can kill us, that is motivated to kill us, not because it inherently wants us dead, but because in its best universe, the one where it gets the most of what it wants,

all the atoms are being used for things that are not running humans. In fact, Yudkowsky is now so concerned about superintelligent AI making humans extinct, he says AI shouldn't be built at all. He recommends authorities take steps to restrict what are known as GPUs, graphics processing units, a technology that's used in training AI. Have an international clampdown on the GPUs, not in any one country.

This is everyone's problem. I mean, the basic description I would give to the current scenario is, if anyone builds it, everyone dies. You need to not build it. You're not going to solve the alignment problem in the next couple of years. So, Max Read, before the break, you were talking about rationalists and how the Zizians were really sort of a subsect of rationalists. Can you explain how this idea of an almost murderous artificial intelligence fits in with rationalist thought?

Yeah, I mean, I think this is the most prominent of the philosophical experiments that rationalists like to run with themselves. And the experiment goes something like: you know, AI is progressing. It will eventually progress into a superintelligence. If that superintelligence doesn't properly share human values, it could accidentally or on purpose

kill us all and destroy the entire world. And I recognize personally that there are a number of leaps of logic in that step-by-step description of what's happening. And I think that rationalists would insist that I'm oversimplifying it, and they're right to some extent. But that is the basic direction there. And because the scale of the Armageddon being imagined is so huge, the quest to align AI,

to ensure that AI is safe, or, as Yudkowsky now believes, to not build AI at all, crowds out any other concern or any other idea of what needs to be done in the world. So how many rationalists are there? I mean, I've seen some suggestions that there are hundreds of thousands of people who are involved in this kind of thinking and considering the implications, and we should say not just of artificial intelligence, but of other problems of the world as well.

Yeah, I mean, hundreds of thousands is not a crazy number to put on it if you include the rationalist movement in the broadest sense. And, you know, I think there are a lot of people who are strongly influenced by rationalist thinking or by the rationalist movement who absolutely don't agree with the kind of AI apocalypse scenario that Yudkowsky likes to put forward.

And, you know, rationalism is not a church. It's not a membership organization. It's a few websites and a few nonprofits that people gather around, and there are meetups in cities all over the place. But I think the sort of real hardcore of rationalism, which is mostly concentrated in the Bay Area, that

attends the workshops and goes to the seminars and is deeply involved in debates on these websites, is much smaller than hundreds of thousands. You know, I would cap it at a couple thousand people at absolute most, and probably fewer. And yet we do know that big names in the AI community, Peter Thiel, Elon Musk, right? They've hired from within the rationalist community. They've spoken at rationalist events and done things like this. So

is rationalism almost a look at the psyche of Silicon Valley, or was it? I mean, could you call it that? Yeah, I think so. I mean, you know, maybe the most prominent example of the influence of rationalism is that OpenAI was started as a nonprofit intended to build an artificial intelligence that was aligned with human values, not as an explicitly rationalist project, but certainly as one in line with rationalist values.

And I think the story of OpenAI as it's progressed from this nonprofit to Sam Altman now trying to turn it into a profit-making company much more similar to an Apple or a Facebook is a little bit reflective of...

the ambivalent attitude I think that many Silicon Valley titans have towards rationalism and towards AI. You say, is it a look at the psyche? You know, this sci-fi story that Yudkowsky is trying to tell about AI conquering the world, if you switched out a few proper nouns there, is also a sci-fi story about software conquering the world, is also a sci-fi story about capitalism conquering the world. And to me, I think there's a kind of projection

going on, you know, that people are attracted to these stories because they can see a version of it happening somewhere, but have trouble sort of facing the way that it's happening already. But these are very prominent people, though, that we're talking about, who are, we could at least say, familiar with, and perhaps adherents of, these beliefs.

Yeah. You know, again, this is an interesting kind of shift that's been going on in Silicon Valley for the last few years, where there are some very prominent people, especially AI researchers, who are fully bought-in rationalists, who really truly believe in the idea of an oncoming AI singularity that could in fact be disastrous to the human race.

And for a long time, I think that has been a useful fiction for people like Altman or Microsoft CEO Satya Nadella or Elon Musk, for that matter. It's almost like a marketing tool. You know, we're so close to creating

a god in our computers. It feels science-fictional and futuristic and crazy. Now that we're at a point where maybe we can start making money from AI, and you have all these guys flapping their gums about how it might actually kill us all, it's become very inconvenient for rationalism to have as prominent a place as it once did. So you see, you know, corporate drama like at OpenAI 18 months ago, when Sam Altman was briefly forced out over precisely these kinds of debates and discussions.

So, you know, rationalism is still enormously influential in Silicon Valley, but I think its influence is maybe waning a bit or it's shifting a little bit as artificial intelligence becomes more prominent as a profit center for the industry. Now, you mentioned the role of effective altruism before the break. So I'm wondering if you could tie that in a little bit for us and explain. Is that basically the money backer of a lot of rationalist thought?

To some extent. You know, effective altruism, as I think I was saying before the break, has branches that are much more focused on things like mosquito nets to prevent malaria, really specifically trying to save as many lives as possible per dollar. But there is a kind of heavily abstract version of it, maybe most famously associated with the philosopher Will MacAskill and his idea of what's called long-termism, which is effectively that you should be –

putting your money and your charitable and philanthropic efforts towards not just people who are alive today, but people who will be alive in the future, that those people deserve our consideration perhaps more than the people who are alive today. And again, this is a similar sort of abstract philosophical game that ends up with real-world consequences. Sam Bankman-Fried sort of famously

believed in what's called earn to give, which is that you should make as much money as possible in order to donate as much money as possible and channel it towards your preferred effective altruist charities. And, you know, it's not that much of a leap to take that seriously and then decide, well, what's a little bit of fraud if it means I'm making more money in order to donate more

to even better charities, to further these goals. You know, as in any form of utilitarianism, you run up against the question of whether or not the ends justify the means very quickly. And for a lot of people like Sam Bankman-Fried, the ends very clearly justified the means.

And so when we were talking about this group, the Zizians, explain to me then the split there. Was it because, even in those years when the Zizians started gathering some momentum, it was already clear that rationalism was sort of waning? Or not necessarily? No. So around the time that Ziz really broke off – so I would say the

kind of hedge to all this is that the really hardcore group of rationalists is a tightly knit and, I would say, emotionally intense group of people. And I think that to some extent splits like this happen just because personalities simply don't align. And, you know, given what we know about Ziz, it's maybe not surprising that she didn't align with a lot of other people, that she had trouble meshing and gelling. But really specifically, her initial complaint, around the gathering

where she protested with her friends and followers and handed out pamphlets, was the sense that there was a sort of rot at the heart of rationalism, which was connected to what were at the time quite serious rumors about sexual assault, rape, even pedophilia, sort of at the upper echelons of the organization.

And interestingly, Ziz was not quite saying, this is all rotten because these people are creeps and predators and awful. She was saying, the fact that these prominent people are being accused of being creeps and predators is going to undermine our movement.

This is a complicated issue, but it has come out since then, through a lot of meticulous reporting, that there does seem to be a kind of endemic sexual harassment problem in rationalism, that there are a lot of people who are being preyed on in one way or another. And I think that has maybe contributed a little to the fraying that has occurred and the waning of the influence. Yeah.

And so have there been other split-off groups besides Zizians that have also said there's a problem here and I would like to take some of these tenets and continue to work on them but in a different way with different leadership perhaps?

Yeah, I mean, I think definitely there are lots of breakaway groups, or people who maybe would still call themselves rationalists but, as I said, don't buy Yudkowsky's whole AI thing. There are also a number of groups that could be sort of credibly accused of being cults or cult-like in the same way the Zizians are. There have been a few different essays written about this.

You know, in 2021, a woman named Zoe Curzi wrote a post about a nonprofit called Leverage Research, which was a sort of rationalism-adjacent group that featured what they called debugging sessions, where they would articulate demons inside their psyches and then flush them out of their systems using so-called debugging tools. They were expecting to take over the U.S. government. That sounds

very much like a cult to me. A woman, Jessica Taylor, who has also written a lot about the Zizians and who was involved in two of the most prominent rationalist groups, the Machine Intelligence Research Institute and the Center for Applied Rationality, wrote about her experiences with those two groups, again involving these sorts of debugging, deprogramming ideas, feeling like she was being isolated from her friends and family because she was working on these cosmically high-level problems.

There's a person named Michael Vassar who has been accused of sort of cultivating a cult-like atmosphere around himself. So, you know, you don't want to over-rely on the idea that these are really specific sects or cults that are creating armies and have bunk beds in some basement somewhere. But there's obviously some dynamic

at the very heart of the rationalist movement that gives rise to these kinds of cult-like formations. Right. And what dynamic is it? Would you say it's a psychological dynamic almost of the members who sort of think of leaving conventional societal thought and come up with their own ideas? And there may be some weaknesses there that are exploited or manipulated. I mean, how do you describe what happens here? Yeah.

Yeah, I mean, I think that rationalism itself, the tenets of rationalism tend to encourage a susceptibility to cults. You know, it's the kind of thing you say, if you want to be a rationalist, you need to be

curious. You need to want to improve yourself. You need to be insecure about your own sort of epistemological, ontological frameworks. You need to believe that there maybe is a higher, different, better, rational truth that you don't have access to yet. You need to be able to pursue that. And then also you need to be able to follow that, you know, the conclusions that you reach. You need to be able to follow them to their fullest extent, so to speak.

And again, these are all, you know, sort of qualities that I think, in the abstract or in the individual, we might say are good. It's good to be curious. It's good to, you know, be epistemically humble and to say, I don't really know everything. But you take it all together, and you put it in a place, around a bunch of people who feel like they've really figured it out, who feel like they understand what's coming, and you've kind of got a...

breeding ground for this kind of cult; you've got a bunch of people who can be talked into things. You know, and I think there's a sort of historical context here, which is that California, and especially the Bay Area, has for a long time been a breeding ground for seekers and searchers who come from all over the country, who arrive somewhere in these sort of vaguely self-help, vaguely political, vaguely whatever groups that turn out to be

indistinguishable from cults, more or less, and sometimes quite violent cults. So I think that, you know, part of what's going on, I don't think it's an accident, for example, that Ziz's group seems to be composed of a majority of trans people. Not to say that, you know, this is somehow inherent to trans people, but if you run away from home, maybe because your parents don't really accept who you are, and you arrive in a new place,

you're struggling with questions of identity. You're struggling with questions of belonging and family. You're very intelligent. You're a programmer. You're interested in computers. You're interested in philosophy. You can see how, you know, finding yourself in the wrong place at the wrong time, you might end up with a bunch of people who are going to take advantage of you. So you think that that's a common characteristic here. It's a characteristic of the group members more than anything else.

Yeah, I think that's true of maybe the people who are susceptible to this. I think the other version of that is people who probably always would have been a cult leader of some kind. You know, the movement is also very friendly to domineering, charismatic,

I-know-what's-right kinds of people who can walk in and command attention in that way. And I think that it's the combination of the psychological profile with the particular tenets of belief that sort of is

a difficult brew to kind of be a part of and not find yourself falling prey to cult-like ideas. Right, but typically doesn't there also have to be a fear, right? A fear of something happening if you don't go along with this kind of thinking. And so is the fear the destruction of the world by AI? And is that a concept that everyone believes and really wants to work hard to avoid?

I think that's the main one, yeah. You know, I think with Ziz, the fear is also about, I mean, is about animal genocide, effectively, animal omnicide. There's a fear that we're all participating in some unbelievable crime against sentience that we need to push ourselves out of. But some version of the AI fear, I think, is the dominant fear

around the campuses, around the sort of seminars and groups that we're talking about.

That fear also has a sort of positive side, which is the belief that what you're doing is so important that you can put everything else aside. You can put your family and friends and previous connections and job and everything else aside in order to focus on this thing that's going to destroy us all. We're talking about rationalism and its role in AI research in Silicon Valley. When we come back, we'll hear from someone who works in AI research about her experience there.

with the rationalists. I'm Debra Becker. This is On Point. Support for AI coverage in On Point comes from MathWorks, creator of MATLAB and Simulink software for technical computing and model-based design. MathWorks, accelerating the pace of discovery in engineering and science. Learn more at mathworks.com.

Quick word about a show we're working on for later this week. We're looking into the best ways to live healthier for longer. And we want to know how you're thinking about your lifespan. What are you doing now to live healthy well into old age? When did you start to adjust your routines?

Plus, what questions do you have about longevity? We'll put those to super aging expert, Dr. Eric Topol. Share your experience by recording a message in the On Point VoxPop app. If it's not on your phone already, just search for On Point VoxPop wherever you get your apps.

You can leave us a voicemail as well at 617-353-0683. That's 617-353-0683. We're talking this hour about the debate over AI research and rationalism. It's a movement focused on protecting society from various things, including runaway artificial intelligence.

Now, this idea of rationalism was introduced to Sonia Joseph when she was just 14 years old. That's when she stumbled on an online post

by Eliezer Yudkowsky called Harry Potter and the Methods of Rationality. So it's the same setup as the original Harry Potter, except in this version, Harry is supposed to be like a hyper-genius. He's like a child prodigy. So he uses his intellect, but also these principles of rationalism, to make his way about the wizarding world. And it's very much like a fiction that values

reason, agency, intellect, like thinking from first principles. So you will have these experiments where like Harry and Draco will try to run these scientific studies as to whether muggle-borns are actually inferior at magic. And like they discover that they're not.

Sonia loved the fan fiction and she wanted to learn more. When she got to college around 2013, she found a rationalist community that met online and in person to talk about AI research and other ideas. So she started going to rationalist get-togethers.

After she graduated, Sonia moved to San Francisco for a job in AI research. A lot of her roommates were rationalists. One of the houses I stayed at, it was a Victorian mansion near Alamo Square. You enter and there are all these rooms, and there's often like a common area, and people will gather on the couches in the common area and all talk about AI killing us.

A lot of the roommates would work at the two major AI labs there. These houses would often become like professional networking grounds for breaking into AI. We'd invite speakers to come over and like give talks. We would often like host parties. There would often be like drug use at these houses, but it's always framed under like we're going to use LSD or ayahuasca to like

explore consciousness. It's not framed in like a party animal rave kind of way. It's framed in like an intellectual, cerebral way. And this world, Sonia says, can become all-consuming. So if you're living in a house with a bunch of other AI researchers who all believe that we're going to die in two years, and that's like the only thing you're surrounded by, it's like 24/7, there's no escape. And like work and life are very blurred. And of course you have like billions of dollars flowing into this ecosystem. So like power dynamics and like relational dynamics

all get blurred up in a way that I think is unhealthy at best. And at worst, it can actually lead to these strong cult-like dynamics. Even though she spent a lot of time with rationalists and takes AI safety seriously, Sonia says she considered herself outside of the rationalist community.

She says she didn't like the way she felt and the way she and other women were treated by the community. For example, she met one rationalist who told her that he was the inspiration for the Voldemort character in the rationalist Harry Potter fan fiction.

He would go on to say some pretty concerning things to me when we got dinner. Like, you need to think from first principles. If you think from first principles, a lot of human society doesn't make sense. Things like, the age of consent is actually way too high. Relationships with young girls, 12-year-old girls, are actually normal. It's like a normal transfer of knowledge. Stuff that I found very morally concerning. But I think there's a stereotype among certain libertarian spheres, which overlap with rationalist spheres, that

you can kind of re-derive morality, and a lot of things that we take for granted as, like, human rights become open for debate again. Eventually, Sonia decided she had to get away from Silicon Valley. Now, she tells us that she wanted some distance from some of the, quote, culty behavior.

So she moved to Montreal, where she's now a visiting AI researcher at Meta and working on a PhD. She says many rationalist ideas have become well-known in the past few years, but for her, they don't have the same appeal. Because like 10 years ago, it was so niche.

It's like, mom, I'm talking to this like fringe cult on the internet. But all these ideas have become so mainstream just because AI has become mainstream. And in my opinion, this is like largely a good thing because it's harder to be culty when you're so mainstream. But I often do miss the esoteric aspect of it. It's like,

It was like being part of like a secret society or something. That was Sonia Joseph. She's an AI researcher who waded into the rationalist community. Max Read is with us today. He's a journalist and author of the Substack newsletter, Read Max. And Max, I wonder, do you think that's sort of a typical experience that we just heard from Sonia there, about getting involved in this community and perhaps becoming disillusioned by it?

Yeah, I think it is. You know, as I said, especially around the pandemic, there were a number of people who came and wrote pieces about their experiences. And I think especially young women who became involved and found themselves subject to exactly the kinds of stories that you just heard

recognized that maybe there wasn't quite the sort of pure pursuit of rational ideas at the heart of what was going on, but the same kinds of messed up social dynamics that they were experiencing elsewhere. I know we spoke a little bit about this before the break, but what about...

the psychology here, this idea of perhaps feeling special in a way. It was in the Harry Potter fan fiction, this idea that you're special, you're chosen to help save the world from this awful thing. I wonder how that idea of importance plays in here.

I think one thing I would say is, I think the kind of person who becomes attracted to this is likely a really intelligent person who maybe has had some trouble in social situations in their schooling. You know, so you are maybe outpacing your peers in math or science or reading, and you're having a little bit of trouble getting along with other people, and you come across this version of Harry Potter, say, that speaks to the way you think about things and perhaps

tells you that you're not the weird one necessarily, you're not the crazy one. I'm not saying this is a universal experience necessarily, but a version of that can happen in all kinds of ways. And that is built upon by the structure of rationalist thought, where it's, you know, you are the special one. You are the Harry Potter, so to speak,

who is able to see the world in the right way, who's able to see the sort of rational structures undergirding the world. And you can be part of this group of people that's going to save us all from, you know,

destruction by AI. I mean, that's a really powerful story to hear, especially if you're hearing it at a time when you're otherwise not particularly valued, or you're not feeling particularly valued yourself. And I think that's an obvious way that people could get swept up in it.

Right. But many times in groups like this that are described as being cult-like, they have ways to keep people in line to make sure that they don't speak out and say things that the group doesn't want out in public, things about the sexual harassment that you talked about and Sonia talked about and things like that. So what are the methods that they use? Is there anything? How do they retaliate if they think something is not accurate or goes against what the group is trying to achieve?

Well, there are a few ways to answer this question. I mean, one is that I think, for people who notice, say, sexual harassment and feel compelled to speak out about it, but maybe still buy into the basic tenets of rationalism, the leverage is: if you speak out about this, you're going to destroy our mission,

you know, of stopping bad AI. So you need to keep quiet, because otherwise the whole thing is going to fall apart. Another thing is, you know, I mentioned this before the break, a lot of people end up isolated from their friends and family because this so-called work becomes so all-consuming to them, they get so deeply involved, that they don't really have anywhere else to turn. So the leverage there is less a threat than it is the kind of sense that

a person maybe has been so fully isolated that there's no one to complain to or nowhere to make a change from. And then, you know, I think the final

problem here is just that once you're partway in, it's very hard to sort of pull yourself all the way out. If you enter this because you already think of the world in this kind of partly rational way, then exiting it, and seeing exactly all the connections that have messed you up, so to speak, is a really hard thing to accomplish. Yeah.

What would you say maybe replaced, or now complements, rationalism in Silicon Valley, if anything? If this was sort of a big tenet, and as you said, it's waning a bit, and there are lots of folks questioning exactly what's going on here. Has anything taken its place?

Not precisely. I mean, I think that AI as a business endeavor is so all-consuming in the Valley right now that there's just a lot of excitement for it. Maybe less cult-like, maybe less sort of systemic, you know,

but people are rushing to find jobs and startups and niches that they can fill in what is hoped to be a real gold rush. You're also seeing the emergence, and we're getting a little in the weeds here, I apologize to viewers who don't want to hear about it, of new groups of people who call themselves post-rationalists,

who maybe have some of the same concerns as rationalists, the same ideas about wanting to push forward through, you know, blinkered conventional wisdom and find different ways of being, but are trying to leave aside the sort of Yudkowskian

nerdy message board ideas, and focus more on esoteric philosophical ideas and sort of living in the world. You know, all of it is in flux. And I would never count out the rationalists, let's say, because I think that

The idea of looking past a false world into a fully rational good one holds a lot of appeal to the same kinds of people who become programmers and engineers. And to that extent, there will always be rationalists.

of some breed in Silicon Valley. And I think a lot is going to depend on how the industry goes, where the money is coming from, and where it's headed over the next 10 years. And I guess with all of this money and power concentrated in AI and this movement, I mean, what do you think it means for the people who are doing AI research, who are building AI right now?

I mean, I suspect that there are even some of them listening right now feeling frustrated that I haven't said that probably the large majority of AI researchers, both academics and those working at private companies, are not rationalists in the hardcore sense, that they don't really believe in an oncoming, you know, omnicidal machine god who's...

And that's what he's saying. Stop building it.

Yeah.

requires to go from the thing that will produce text for you on a whim to the thing that is going to put us all in, you know, I don't know what, jail, or destroy us with nuclear weapons or whatever. And so that kind of overstated apocalyptic case,

because it hasn't really panned out, not only does it mean that it gets taken somewhat less seriously over the years, it also means that people who are working on other areas, who have other concerns or other ideas about how AI might work and might be implemented, are given a little bit more space for themselves. You know, for better and for worse. That doesn't necessarily mean we're going to get harmless AI that's going to do good for everybody, but maybe we could also think less about sort of

the coming apocalypse, and more about the way AI systems are implemented, say, in day-to-day politics, the way DOGE is said to have used certain kinds of AI systems to try and put together its cuts. You know, that's an immediate concern that is too easily drowned out by the kinds of Yudkowskian, long-termist, watch-out-for-this-thing-that's-going-to-turn-us-all-into-paperclips concerns. Mm-hmm.

The Zizians, as a subsect of rationalists, really shed a light on rationalist thought and this community, because of the sensationalism involved. What do you think that means for average people?

You know, when you're writing about this, you're devoting an awful lot of time to understanding it. What does it mean for the rest of us who are not in the AI research community? What message do you take from it, Max?

I think that it gives you a window into how far out a certain segment of the AI research community has gotten, and how far out a certain portion of the software industry has gone. And I think that's a really important insight to have, because it means that when you hear somebody like Yudkowsky saying, we need to stop the production of GPUs, we can't possibly let AI develop anymore,

you're able to come to that with a recognition that this is a guy only a few degrees removed from a murderous cult, a strange murderous cult across the country. And, you know, it doesn't mean that we don't have to hear Yudkowsky out, but I think that we can contextualize a little bit better where he's coming from if we understand that there is a social, psychological, you know, sort of cultural, historical dynamic at play that is leading towards

a particularly sort of apocalyptic, almost religious view of artificial intelligence that is just really not very likely to be borne out. Really? You don't think that there's any chance that we have to worry about some kind of superintelligence making humans extinct?

No, I mean, certainly not one based on LLMs, I wouldn't say. I mean, again, I've been using ChatGPT enough to really not be very worried that it's got one over on me. Okay, well, we'll take that message. That'll be a hopeful message. Don't worry about it too much. And I promise I'm not involved in any cult, so everybody can take me very seriously. Max Read, author of the Substack newsletter, Read Max. Thanks so much for joining us today. Thank you for having me.

And I'm Debra Becker. This is On Point. Support for this podcast comes from Is Business Broken? A podcast from the Mehrotra Institute at BU Questrom School of Business. Listen on for a sneak preview of a recent episode with J.D. Chesloff, president and CEO of the Massachusetts Business Roundtable. I do think it's instructive to think about the philosophy behind this current environment.

And I think philosophically, what is driving it is an attempt to reverse or address globalization. And I think that's an interesting opportunity for a debate. Do we acknowledge that we're in a global economy or do we want to be more insular? When we were down in D.C. last week, we were having a conversation about this and someone said to us,

You know, we were talking about the uncertainty that tariffs are creating in each of our states. There are probably, like I said, 17 or so state roundtables down there. And the response was, look, if our country is beholden to another country, if they control our supply chain, if there is a trade imbalance that's in their favor, then they could impact the supply chain and our economy at a moment's notice. And that is what the proponents of this current

policy, tariff policy, are trying to address, right? There is uncertainty there, is what we were told. And that uncertainty is something we want to address. So it's a different philosophy. And so if you are looking to be less active in globalization, a global economy, then yeah, I think what they're doing makes some sense. If you don't believe that, and you believe we are a citizen of a global economy, then perhaps it's not the right strategy.

Find the full episode by searching for Is Business Broken wherever you get your podcasts and learn more about the Mehrotra Institute for Business Markets and Society at ibms.bu.edu.