
How Elon Musk's X Failed During the Israel-Hamas Conflict

2023/10/11

On the Media

People

Avi Asher-Schapiro
Brooke Gladstone
Topics
Brooke Gladstone: During the Israel-Hamas conflict, Elon Musk's X platform (formerly Twitter) was flooded with unverified images, videos, and information, causing serious confusion and conflict. Much of the content consisted of old images or videos repackaged as new, and some military footage came from video games. Anonymous accounts with paid verification amplified the spread of false information. Accounts that Musk recommended had previously posted false information and anti-Semitic comments, compounding the problem. The EU demanded that Musk address disinformation on the platform, to little effect.

Avi Asher-Schapiro: False information is easy to find on X; even large, verified accounts spread it. A widely shared video of a woman being burned alive was in fact a seven-year-old video from Guatemala, illustrating how quickly misinformation spreads and how much impact it has. X lacks transparency, making its content moderation mechanisms hard to assess, and its flawed distribution mechanics distort the real-time record of the conflict. Musk's paid verification scheme incentivizes bad behavior; his layoffs and gutting of trust and safety teams have weakened the platform's ability to control misinformation; and relying on volunteer moderators cannot keep up with the volume of false content. X's monetization model lacks effective oversight, which can let misinformation flourish. Musk claims the platform is more open and democratic, but in practice he has simply replaced the old biases with his own personal preferences, and the algorithm-driven feed lacks transparency and accountability. Twitter had problems before Musk, but his changes have made them worse and made it harder to assess the platform's social impact. Compared with its response in the early days of the Ukraine war, X's handling of the Israel-Hamas conflict lacks transparency and detail.


Chapters
The episode discusses the spread of misinformation on Elon Musk's X during the Israel-Hamas conflict, highlighting the platform's failure to manage the influx of unverified content and the impact of Musk's policy changes.

Transcript


This episode is brought to you by Progressive Insurance. Whether you love true crime or comedy, celebrity interviews or news, you call the shots on what's in your podcast queue. And guess what? Now you can call them on your auto insurance too with the Name Your Price tool from Progressive. It works just the way it sounds. You tell Progressive how much you want to pay for car insurance and they'll show you coverage options that fit your budget. Get your quote today at Progressive.com to join the over 28 million drivers who trust Progressive.

Progressive Casualty Insurance Company and Affiliates. Price and coverage match limited by state law.

When you need to work quickly and confidently, you need Grammarly. It's a trusted AI writing partner that helps you get work done faster with better writing. And it works where you work, across 500,000 apps and websites. 96% of users agree Grammarly helps them craft more impactful writing. Get AI writing support that works where you work.

Sign up and download for free at Grammarly.com slash podcast. That's Grammarly.com slash podcast. Grammarly. Easier said, done.

Listener supported. WNYC Studios.

This is On the Media's Midweek Podcast. I'm Brooke Gladstone. This week, Bloomberg reported that social media posts about Israel and Hamas have led to a sticky cesspool of confusion and conflict on Elon Musk's X.

On Saturday, just hours after Hamas fighters from Gaza surged into Israel, unverified photos and videos of missile airstrikes, buildings and homes being destroyed, and other posts depicting military violence in Israel and Gaza crowded the platform.

But some of the horror was actually old images passed off as new. We've also spotted many fake videos circulating online, in particular claiming to show children captured by Hamas, especially this appalling video that's been circulating and has been seen two million times on this post, only on X. Some of this content was posted by anonymous accounts that carried blue check marks signaling that they had purchased verification under X's premium subscription service. Some military footage circulating on X was drawn from video games.

On Sunday, Elon Musk recommended to his 159 million followers two accounts for, quote, following the war in real time. These same accounts have made false claims or anti-Semitic comments in the past. Avi Asher-Schapiro covers tech for the Thomson Reuters Foundation. Welcome back to the show, Avi. Thanks for having me, Brooke.

So on Tuesday, the European Union industry chief Thierry Breton told Elon Musk that he needed to tackle the spread of disinformation on X to comply with new EU online content rules. Didn't Musk take some stuff down afterwards?

No, I'm not sure about that. I've been looking at X for the last week or so to see how easy it is to find something that's just completely made up about what's going on in Israel and Gaza, and to see how easy it is to find a verified or big account pushing something that's obviously made up.

That typically takes me a couple of seconds. I found this terrible video, tweeted over and over again, of a woman being burned alive that had been passed off as happening in the conflict. And it was a seven-year-old video from Guatemala. And then, you know, a day later, it disappeared. That one really got around. In my doctor's office, the doctor's assistant said that it had kept her up all night crying. That image itself? Yeah.

Wow, wow. Wasn't that repurposed everywhere from India to the Middle East? Yes, it tends to crop up during crises. Opportunistic people put it online to try to generate interest.

You asked the question, didn't Musk take stuff down? I mean, I have no idea why that post doesn't exist anymore on X, right? There's a real lack of transparency about what's going on and how they're tackling these issues. There's all sorts of rules. You know, the company has stopped issuing transparency reports, which are the tools that we as reporters would use to parse how policies were enforced. And they've threatened to sue independent researchers who have tried to measure the spread of hateful content on the website. It is a bit of a black box as to what's going on, and we are reduced to looking at the feed ourselves and saying, wow, there's a lot of crazy stuff on here that doesn't look true. It's weird, isn't it? Because there's so much true stuff that's horrible enough.

Yeah, I mean, that's what I've been saying over and over again. I mean, aren't there enough bone-chilling images and videos to go around? One of the things you have to understand is that Musk has significantly changed the incentives for how people can use his platform. Before he took over, there wasn't a route to make money as a creator on Twitter. Now there's a verification scheme where you pay for verification, which gets you more reach and gets you injected into people's algorithmic feeds, and then you can get paid out on the other side: if you have a viral tweet, you can get a share of the revenue.

He has created the conditions to incentivize some potentially very unsavory behavior in a moment like this. And the question is, has he created the parallel institutions and rules, and hired the staff, to guard against the worst externalities of that kind of economic system on the platform? I don't know. We know that he fired half of the staff when he took over. We know he's steadily ground down the trust and safety and other teams that are supposedly tasked with doing this kind of work. He just completely axed huge content moderation teams, people who had language expertise. Before Musk took over, Twitter was sort of a hybrid platform. They had hundreds of staffers doing editorial-style functions, right? There'd be something happening, they would create a Moment, they would create these carousels where they put authoritative sources around a certain issue. All of that's gone.

It's been outsourced to this thing called Community Notes. Musk is trying to do with volunteers what they used to pay people to do. So now they have people who volunteer to append labels to tweets. And, you know, there's stuff that's really good about that. It's more democratic, but...

There was a great piece earlier this week by NBC's tech team that got inside the Notes program and saw that they were just overwhelmed. They didn't have enough volunteers, and there wasn't professional staff doing the work. Meanwhile, posts were racking up hundreds of thousands of views making claims about churches being destroyed that weren't and military aid being provisioned that wasn't, all while this beleaguered team of volunteers, now on the front lines, labeled what they could at the pace they could manage.

To recap, Twitter was not a platform where you could monetize your engagement by clicks. Now it is. Other platforms have done that, like YouTube, and apparently they had a steep learning curve in figuring out which accounts could be monetized under what circumstances and so forth. Do we have any clear guidelines from X about when and how you can monetize, or when they will demonetize your account?

They have a page on their website called Creator Monetization Standards, which does lay out all of the different things you can do to lose your monetization privileges, right? They have all sorts of things: if you're promoting a pyramid scheme, literally if you're promoting miracle cures, you could lose your monetization. For me, the key question here is what kind of architecture Twitter has built around its monetization program to actually create good incentives, so that people who want to make money on the platform don't go viral posting demonstrably false information or titillating information that's misleading. I just don't know. I think they have an obligation to...

Over the weekend, Musk tweeted to his 150 million followers that they should follow two accounts for updated information on the conflict. You know, he was giving guidance. One of them was an account called @WarMonitor and the other was @sentdefender. These are two accounts known for spewing lies. And although he eventually took down the tweet, 11 million people ended up seeing it. So tell me, what kind of hard truths are you deriving from the propping up of so-called citizen journalists?

There have been a couple of instances where people have found strange anti-Semitic things that one of the accounts has said. And they had made mistakes in the past, where they claimed certain things had happened and they were wrong.

What Musk has done is said that Twitter put its thumbs on the scale in the past, that the executives and people who worked at the company had certain political biases they were not being honest about, and that he has ushered in a new era of openness and democratic horizontalism on the platform.

But really what he's done is just replace it with a new set of preferences that revolve around him, right? And so you'll see that in moments like this, where he's like, huh, what am I finding interesting on the internet? Let me recommend it. It's a very personalist approach to ruling the platform, right? And you see it in his unilateral decisions about rules. But then you also see it in more subtle ways, around introducing algorithmic feeds. In the past, Twitter had human beings curating Moments and authoritative sources. Now they're using algorithms. Is that really fairer or better? No, it's just that he's come up with a different way of displaying information to you that has its own set of pitfalls.

As we were talking about earlier, I don't really know why a certain thing is in my feed. It's a verified account that I don't follow and have never looked at before. Twitter has decided to inject it into my feed. That's an editorial decision they've made.

And there's no accountability. I mean, that's the difference between his brand of citizen journalism, I guess, and what responsible news outlets do. I want to be clear about one thing that I really think is important to underline, which is that Twitter before Musk had a lot of problems. As someone who reported on the platform and tried to bring accountability to it, there were certain things they hid that were particularly frustrating to me. For example, although they released transparency reports, they didn't release information about when a government would contact them and ask them to take down information that was violative of their terms of service. And that was a real problem. So you have to understand that it's not like these platforms were this perfect thing that Musk came in and threw a wrecking ball into.

That being said, he has taken a lot of steps that have made it even harder to assess the social impact of his platform. And I think threatening to sue researchers who try to collect information about the spread of hateful speech is really chilling. How are you meant to keep tabs on this place that's run by the richest man in the world who seems to have a trigger finger on defamation lawsuits?

So how did X respond to inquiries about all the disinformation proliferating on the platform around Israel and Hamas? How did this response compare to Twitter's initial response to the flood of information in the aftermath of Russia's invasion of Ukraine back in February 2022?

Right. I've been speaking to people at Twitter who were on the front lines when the company was trying to respond to the initial days of the Ukraine crisis. You see a totally different posture. They had human rights lawyers on staff. They had a Trust and Safety Council of NGOs and groups around the world with experts they were consulting about how to make these decisions. They released some groundbreaking new rules around images of prisoners of war, where they were trying to apply international humanitarian law to how the company was dealing with images coming out of the conflict. So I'm sure they made a lot of mistakes, and I'm sure there are plenty of things you could point out.

But it was a very different posture, right? At the moment, there's very little information coming out of the company. They've tweeted some long, mega-length tweets saying that they're taking this seriously and that they have staff looking at stuff, but they don't have the same level of granular detail, the long blog posts that the company's policy teams were releasing in the early days of the Ukraine war, where they were trying to communicate to the public how they were going to handle information coming out of the conflicts. Okay, so then how do more responsible, actual news outlets try to staunch the flow of false wartime videos and images?

There are ways that you can use satellite imagery, mapping technology, and metadata: you can look at an image that's being posted online and say, with a reasonable degree of certainty, where it was taken. Using context clues, looking at what's in the background, you can say, ah, this is actually from Gaza. This is not from Guatemala. You can look at the metadata inside of images or videos to get a sense of who might have originally posted it.
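As a rough illustration of that metadata step, here is a minimal sketch in Python using the Pillow library to dump an image's EXIF tags (capture time, camera model, GPS coordinates). The filename is hypothetical, and one caveat: most platforms, X included, strip EXIF on upload, so this kind of check mostly works on original files obtained from a source rather than copies re-downloaded from social media.

from PIL import Image, ExifTags  # Pillow >= 9.4 for the ExifTags.IFD enum

# "suspect_photo.jpg" is a hypothetical file name used for illustration.
img = Image.open("suspect_photo.jpg")
exif = img.getexif()  # mapping of numeric EXIF tag IDs to values

# General tags: DateTime, camera Make/Model, editing software, etc.
for tag_id, value in exif.items():
    name = ExifTags.TAGS.get(tag_id, tag_id)  # translate tag ID to a readable name
    print(f"{name}: {value}")

# GPS coordinates live in a separate sub-IFD.
gps = exif.get_ifd(ExifTags.IFD.GPSInfo)
for tag_id, value in gps.items():
    name = ExifTags.GPSTAGS.get(tag_id, tag_id)
    print(f"GPS {name}: {value}")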

Is that how you figured out that the burning girl was not from Israel? No, Brooke. I figured that out by Googling the words "burning girl video." And the first thing that came up was a CNN story from 2015 that said, here is a viral video of this terrible thing that happened in Guatemala. It was a two-second check that whoever had posted that, a verified account with thousands of followers, had either not bothered to do, or they themselves had re-skinned this video and passed it off. I'm not sure of its origins, but you don't need to be a whiz to debunk some of this stuff; it's really about the investigative power of it. You know, if you've seen these...

Even though the Israeli army denied it?

Right. And how did they do it? There are these firms, like Forensic Architecture, who can go and recreate these streets and the ballistics in digital form, and then pair it with images taken at the time. They're amazing investigative tools. And the fact that newsrooms are putting these kinds of people into the field does all of us a service, because I think we'll ultimately have a clear-eyed sense of what's going on in Israel-Palestine.

If we can stipulate that we've seen the utter failure of X during this crisis, what are we missing out on? Because there are other sites, Mastodon, Bluesky, that have tried. None has yet risen to the influence Twitter has had as a springboard for awareness, protest, and pure information.

Well, one of the trends that people are talking a lot about is the TikTokification, or the Discordification, of social media. This era of the open platform, where you could search around, create your own experience, and people would post to the world and you would go figure out what you wanted to follow, is ending, right? And what's replacing it is either much more algorithmically driven places like TikTok, where you just turn it on and strap in, like, show me what you got, or places like Discord and Telegram, which are closed communities, not searchable in the same way, where people cordon themselves off into different little groups and share there.

So I think it's possible we'll miss this era, which I think had a lot of positive externalities. People got to choose a little bit about what they saw, and they could search widely around the world and learn about things. There was an openness to the design. I don't think people are going to design like that anymore. So what do you think are the consequences of an even greater amount of misinformation than usual on this platform in the context of this conflict?

What's happening right now is the creation in real time of a historical record of this terrible, terrible, bloody conflict. And flooding the zone with BS doesn't help anybody, right? There's always been a fog of war, but an algorithmically driven fog of war that actually injects potentially false information in front of our eyes as we scroll through the internet is a different level of dystopian thinking.

Yeah, an algorithmically driven fog of war does a level of disservice to the public discourse that we haven't seen before. Avi, thank you very much. Thank you, Brooke. Avi Asher-Schapiro covers tech for the Thomson Reuters Foundation.

Thanks for listening to OTM's Midweek Podcast. And please check out The Big Show, which posts on Friday. It's the final part of our three-part collaboration with ProPublica called We Don't Talk About Leonard. And it's about the conservative movement's well-funded effort to take over America's courts and the man at the center of it all. You can find all three parts, of course, wherever you get your podcasts.