How Elon Musk's X Failed During the Israel-Hamas Conflict
Brooke Gladstone: This is On The Media's midweek podcast. I'm Brooke Gladstone. This week, Bloomberg reported that social media posts about Israel and Hamas have led to a sticky cesspool of confusion and conflict on Elon Musk's X. On Saturday, just hours after Hamas fighters from Gaza surged into Israel, unverified photos and videos of missile airstrikes, buildings and homes being destroyed, and other posts depicting military violence in Israel and Gaza crowded the platform, but some of the horror was actually old images passed off as new.
Speaker 2: [crosstalk] social media, we've also spotted many fake videos circulating online, in particular claiming to show children captured by Hamas, especially this appalling video that's been circulating and seen 2 million times on this post only on X.
Brooke Gladstone: Some of this content was posted by anonymous accounts that carried blue checkmarks, signaling that they had purchased verification under X's premium subscription service. Some military footage circulating on X was drawn from video games. On Sunday, Elon Musk recommended to his 159 million followers two accounts for "following the war in real-time." These same accounts have made false claims or anti-Semitic comments in the past. Avi Asher-Schapiro covers tech for the Thomson Reuters Foundation. Welcome back to the show Avi.
Avi Asher-Schapiro: Thanks for having me, Brooke.
Brooke Gladstone: On Tuesday, the European Union Industry Chief, Thierry Breton, told Elon Musk that he needed to tackle the spread of disinformation on X to comply with new EU online content rules. Didn't Musk take some stuff down afterwards?
Avi Asher-Schapiro: I'm not sure about that. I've been looking at X for the last week or so to see how easy it is to find a verified or big account pushing something about Israel and Gaza that's just obviously made up. It typically takes me a couple of seconds. I found this terrible video, tweeted over and over again, of a woman being burned alive that was passed off as happening in this conflict; it was actually a seven-year-old video from Guatemala. Then a day later it disappeared.
Brooke Gladstone: That one really got around. In my doctor's office, the doctor's assistant said that it had kept her up all night crying.
Avi Asher-Schapiro: That image itself?
Brooke Gladstone: Yes.
Avi Asher-Schapiro: Wow.
Brooke Gladstone: Wasn't that repurposed everywhere from India to the Middle East?
Avi Asher-Schapiro: Yes. It tends to crop up during crises; opportunistic people put it online to try to generate interest. You asked whether Musk took stuff down. I have no idea why that post doesn't exist anymore [chuckles] on X. There's a real lack of transparency about what's going on and how they're tackling these issues. There are all sorts of rules.
The company has stopped issuing transparency reports, which are the tools that we as reporters would use to parse how policies were enforced, and they've threatened to sue independent researchers who have tried to measure the spread of hateful content on the website. It is a bit of a black box as to what's going on, and we are reduced to looking at the feed ourselves and saying, "Wow, there's a lot of crazy stuff on here that doesn't look true."
Brooke Gladstone: It's weird, isn't it? Because there's so much true stuff that's horrible enough.
Avi Asher-Schapiro: Yes, that's what I've been saying over and over again. I mean, aren't there enough bone-chilling images and videos to go around? One of the things you have to understand is that Musk has significantly changed the incentives for how people use his platform. Before he took over, there was no route to make money as a creator on Twitter. Now there's a scheme where you pay for verification, which gets you more reach and gets you injected into people's algorithmic feeds, and then you can get paid out on the other side. If you have a viral tweet, you can get a share of the revenue.
He has created the conditions to incentivize some potentially very unsavory behavior in a moment like this. The question is, has he created the parallel institutions and rules, and hired the staff, to guard against the worst externalities of that kind of economic [unintelligible 00:04:27] on the platform? I don't know. We know that he fired half of the staff when he took over. We know he has steadily ground down the trust and safety teams and others supposedly tasked with this kind of work. He completely axed huge content moderation teams, people with language expertise.
Before Musk took over, Twitter was a hybrid platform. They had hundreds of staffers performing editorial-style functions. Something would happen, and they would create a Moment, these carousels where they put authoritative sources around a certain issue. All of that is gone. It's been outsourced to a feature called Community Notes. Musk is trying to do with volunteers what Twitter used to pay people to do: people now volunteer to append labels to tweets. There's something genuinely good about that, it's more democratic, but there was a great piece earlier this week by NBC's tech team that got inside the Notes program and found that the volunteers were overwhelmed.
They didn't have enough volunteers, there was no professional staff doing the work, and meanwhile, false posts were racking up hundreds of thousands of views: claims about churches being destroyed that weren't, about military aid being provisioned that wasn't. All of this while a beleaguered team of volunteers, now on the front lines, labeled what they could at whatever pace they could manage.
Brooke Gladstone: To recap, Twitter was not a platform where you could monetize your engagement by clicks. Now it is. Other platforms like YouTube have done that, and apparently they had a steep learning curve in figuring out which accounts could be monetized, under what circumstances, and so forth. Do we have any clear guidelines from X about when and how you can monetize, or when they will demonetize your account?
Avi Asher-Schapiro: They have a page on their website which is called Creator Monetization Standards, which does lay out all of the different things that you can do to lose your monetization privileges. [chuckles] They have all sorts of things. If you're promoting a pyramid scheme, literally they have a section that says, if you're promoting miracle cures, you could lose your monetization.
For me, the key question here is, what kind of architecture has Twitter built around its monetization program to actually create good incentives for people who want to make money on the platform to not go viral posting demonstrably false information or titillating information that's misleading? I just don't know. I think they have an obligation to tell journalists and tell the public, to demonstrate that they're taking those rules seriously.
Brooke Gladstone: Over the weekend, Musk tweeted to his 159 million followers that they should follow two accounts for updated information on the conflict. He was giving guidance. One was an account called @WarMonitors, and the other was @sentdefender. These are two accounts known for spewing lies, and although he eventually took down the tweet, 11 million people ended up seeing it. Tell me, what kind of hard truths are you deriving from the propping up of so-called citizen journalists?
Avi Asher-Schapiro: There have been a couple of instances where people have found strange anti-Semitic things that one of the accounts has said, and they've made mistakes in the past, claiming certain things had happened when they were wrong. What Musk has said is that Twitter put its thumb on the scale in the past, that the executives and the people who worked at the company had certain political biases they were not being honest about, and that he has ushered in a new era of openness and democratic horizontalism on the platform. But really, what he's done is replace those preferences with a new set that revolves around him.
You'll see that in moments like this, where he's like, "What am I finding interesting on the internet? Let me recommend it." It's a very personalist approach to ruling the platform. You see it in his unilateral decisions about rules, but you also see it in more subtle ways, like the introduction of algorithmic feeds. In the past, Twitter had human beings curating Moments and authoritative sources. Now they're using algorithms. Is that really fairer or better? No, he's just come up with a different way of displaying information to you, one that has its own set of pitfalls.
As we were talking about earlier, I don't really know why a certain thing is in my feed. It's a verified account that I don't follow, I've never looked at before. Twitter has decided to inject it in my feed. That's an editorial decision they've made.
Brooke Gladstone: There's no accountability. I mean, that's the difference between his brand of citizen journalism, I guess, and what responsible news outlets do.
Avi Asher-Schapiro: I want to be clear about one thing that I really think is important to underline, which is that Twitter before Musk had a lot of problems. As someone who reported on the platform and tried to bring accountability to the platform, there are certain things they hid that were particularly frustrating to me. For example, although they released transparency reports, they didn't release information about when a government would contact them and ask them to take down information that was violative of their terms of service, and that was a real problem.
You have to understand that these platforms, it's not like there was this perfect thing that Musk came in and threw a wrecking ball into, but that being said, he has taken a lot of steps that have made it even harder to assess the social impact of his platform. I think threatening to sue researchers who try to collect information about the spread of hateful speech is really chilling. How are you meant to keep tabs on this place that is run by the richest man in the world who seems to have a trigger finger on defamation lawsuits?
Brooke Gladstone: How did X respond to inquiries about all the disinformation proliferating on the platform around Israel and Hamas? How did this response compare to Twitter's initial response to the flood of information in the aftermath of Russia's invasion of Ukraine back in February '22?
Avi Asher-Schapiro: I've been speaking to people at Twitter who were on the front lines when the company was responding to the initial days of the Ukraine crisis. You see a totally different posture. They had human rights lawyers on staff, and they had a Trust and Safety Council of NGOs and groups around the world with experts they consulted about how to make these decisions. They released some groundbreaking new rules around images of prisoners of war, trying to apply international humanitarian law to how the company dealt with images coming out of the conflict.
I'm sure they made a lot of mistakes. I'm sure there are plenty of things you could point out, but it was a very different posture. Right now, there's very little information coming out of the company. They've posted some mega-length tweets saying that they're taking this seriously and that they have staff looking at things. But there's none of the granular detail of the long blog posts the company's policy teams released in the early days of the Ukraine war, when they were trying to communicate to the public how they would handle information coming out of the conflict.
Brooke Gladstone: Then how do more responsible, actual news outlets try to stanch the flow of false wartime videos and images?
Avi Asher-Schapiro: There are ways. You can use satellite imagery, mapping technology, and metadata. You can look at an image posted online and say with a reasonable degree of certainty where it was taken, using context clues and the imagery in the background, and say, "This is actually from Gaza. This is not from Guatemala." You can look at the metadata inside images or videos to get a sense of who might have originally posted them.
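As a concrete illustration of the metadata point, here is a minimal Python sketch using the third-party Pillow library. The camera model and timestamp are invented for the example, and the image is generated in memory; in practice a fact-checker would open a downloaded file. It shows the kind of EXIF tags a verifier can read back from an image:

```python
import io
from PIL import Image
from PIL.ExifTags import TAGS

# Create a small JPEG in memory and attach hypothetical EXIF metadata,
# standing in for an image downloaded from social media.
img = Image.new("RGB", (8, 8), "gray")
exif = Image.Exif()
exif[0x0110] = "ExampleCam 3000"       # 0x0110 = Model tag
exif[0x0132] = "2015:03:20 10:30:00"   # 0x0132 = DateTime tag
buf = io.BytesIO()
img.save(buf, format="JPEG", exif=exif)

# Reopen the file and read the metadata back, as a fact-checker would.
reopened = Image.open(io.BytesIO(buf.getvalue()))
found = {TAGS.get(tag_id, tag_id): value
         for tag_id, value in reopened.getexif().items()}
print(found["Model"])     # camera model string
print(found["DateTime"])  # capture timestamp, which may contradict the claim
```

A timestamp years older than the event being claimed, as with the Guatemala video, is exactly the kind of mismatch this check surfaces, though metadata is often stripped or forged, so it is one clue among several.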
Brooke Gladstone: Is that how you figured out that the burning girl was not from Israel?
Avi Asher-Schapiro: No, Brooke, I figured that out by Googling the words "burning girl video." The first thing that came up was a CNN story from 2015 that said, "Here is a viral video of this terrible thing that happened in Guatemala." It was a two-second check that whoever posted it, a verified account with thousands of followers, had either not bothered to do, or they themselves had reskinned the video and passed it off. I'm not sure of its origins, but you don't need to be a whiz to debunk some of this stuff. It's really about putting in the investigative work.
You've seen these investigations. I think it was The Washington Post that did an incredible recreation of the killing of Shireen Abu Akleh, the Palestinian reporter shot in the West Bank recently, which showed definitively that it was an Israeli sniper's bullet.
Brooke Gladstone: Even though the Israeli army denied it.
Avi Asher-Schapiro: Right, and how did they do it? There are firms like Forensic Architecture that can digitally recreate these streets and the ballistics, and then you pair that with images taken at the time. These are amazing investigative tools, and the fact that newsrooms are putting people with these skills into the field does all of us a service, because I think we'll ultimately have a clearer-eyed sense of what's going on in Israel and Palestine.
Brooke Gladstone: If we can stipulate that we've seen the utter failure of X during this crisis, what are we missing out on? Because there are other sites, Mastodon, Bluesky, that have tried, but none has yet risen to the influence Twitter has had as a springboard for awareness, protest, and pure information.
Avi Asher-Schapiro: One of the trends people are talking a lot about is the TikTokification or the Discordification of social media: the idea that this era of the open platform, where you could search around, curate your own experience, and post to the whole world, is ending. What's replacing it is either algorithmically driven places like TikTok, where you just turn it on, strap in, and say, "Show me what you've got," or places like Discord and Telegram, closed communities that aren't searchable in the same way, where people cordon themselves off into little groups and share there.
I think it's possible we'll miss this era, which had a lot of positive externalities. People got to choose a little bit about what they saw. They could search widely around the world and learn about things. There was an openness to the design. I don't think people are going to design platforms like that anymore.
Brooke Gladstone: What do you think are the consequences of an even greater amount of misinformation than usual on this platform in the context of this conflict?
Avi Asher-Schapiro: What's happening right now is the creation, in real time, of a historical record of this terrible, bloody conflict, and flooding the zone with BS doesn't help anybody. There has always been a fog of war, but an algorithmically driven fog of war, one that injects potentially false information in front of our eyes as we scroll, is a different level of dystopian, and it does a disservice to the public discourse that we haven't seen before.
Brooke Gladstone: Avi, thank you very much.
Avi Asher-Schapiro: Thank you, Brooke.
Brooke Gladstone: Avi Asher-Schapiro covers tech for the Thomson Reuters Foundation. Thanks for listening to OTM's midweek podcast. Please check out The Big Show, which posts on Friday. It's the final part of our three-part collaboration with ProPublica called We Don't Talk About Leonard. It's about the conservative movement's well-funded effort to take over America's courts and the man at the center of it all. You can find all three parts, of course, wherever you get your podcasts.
Copyright © 2023 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.