Hamas is banned from Facebook, Instagram and TikTok. Yet messages supporting the group that carried out terrorist attacks in Israel this month continue to reach massive audiences on social media, broadcasting horrific images and political messages to millions.
Several pro-Hamas accounts have gained hundreds of thousands of followers on social platforms since the war between Israel and Hamas began on October 7, according to a review by The New York Times.
An account on Telegram, the popular lightly moderated messaging app, reached more than 1.3 million followers this week, up from around 340,000 before the attacks. The account, Gaza Now, is aligned with Hamas, according to the Atlantic Council, a research group focused on international relations.
“We have seen Hamas content on Telegram, such as body camera footage of terrorists shooting Israeli soldiers,” said Jonathan A. Greenblatt, executive director of the Anti-Defamation League. “We saw images not only on Telegram but on other platforms of dead and bloodied soldiers.”
These posts pose the latest challenge for tech companies as many attempt to minimize the spread of false or extremist content while preserving content that does not violate their rules. In past conflicts, like the genocide in Myanmar or earlier clashes between Palestinians and Israel, social media companies have struggled to strike the right balance, with watchdog groups criticizing their responses as too limited or, at times, overzealous.
Experts said Hamas and Hamas-linked social media accounts are now exploiting those gaps to evade moderation and spread their messages.
Most online platforms, including Facebook, Instagram, TikTok and YouTube, have long banned terrorist organizations and extremist content.
Gaza Now had more than 4.9 million followers on Facebook before it was banned last week, shortly after The Times contacted Meta, Facebook’s parent company, about the account. Gaza Now did not post the kind of horrific content found on Telegram, but it did share accusations of wrongdoing against Israel and encouraged its Facebook followers to subscribe to its Telegram channel.
Gaza Now also had a combined total of more than 800,000 followers on other social media sites before many of those accounts were removed last week. Its YouTube account had more than 50,000 subscribers before it was suspended on Tuesday.
In a statement, a YouTube spokesperson said Gaza Now violated the company’s policies because the channel’s owner previously operated an account on YouTube that had been closed.
Telegram has become the clearest launchpad for pro-Hamas messages, experts say. Accounts there shared videos of captured prisoners, dead bodies and destroyed buildings, with followers often responding with the thumbs-up emoji. In one case, users asked each other to upload gruesome images of Israeli civilians being shot to platforms like Facebook, TikTok, Twitter and YouTube. The comments also included suggestions on how to edit the images to make it difficult for social media companies to find and easily remove them.
Telegram also hosts an official account of the Al-Qassam Brigades, the military wing of Hamas. Its follower count has tripled since the start of the conflict.
Pavel Durov, Telegram’s chief executive, wrote in a post last week that the company had removed millions of pieces of “blatantly harmful content” from its public platform. But he said the app would not ban Hamas-related accounts outright, arguing that they “provide a unique source of first-hand information for researchers, journalists and fact-checkers.”
“While it would be easy for us to destroy this source of information, it would risk exacerbating an already dire situation,” Mr. Durov wrote.
X, owned by Elon Musk, was flooded with lies and extremist content almost from the start of the conflict. Researchers at the Institute for Strategic Dialogue, a political advocacy group, found that in a 24-hour period, a series of posts on X supporting terrorist activity were viewed more than 16 million times. The European Union said it would examine whether X had violated an EU law requiring major social networks to stop the distribution of harmful content. X did not respond to a request for comment.
Yet accounts not directly run by Hamas present thornier challenges for social media companies, and users have criticized the platforms for being overzealous in removing pro-Palestinian content.
Thousands of Palestinian supporters said Facebook and Instagram removed or restricted their posts, even when the posts did not violate the platforms’ rules. Others reported that Facebook removed accounts calling for peaceful protests in cities across the United States, including planned sit-ins in the San Francisco area this weekend.
Meta said in a blog post Friday that Facebook may have inadvertently removed some content as it worked to respond to an increase in reports of posts violating the site’s policies. Some posts were also hidden because of a bug in Instagram’s systems that prevented pro-Palestinian content from appearing in its Stories feature, the company said.
Masoud Abdulatti, founder of a health services company, MedicalHub, who lives in Amman, Jordan, said Facebook and Instagram blocked his pro-Palestinian posts, so he turned to LinkedIn to share his support for civilians trapped in Gaza in the middle of the conflict.
“The people of the world are ignorant of the truth,” Mr. Abdulatti said.
Eman Belacy, an editor who lives in Egypt’s Sharkia governorate, said she normally used her LinkedIn account only for professional networking, but began posting about the war after feeling that Facebook and Instagram did not show the full picture of the devastation in Gaza.
“This may not be the ideal place to share war news, but excuse us, the rise of injustice and hypocrisy is unbearable,” Ms. Belacy said.
These challenges reflect the weak content moderation tools that social networks increasingly rely on, said Kathleen Carley, a researcher and professor at Carnegie Mellon University’s CyLab Security and Privacy Institute.
Many companies, she explained, rely on a mix of human moderators, who can quickly be overwhelmed in a crisis, and computer algorithms, with no coordination between platforms.
“Unless you moderate content consistently, for the same story across all major platforms, you’re just playing Whac-a-Mole,” Ms. Carley said. “It’s going to resurface.”
Sheera Frenkel contributed reporting.