More evidence on how social media works to promote Islamic radicalization — while suppressing its victims — recently emerged. According to a Feb. 20, 2023 report, "bombshell findings" by the Tech Transparency Project (TTP) allege that
Facebook created over 100 pages for ISIS (Islamic State), as well as pages for other terror organizations, including the group behind the 9/11 attacks on the U.S., Al-Qaeda.
TTP reported that Facebook creates the pages based on its algorithm, automatically generating them when users add the terror groups to their profiles. The platform's so-called ban on the groups apparently did little to prevent the automatic process that generated the terror group pages.
"Some of these automatically generated pages have been living on Facebook for years, racking up likes and posts with terrorist propaganda and imagery," reported the Jerusalem Post in its coverage of the TTP's findings. "The company could potentially be held responsible for these pages as Facebook not just hosting but actually creating them."
This is only the latest chapter in Facebook's struggles to keep hate off its platform.
Indeed it is. For example, according to a Jun. 14, 2022 report,
[a] new study has found that Facebook has failed to catch Islamic State group and al-Shabab extremist content in posts aimed at East Africa as the region remains under threat from violent attacks. ... [Facebook] repeatedly failed to act on sensitive content including hate speech in many places around the world.
Posts calling for violence and murder "in languages including Swahili, Somali and Arabic — were allowed to be widely shared."
Responding to these (at the time) shocking findings, Leah Kimathi, a Kenyan consultant in governance, peace, and security, said, "The least they [Facebook] can do is ensure that something they're selling to us is not going to kill us."
Similarly, "Why are they not acting on rampant content put up by [the Islamic terrorist group] al-Shabab?" asked Moustafa Ayad, who worked on the report. "You'd think that after 20 years of dealing with al-Qaida, they'd have a good understanding of the language they [jihadists] use, the symbolism."
Yes, you'd think.
Another report, from Dec. 2021, found that
Facebook allowed photos of beheadings and violent hate speech from ISIS and the Taliban to be tagged as "insightful" and "engaging[.]" ... Extremists have turned to the social media platform as a weapon "to promote their hate-filled agenda and rally supporters" on hundreds of groups[.] ... These groups have sprouted up across the platform over the last 18 months and vary in size from a few hundred to tens of thousands of members, the review found. One pro-Taliban group [was] created in spring this year and had grown to 107,000 members before it was deleted[.] ... Overall, extremist content is "routinely getting through the net," despite claims from Meta — the company that owns Facebook — that it's cracking down on extremists. There were reportedly "scores of groups" allowed to operate on Facebook that were supportive of either Islamic State or the Taliban, according to a new report.
In the summer of 2022, a Muslim man in the U.K. was found guilty of sharing propaganda videos that glorified Islamic terrorists, including videos made by the Islamic State. Where did he share them with impunity? On Facebook and other social media.
Needless to say, this issue is significantly worse when one considers non-English and non-European-language content. Over the years, I've personally seen extensive Arabic-language content on Facebook and other social media giants that amounts to nothing less than terroristic incitement. Usually, these posts remain on social media platforms for years — until I or others draw attention to them in English-language articles, at which point they are conveniently removed.
In other words, as long as only Muslims see — and are radicalized by — these posts full of hatred and incitement to violence against non-Muslims, social media tend to leave them up. Once a Western audience learns about these posts, which make both Islam and social media look bad, they are taken down.
This is not always the case. For example, in December 2021, I translated an immensely profane and hate-filled Arabic-language tirade from a New York-based Muslim man against two Christian men from Egypt — a rant that culminates in him loudly threatening decapitation to anyone who "hurts the reputation of Muhammad." This video, which currently has nearly 115,000 views, is, apparently because it's only in Arabic, still up on YouTube, though the "warning" that "this video may be inappropriate for some users" now accompanies it.
On the other hand, and despite the leniency shown to Islamic terrorist content, social media platforms, especially Facebook, are notoriously quick to censor content that exposes the jihadists, labeling it "hate speech" or "offensive content." In one especially stark example, Facebook censored the campaign of a charity that sought to draw attention to the plight of Christian women in Muslim nations.
I too have been censored by Facebook — and am constantly "shadowbanned" — for posting on the Muslim persecution of Christians.
And while Islamic extremist groups managed to get away with posting "pornographic images" on social media, some U.S. Wi-Fi networks ban my website, which is devoted to the Islamic question, on grounds that it is "pornography."
Such is the true extent of the problem posed by the social media giants: not only do they, as many already know, censor those who say anything that goes against the narrative — in this case, by exposing Islamic hate and violence — but they also allow Islamic hate and violence to proliferate and radicalize Muslims, who go on to murder "infidels."
Raymond Ibrahim is the Judith Friedman Rosen Fellow at the Middle East Forum.