X’s Transparency Rules Expose a Synthetic Gaza Disinformation Network

Accounts Falsely Presented Themselves as Civilians in Gaza, Posting Emotive Casualty Claims and Siege Narratives

A new location-transparency requirement on X has revealed fake accounts relating to the Gaza war.

Shutterstock

X’s new location-transparency requirement has reshaped the information environment surrounding the Gaza war. After the platform introduced the policy, accounts that had long claimed to report from Gaza displayed locations in Europe, North America, and Turkey. These accounts produced much of the imagery and narrative framing that circulated widely after October 7, 2023. Western journalists, nongovernmental organizations, and policymakers often treated them as front-line observers, which gave fraudulent accounts disproportionate influence over public perception and policy debates.

Open-source intelligence analyst Eitan Fischberger’s November 22, 2025, thread highlighted how X’s new “About This Account” panel first exposed prominent accounts posing as American or local Gaza/Palestinian voices. Fischberger noted that he captured the screenshots himself and urged others to share only accurate examples.

The Gaza information space is a target for actors seeking to influence foreign audiences. Accounts that presented themselves as civilians in Gaza posted emotive casualty claims and siege narratives. The new transparency rule revealed that many operated from cities such as Warsaw, Berlin, Amsterdam, and Istanbul. These accounts maintained credibility by repeating familiar themes and amplifying one another to create the appearance of consensus. Several shared identical videos or images from unrelated conflicts, and the repetition increased engagement and reach.

Western media outlets accelerated the impact of this ecosystem. Journalists cited these accounts as eyewitness sources during breaking-news cycles. Nongovernmental organizations incorporated posts from them into emergency situational reports. These narratives did not stay on fringe accounts. Members of Congress amplified them; Rep. Ilhan Omar (D-Minn.), for example, reshared a miscaptioned Syria photo as “Gaza genocide” before deleting it, and lawmakers carried casualty figures from the Hamas-run Gaza Health Ministry into House speeches, hearings, and ceasefire proposals. The result was a commentary environment in which unverified accounts, some operating thousands of miles from Gaza, shaped the discourse more than professional reporting.

Open-source analysis reveals several recurring patterns. One account that frequently announced broadcasts from Rafah displayed a European location tag immediately after the transparency change. Another, which described Israeli operations in real time, posted from a succession of foreign locations, suggesting the use of obfuscation tools. Several videos that circulated as evidence of Israeli strikes originated in Syria or in earlier conflicts. These recycled images spread because audiences reacted to their emotional framing rather than to their metadata or provenance.

The structure of this network aligns with broader features of the modern media environment. Newsrooms seek rapid content during crises and often draw material from social-media sources before verification. Nongovernmental organizations fill information gaps with viral posts that appear to support long-standing narratives. Policymakers respond to perceived public pressure rather than to confirmed reporting. The operators of synthetic accounts understand these incentives and produce content designed to meet them. The result is an information space in which misleading claims gain traction before correction mechanisms engage.

The power of synthetic Gaza accounts also reflects Western cognitive vulnerabilities. These accounts focus on themes—hunger, displacement, bombardment—that provoke immediate moral reactions. The framing encourages audiences to assume authenticity even when indicators point elsewhere. Once a claim enters mainstream conversation, corrections rarely reverse its influence. Narratives take hold when they align with preexisting expectations in Western institutions.

Policy consequences follow. Legislators cite unverified posts during debates on foreign aid, sanctions, and ceasefire resolutions. Governments incorporate social-media material into diplomatic statements or emergency briefings. Advocacy groups use viral posts to support allegations of siege tactics or war crimes. These decisions rest on narratives shaped by actors who neither operate in Gaza nor disclose their affiliations.

Several steps can mitigate the influence of synthetic reporting. Platforms can require accounts that purport to report from conflict zones to verify regional presence using consistent metadata, established affiliations, or third-party validation. Governments can treat chronic disinformation activity as a form of influence operation rather than ordinary user expression. Journalists can cite only posts with traceable provenance and link primary sources instead of unverified aggregators. Nongovernmental organizations can adopt open-source intelligence review procedures before incorporating social-media content into assessments. These measures do not restrict speech; they clarify provenance and reduce the risk of narrative manipulation.

David E. Firester is the founder and CEO of TRAC Intelligence, LLC, and the author of Failure to Adapt: How Strategic Blindness Undermines Intelligence, Warfare, and Perception (2025).