The study, titled "Ballistic Fakes: Disinformation and Fact-Checking Efforts during the Israel-Iran War", analyzed the work of fact-checking organizations in 23 countries.
Among the most active were Misbar in Jordan, AFP in France, Newschecker in India and Teyit in Turkey — each contributing a significant volume of war-related fact-checks.
To assess the motivations behind the spread of false information, the IIA examined which party stood to gain from each item and coded the answers accordingly.
It found that in 72% of cases, the content was likely to serve Iranian strategic narratives, while 24% appeared to support the Israeli side. The remaining 4% could not be clearly classified, the research said.
Video content accounted for 85% of all disinformation reviewed. Of this, 82% was found to be outdated, 68% geographically misattributed and 77% presented with false contextual framing, with many items falling into more than one category. A further 17% of video content was generated using artificial intelligence, and 12% was entirely fabricated.
“A large proportion of the fact-checks (71%) concerned false connections and decontextualized content,” the report said. “In these instances, genuine material was accompanied by misleading captions or narrative framing, producing an effect contrary to the original meaning.”
One example cited was a demonstration in San Diego during the war. While the event was accurately located and timed, it was misrepresented online as a protest related to the Israel–Iran conflict. The footage actually showed a "No Kings" demonstration against US President Donald Trump's domestic policies.
Other examples included an image of a 2009 hotel fire in China portrayed as the aftermath of an Iranian missile strike on Israel, as well as footage of an Israeli strike in Lebanon circulated as evidence of Iranian attacks.
Generative AI
In total, the report found that a fifth of fact-checked items were created using generative AI, primarily fabricated images and videos. These included depictions of destroyed Israeli infrastructure, the downing of Israeli aircraft, an Israeli soldier allegedly surrendering and fabricated scenes of domestic anti-war protests.
Fabricated claims (entirely invented events, actions or quotes) made up 15% of the sample. These included false reports of Israeli aircraft crashes in Iran, captured Israeli pilots, and nuclear attacks by both sides.
An additional 2% of items involved impersonation. These included a forged resignation letter from Iran’s president and a deepfake video purporting to show Russian President Vladimir Putin declaring support for Iran.
The most common theme in disinformation content concerned physical damage to infrastructure, featuring in 44% of the items reviewed.
False reports included alleged Iranian missile strikes on Tel Aviv’s Azrieli Towers and Israeli attacks on Tehran’s airport.
Many items focused on explosions, often accompanied by misattributed images or video clips. One widely circulated example was a fabricated report of an attack on the Fordow nuclear site; another falsely described an explosion in Haifa Bay.