A growing wave of AI-generated videos and images about the ongoing conflict involving Iran, the United States and Israel is spreading widely on social media, raising serious concerns about misinformation and the monetisation of false content online, experts say.
Analysts monitoring online platforms say AI-generated videos, fabricated satellite images and manipulated visuals related to the war have collectively attracted hundreds of millions of views across social media networks.
According to researchers, advances in generative AI tools have made it much easier and cheaper to produce realistic-looking conflict footage. Digital media expert Timothy Graham said the scale of misinformation linked to the war is “alarming”.
“What previously required professional production can now be done within minutes using AI tools,” he said, noting that the barrier to creating convincing fake war footage has largely disappeared.
The conflict escalated after Israel launched strikes on Iran on 13 June, with the United States later joining the attacks. Iran responded with drone and missile attacks targeting Israel as well as US military facilities in the Gulf.
As the conflict intensified, social media users increasingly turned to online platforms for updates. However, the demand for fast information has also allowed misleading AI-generated content to spread rapidly.
The platform X recently announced that it would temporarily suspend creators from its monetisation programme if they share AI-generated war videos without clearly labelling them. The programme allows eligible users to earn revenue based on engagement such as views, shares and comments.
Researcher Mahsa Alimardani described the decision as an indication that platforms are beginning to recognise the seriousness of the problem.
Investigations have uncovered several widely circulated AI-generated clips. One example appeared to show missiles striking Tel Aviv in Israel, accompanied by the sound of explosions. The video was shared hundreds of times across social media platforms.
In several cases, users asked the AI chatbot Grok to verify the footage, but the system incorrectly identified the fabricated clip as real.
Another viral AI video falsely showed the Burj Khalifa engulfed in flames while crowds ran toward the building. The video gained tens of millions of views at a time when people in the region were already concerned about possible missile and drone strikes.
Experts say such misinformation damages public trust and complicates efforts to verify genuine evidence from conflict zones.
BBC Verify also identified fabricated satellite images circulating online. One widely shared image claimed to show major damage to the US Navy’s Fifth Fleet headquarters in Bahrain following Iranian strikes. However, investigators found the picture was manipulated using AI based on an earlier satellite image taken in February 2025.
According to generative AI specialist Henry Ajder, the rapid expansion of AI tools, including OpenAI's video generator Sora, has made sophisticated digital manipulation easier than ever.
Technology policy expert Victoire Rio said automated tools now allow creators to produce and distribute AI content across social media platforms almost instantly.
Some experts also warn that monetisation systems on social media platforms may be contributing to the spread of misinformation. Accounts that post viral content can earn revenue through engagement-based programmes.
Graham estimates that the monetisation programme on X could pay roughly $8 to $12 for every one million verified user impressions, provided creators meet certain engagement thresholds. At those rates, a single fabricated clip attracting tens of millions of views could earn its poster several hundred dollars.
“Once someone is eligible, viral AI-generated content can effectively become a money-making machine,” he said.
Despite efforts by major platforms to improve moderation and detection systems, experts say tackling AI-driven misinformation remains extremely challenging as the technology becomes more accessible and powerful.
