Iran War, Algorithms and the Age of Synthetic Panic

Opinion

By Mohd Fahad

The writer is Editor (National Desk) at First India

Modern wars are fought not only with missiles and drones but also with algorithms, artificial intelligence and viral videos. The recent escalation between the United States, Israel and Iran in West Asia demonstrated how the battlefield now extends far beyond physical geography. Alongside the real conflict unfolding across the region, a parallel war erupted on social media, one dominated by fake visuals, AI-generated clips and recycled footage that spread panic among millions of people worldwide.
Within hours of the first reports of military strikes, social media platforms such as X, TikTok, Instagram and YouTube were flooded with dramatic visuals allegedly showing missile attacks, collapsing buildings and burning cities. Many of these clips looked realistic enough to deceive ordinary viewers. But investigations by fact-checkers revealed that a large number of them were either fabricated using generative AI tools or recycled from older incidents unrelated to the ongoing conflict.
The phenomenon highlights a dangerous new dimension of misinformation: the ability of artificial intelligence to manufacture convincing war footage within minutes.

The “burning Burj Khalifa” panic
One of the most widely circulated fake videos showed Dubai’s iconic Burj Khalifa engulfed in flames after an alleged Iranian missile strike. The clip spread rapidly across Facebook, X and WhatsApp groups, accumulating millions of views. In several versions of the video, sirens could be heard while the skyscraper appeared to collapse in slow motion.
However, investigators later confirmed that the video was entirely AI-generated: there had been no attack on the building. Despite this, the clip circulated widely across the Gulf region and beyond, alarming residents and their families overseas.
For expatriate communities—particularly millions of South Asians working in the UAE—the video created immediate anxiety. Families in India, Pakistan and Bangladesh frantically called relatives in Dubai to check if they were safe.

Fake missile strikes on Tel Aviv
Another viral clip claimed to show Iranian ballistic missiles striking residential neighbourhoods in Tel Aviv. The footage depicted massive explosions and apartment buildings collapsing in quick succession. The video spread across multiple platforms and was even shared by several influential accounts before it could be verified.
Experts later identified clear signs of AI manipulation. Rooftops appeared duplicated, smoke clouds were unnaturally coloured, and there were no emergency sirens or ambient sounds typical of real attacks. The clip was eventually confirmed to be AI-generated.
Yet by the time fact-checkers debunked it, the video had already been viewed by millions and contributed to widespread speculation about the scale of Iranian retaliation.

Old videos recycled as “breaking news”
Not all misinformation relied on AI. A significant portion of the viral visuals consisted of old footage repurposed as current war scenes. For example, a video of an explosion at a port in Yemen from July 2024 was circulated as evidence of a fresh Iranian strike on Saudi Arabia. Similarly, a fire in a Sharjah skyscraper from 2015 was falsely presented as a missile attack on Dubai.
Another widely shared clip allegedly showed Iranian missiles hitting Israel during the current conflict. In reality, the footage had been broadcast by regional television networks in June 2024 and had nothing to do with the present escalation.
Such recycled content thrives during crises because audiences are desperate for real-time information. In the absence of verified footage, dramatic visuals—however misleading—fill the information vacuum.

Fabricated emotional narratives
Some misinformation campaigns attempted to manipulate emotions rather than depict explosions. One viral video showed an American soldier breaking down in tears after supposedly witnessing Iranian attacks on US bases in the Gulf. The clip appeared to have been recorded in a military barracks and carried subtitles describing heavy casualties.
Fact-checkers later concluded that the video was entirely staged or AI-generated.
These kinds of videos exploit human empathy and can be particularly effective in shaping public opinion about wars.

Synthetic protests and political manipulation
AI was also used to fabricate political scenes. A video claiming to show pro-Israel protests in Iran went viral on TikTok and other platforms. Demonstrators appeared to wave Israeli flags and chant slogans demanding regime change.
Upon closer inspection, analysts discovered numerous anomalies: unnatural hand movements, distorted facial expressions and even incorrect colours on the Iranian flag. The footage had been generated using AI tools.
Such synthetic propaganda attempts to create the illusion of public sentiment where none may exist.

Disinformation networks and viral amplification
The spread of these fake visuals was not always accidental. Researchers and social media companies identified networks of accounts systematically posting AI-generated war videos to attract attention and monetise views.
Some of these posts accumulated hundreds of millions of views before being flagged. In one case, a fabricated video showing an Iranian strike on Dubai received over 200 million views on Facebook.
The incentive structure of social media—where engagement translates into revenue—encourages the rapid production of sensational content regardless of its authenticity.

AI tools: A double-edged sword
Ironically, artificial intelligence itself undermined efforts at verification. Many users turned to chatbots to confirm the authenticity of viral videos, and in several cases these systems incorrectly labelled AI-generated clips as genuine.
This illustrates a paradox of the digital age: the same technology that enables misinformation is also expected to detect and counter it.

Real-world consequences
The impact of such misinformation is far from trivial. Fake war footage can influence financial markets, diplomatic perceptions and civilian behaviour.
In the Gulf region, rumours triggered by viral videos led to panic buying in some neighbourhoods and heightened anxiety among expatriates. Governments in the region even warned citizens against sharing unverified content online, emphasising that misinformation could threaten public order.
India also witnessed its own media fallout. In an unprecedented step, the government temporarily froze television rating measurements for several weeks. The move was intended to discourage sensationalist war coverage and prevent news channels from broadcasting unverified visuals simply to chase higher ratings.

The erosion of trust
Perhaps the most worrying consequence of the fake-video epidemic is the erosion of trust in genuine journalism. When audiences are repeatedly exposed to manipulated visuals, they begin to doubt even authentic footage.
Experts warn that AI-generated war videos are now reaching hundreds of millions of viewers, polluting the global information ecosystem during critical geopolitical events.
In such an environment, distinguishing truth from fabrication becomes increasingly difficult.

The way forward
The surge of misinformation during the West Asian conflict reveals an urgent need for stronger safeguards in the digital information ecosystem.
First, social media platforms must improve the detection and labelling of AI-generated content. Second, news organisations must resist the temptation to amplify viral visuals without verification. Third, digital literacy among citizens must be strengthened so that users learn to question sensational footage before sharing it.
Wars have always produced propaganda and rumours. But in the age of generative AI, misinformation travels faster, looks more convincing and spreads to a far larger audience.
The conflict in West Asia has therefore exposed a sobering reality: while missiles may devastate cities, synthetic videos have the power to destabilise the truth itself.