Fake AI Satellite Imagery Spurs US-Iran War Disinformation, Experts Warn

SMW Media Team

The satellite image posted by an Iranian news outlet looked chillingly real: a devastated US military base in Qatar, with buildings in ruins. But it was an AI-generated fake, a stark illustration of how generative artificial intelligence is turbocharging disinformation during the escalating West Asia conflict.

The rise of tools capable of creating convincing synthetic imagery has given state actors and propagandists a powerful new weapon, blurring the lines between reality and fiction in ways that researchers warn carry serious real-world security implications.

The Case of the Destroyed US Base

On X (formerly Twitter), the Tehran Times, a state-aligned English-language daily, posted a “before vs. after” image. It claimed to show a US base in Qatar that had been “completely destroyed.”

However, open-source intelligence (OSINT) researchers quickly debunked the image. Analysis revealed it was an AI-manipulated version of a Google Earth image from the previous year, which actually showed a US base in Bahrain.

Subtle visual giveaways included a row of cars parked in identical positions in both the authentic satellite photo and the manipulated version—a telltale sign of digital tampering.
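One way researchers spot this kind of reuse is to compare a suspect "after" picture against the known-authentic source and check whether regions that should have changed are in fact pixel-identical. The short Python sketch below illustrates that idea with a crude region comparison using Pillow and NumPy; the file names, crop coordinates, and threshold are hypothetical assumptions for illustration, and real OSINT verification also involves georeferencing, capture-date checks, and human review rather than a single automated score.

```python
# Minimal sketch: compare a cropped region (e.g., a parking lot) of a
# known-authentic satellite image with the same region of a suspect image.
# If the regions are nearly pixel-identical, the "after" picture was likely
# derived from the "before" rather than newly captured.
# File names, crop box, and threshold below are illustrative assumptions.

import numpy as np
from PIL import Image

def region_similarity(authentic_path, suspect_path, box):
    """Return mean absolute pixel difference for a crop (0 = identical).

    box is a (left, upper, right, lower) crop in pixel coordinates,
    assuming both images are already aligned to the same footprint.
    """
    authentic = Image.open(authentic_path).convert("L").crop(box)
    suspect = Image.open(suspect_path).convert("L").crop(box)
    # Resize the suspect crop so the two arrays can be compared element-wise.
    suspect = suspect.resize(authentic.size)
    a = np.asarray(authentic, dtype=np.float32)
    b = np.asarray(suspect, dtype=np.float32)
    return float(np.mean(np.abs(a - b)))

if __name__ == "__main__":
    # Hypothetical inputs: an archived Google Earth capture and the viral image.
    diff = region_similarity("authentic_2023.png", "viral_after.png",
                             box=(400, 300, 700, 450))
    if diff < 2.0:  # near-identical pixels are a red flag for reused imagery
        print(f"Region nearly identical (diff={diff:.2f}): possible reuse/tampering")
    else:
        print(f"Region differs (diff={diff:.2f}): no reuse flagged by this crude check")
```

In practice, analysts would use perceptual hashing or feature matching rather than raw pixel differences, but the principle is the same: content copied verbatim from an older image, like a row of cars that has not moved, is evidence the "new" picture was not freshly captured.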

Despite these clues, the manipulated photo garnered millions of views as it spread across social media platforms in multiple languages, demonstrating how users are increasingly struggling to distinguish reality from AI-generated fiction.

Experts Warn of an ‘Increase in Manipulated Imagery’

Brady Africk, an open-source intelligence researcher, noted a distinct “increase in manipulated satellite imagery” appearing on social media following major events in the Middle East war.

“Many of these manipulated images have the hallmarks of imperfect AI-generation: odd angles, blurred details, and hallucinated features that don’t align with reality,” Africk explained. “Others appear to have been manipulated manually, often by superimposing indications of damage or other changes on a satellite image that had no such details to begin with.”

Another Fake: Targeting Iranian Aircraft

The disinformation is flowing in multiple directions. Information warfare analyst Tal Hagin flagged another AI-generated satellite image that purported to show that Israeli and US jets had struck only the painted silhouette of an aircraft on the ground in Iran, implying Tehran had moved its real planes elsewhere.

The telltale clues in this image included gibberish coordinates embedded in the fake. Furthermore, AFP detected a SynthID watermark, the invisible marker used to identify images created with Google AI tools, confirming the picture's synthetic origin. The image nonetheless spread rapidly across Instagram, Threads, and X.

The Threat to OSINT and Real-World Consequences

This wave of fabricated satellite imagery is also undermining the field of open-source intelligence itself.

“Due to the fog of war, it can be very difficult to determine the success of an adversary’s strikes. OSINT came as a solution, using public satellite imagery to circumvent the censorship” inside countries like Iran, Hagin said. “But it’s now being preyed upon by disinformation agents.”

The consequences extend beyond just misleading the public. As Africk warned, “Manipulated satellite imagery, like other forms of misinformation, can have real-world impacts when people act on the information they come across without verifying its authenticity. This can have effects that range from influencing public opinion on a major issue, like whether or not a country should engage in conflict, to impacting financial markets.”

The Importance of Verification

In this new age of AI-enabled conflict, authentic high-resolution satellite imagery collected in real time is more critical than ever to give decision-makers vital clues and debunk falsehoods.

Satellite intelligence companies are playing a key role. For instance, after a recent militant attack on Niamey airport in Niger, the satellite intelligence firm Vantor used its own imagery to confirm that photos circulating online, which purported to show the main terminal on fire, were almost certainly AI-generated fakes.

“When a satellite image is presented as visual evidence in the context of war, it can easily influence how people interpret events,” said Bo Zhao from the University of Washington. As AI-generated imagery grows increasingly convincing, it is “important for the public to approach such visual content with caution and critical awareness.”
