Portent of things to come: AI-generated image of Pentagon explosion goes viral, triggers market downturn
By lauraharris // 2023-06-01
 
An image purportedly showing an explosion at the headquarters of the U.S. Department of Defense (DoD) in Virginia triggered turmoil on social media and caused a market downturn. The image, however, turned out to be a fake generated using artificial intelligence (AI).

Several news outlets tweeted the image, among them Russia's RT and India's Republic TV. The Russian outlet initially captioned it as an actual explosion before deleting its tweet. The Indian broadcaster later stated: "Republic TV had aired news of a possible explosion near the Pentagon, citing a post [and] picture tweeted by RT. RT has deleted the post and Republic TV has pulled back the newsbreak." A May 23 press release from the Russian state-owned network added: "As with fast-paced news verification, we made the public aware of reports circulating. Once provenance and veracity were ascertained, we took appropriate steps to correct the reporting."

While news outlets were duped by the fake image that went viral on Twitter, several markets experienced downturns because of the fabricated picture. The Kobeissi Letter noted that after multiple news sources claimed the image of the explosion was real, the S&P 500 dropped 30 points within minutes. This resulted in a $500 billion swing in market capitalization. The Dow Jones Industrial Average fell about 80 points over a four-minute span after the image went viral, but later recovered.

Things only returned to normal when the Arlington County Fire Department debunked the image and announced that no explosion or incident had occurred at or near the DoD compound. A deputy officer for the Defense Department confirmed that the image circulating on social media was fake, but declined to comment further.

AI-generated fakes will FOOL EVERYONE

A finger-pointing contest soon followed after the AI-generated image of the explosion was debunked. Some users blamed the advanced technology for generating the fake image, while others criticized accounts that shared it without verifying its authenticity.

Nick Waters of the open-source investigation outlet Bellingcat explained how the image could be identified as fake. Speaking to Al Jazeera, he pointed out several inconsistencies in the AI-generated picture. (Related: AI learns to cut corners by hiding data – modern tech now knows how to LIE and CHEAT.)

First, he noted inconsistencies in the Pentagon building's facade: the fence merged with the crowd barriers, and he also cited an odd-looking floating lamp post and a black pole protruding from the pavement. Second, he noted that no other photos, videos or firsthand witnesses corroborated the alleged explosion. Waters ultimately emphasized the importance of geolocation and social media searches in identifying such fakes.

The ability of AI-powered deepfake technology to create realistic hoaxes – such as the fake image of the Pentagon explosion – is arguably more dangerous than the advancement of the technology itself. Equally dangerous is AI's ability to bombard information channels with propaganda and disinformation.

RT weighed in on the incident in a post on the Russian social media platform VKontakte, seeking to shed light on its mistake of re-tweeting the now-disproved image of the explosion. "Is the Pentagon on fire? Look, there's a picture and everything. It's not real, it's just an AI-generated image," the Russian state-owned outlet wrote. "Still, this picture managed to fool several major news outlets full of clever and attractive people, allegedly."

Watch this video explaining how the fake Pentagon explosion photo exposes government propaganda efforts. This video is from the InfoWars channel on Brighteon.com.

More related stories:

AI robot can draw what you're thinking by reading your brain impulses.

AI is the "enabling technology" for the coming global surveillance state… you will be watched by artificial intelligence.

Creepy Google AI targeted father, falsely accused him of child sex crimes following private conversation between him and doctor about son’s groin problems.

Dangerous new identity precedents being set by AI; soon we won't know the difference between who's real or who's fake.

Sources include:

DailyMail.co.uk

AlJazeera.com

Brighteon.com