This TikTok account pumped out fake war footage with AI — until CBC News investigated

'AI slop' content represents a worrying trend, expert says

Caption: A screenshot of videos from the flight_area_zone TikTok account, which has since disappeared from the platform. The account featured videos of AI-generated but realistic-looking explosions and burning cities. (flight_area_zone/TikTok)

For months, an anonymous TikTok account hosted dozens of AI-generated videos of explosions and burning cities. The videos racked up tens of millions of views and fuelled other posts that claimed, falsely, they were real footage of the war in Ukraine.
After CBC News contacted TikTok and the account owner for comment, it disappeared from the platform.
The account, flight_area_zone, posted several videos of massive explosions that reached millions of viewers. The videos bore hallmarks of AI generation but carried no disclaimer, as required by TikTok guidelines(external link). TikTok declined to comment about the account.
Several of the videos were spread across other social media platforms by users who claimed they were actual war footage, with some posts gaining tens of thousands of views. In those posts, some commenters appeared to take the videos at face value, celebrating or denouncing the purported damage and coming away with an inaccurate sense of the war.

The rise of 'AI slop'

The flight_area_zone account is just one example of a broader trend in social media content, something experts call "AI slop."
It generally refers to content — images, video, text — created using AI. It's often poor quality, sensational or sentimental, in ways that seem designed to generate clicks and engagement.
AI-generated content has become an important factor in online misinformation. A preprint study published online this year, co-authored by Google researchers, showed that AI-generated misinformation quickly became nearly as popular as traditional forms of manipulated media in 2023.
WATCH | Fanning the flames of misinformation:

Media Video | CBC News: Exposing the viral 'AI slop' that's fuelling online misinformation

Caption: A TikTok account hosting videos of AI-generated explosions that others claimed were in Ukraine was removed from the platform following inquiries from the CBC News visual investigations team. The account shows how low-quality content made with generative AI — known as 'AI slop' — can warp perceptions and fuel misinformation.

In October, for example, a similar-looking video(external link) from a different TikTok account went viral as people claimed it depicted an Israeli strike on Lebanon. The video, showing raging fires in Beirut, was shared widely across social media and by several prominent accounts. In some cases it was packaged along with real videos showing fires in the Lebanese capital, further blurring the line between fake and real news.
Facebook has also seen an influx of AI-generated content(external link) meant to create engagement — clicks, views and more followers — which can generate revenue for its creators.
The flight_area_zone account also had a subscriber function where people could pay for perks like unique badges or stickers.
The explosion videos were convincing to some — commenters frequently asked for details about where the explosions were, or expressed joy or dismay at the images. But the videos still had some of the telltale distortions of AI-generated content.

Caption: A screenshot from another of the account's videos, showing signs it was made by AI, such as the oversized car (circled). (flight_area_zone/TikTok)

Cars and people on the street seem sped up or warped, and there are several other obvious errors — like a car that is far too large. Many of the videos also share identical audio.
The account also featured videos other than burning skylines. In one, a cathedral-like building burns. In another, a rocket explodes outside a bungalow.
WATCH | Concerns about AI and politics:

Media Video | The National: AI experts urge governments to take action against deepfakes

Caption: Hundreds of technology and artificial intelligence experts are urging governments worldwide to take immediate action against deepfakes — AI-generated voices, images and videos of people — which they say threaten society by spreading mis- and disinformation and could affect the outcome of elections.

Older videos showed AI-generated tornadoes and planes catching fire — a progression that suggests experimentation to find what kind of content would be popular and drive engagement, which can be lucrative.
One problem with AI slop, according to Aimée Morrison, an associate professor and expert in social media at the University of Waterloo, is that much of the audience won't take a second look.
"We've created a culture that's highly dependent on visuals, among a population that isn't really trained on how to look at things," Morrison told CBC News.
A second issue arises with images of war zones, where the reality on the ground is both important and contested.
"It becomes easier to dismiss the actual evidence of atrocities happening in [Ukraine] because so much AI slop content is circulating that some of us then become hesitant to share or believe anything, because we don't want to be called out as having been taken in," Morrison said.
Researchers said earlier this year that AI-generated hate content is also on the rise.
WATCH | AI's effects on the way war works:

Media Video | How is artificial intelligence (AI) changing the face of modern warfare?

Separately, UN research warned last year(external link) that AI could supercharge "anti-Semitic, Islamophobic, racist and xenophobic content." Melissa Fleming, the undersecretary-general for global communications, told a UN body that generative AI allowed people to create large quantities of convincing misinformation at a low cost.
Although TikTok and other social media platforms have guidelines for labelling AI-generated content, moderation remains a challenge, both because of the sheer volume of content produced and because machine learning itself is not always a reliable tool(external link) for automatically detecting misleading images.
The flight_area_zone account no longer exists, but an apparently related and much less popular account, flight_zone_51, is still active.
These limitations often mean users are responsible for reporting content, or adding context, such as with X's "community notes" feature. Users can also report unlabelled AI-generated content on TikTok, and the company does moderate and remove content that violates its community guidelines.
Morrison says the responsibility for flagging AI-generated content is shared both by social media platforms and their users.
"Social media sites ought to require digital watermarking on these things. They ought to do takedowns of stuff that's circulating as misinformation," she said.
"But in general, this culture ought to do a lot better job of training people in critical looking as much as in critical thinking."