As AI images flood the internet, can these tools help you tell what's real?
CBC News put some popular and free AI image detector tools to the test and got some mixed results

AI image detectors promise to cut through the flood of online content and help determine what's real and what's not — but can you rely on them to get it right?
Many people now regularly turn to AI detectors to help determine whether viral images are AI-generated. And as people grow increasingly skeptical of even real images, those tools are in greater demand.
CBC News tested five commonly recommended free AI image detectors to see whether they could accurately estimate if an image was real or AI-generated. These included: AI or Not, Illuminarty, Hive Moderation, Is It AI? and a tool hosted on the open-source machine learning platform Hugging Face. While many of these AI detector services have a free tier, some have pricing plans that can cost hundreds of dollars a year.
The CBC's visual investigations unit ran three photos through each detector twice. The images tested included a real photo of a CBC hallway lined with lockers, a similar-looking AI-generated image, and a lower-resolution, compressed version of the AI-generated image that had been posted to the social media platform X.
In the first test, AI or Not and Hive accurately labelled all three images. Illuminarty and Is It AI? got all three wrong. The detector hosted on Hugging Face accurately labelled the AI-generated images, but thought the real image was fake.
In the second test, the results were the same, except this time Is It AI? labelled the real image correctly but still got the AI-generated images wrong.
"It really does depend on which system you're looking at," said Ben Y. Zhao, a computer science professor at the University of Chicago.


Tests produced mixed results
CBC News reached out to all five AI detector companies for comment.
The CEOs of AI or Not and Hive both said their AI detectors are searching for patterns invisible to the naked eye.
"While some AI-generated images may be easier to identify by common errors — like extra fingers on someone's hand, unrealistic reflections and shadows, or the mismatched scale of objects — the sophistication of AI-generated imagery is moving beyond the threshold of human detection," said Hive CEO Kevin Guo in a statement.
AI or Not CEO Anatoly Kvitnitsky says their tool looks for "pixel-level patterns of content."
The creator of the AI detector on Hugging Face, developer Colby Brown, says AI detection is still worth pursuing, even if it gets some images wrong.
"User caution is needed," Brown said in a statement. "Individual images can fool such detectors even if they have reasonable accuracy on a larger sample (or feed) of images."
The team at Is It AI? said in a statement that the test CBC News performed highlights "the ongoing challenge that AI image detectors face" as the technology develops.
They also said that their tool "covers a wide range of domains and typically requires a larger and more diverse dataset to assess accuracy comprehensively."
As AI image generators continuously improve, so do detectors. Is It AI? said the company will soon release a new detection tool with "substantial improvements" in accuracy. Brown also said he may develop a new, more advanced tool.
Illuminarty didn't respond to CBC's requests for comment.
Zhao says some AI detectors are better than others.
"Some of them are trained on millions of images that allow them to do a better job with discerning the differences," he said.
He noted that bad actors can even use AI image detectors to iterate and fine-tune fake images that would then be labelled as real.
"I think the real danger is really to a lot of the folks who are not in a situation where they expect" to be targetted by AI-generated fakes, Zhao said, noting these are usually people who aren't as familiar with technology. "They're going to be easier targets for scammers and phishing scams and different kinds of things."
Zhao says that old tricks for detecting AI images are becoming less reliable. Famously, early iterations of AI image generators had trouble mimicking human hands, but he says that's not the case anymore.
Still, AI image generators don't get everything right, and a trained eye can often pick out details that clearly indicate AI was used.
The AI image CBC News used in the test can be identified as fake with the naked eye. The lockers in the hallway have locks that are warped and blurred, for example. The overhead lights have no fixtures, a panel on the ceiling has a line running through it, and there appear to be far too many lockers for the amount of space shown in the image.

Zhao says that when people are trying to tell the difference between a real photo and an AI-generated image, thinking through the details is important.
"Does it make sense for the button to be placed in this way? Does it make sense for the hair to blend in with the turtle neck that way? Those kinds of smaller details are really still tricky for models to get right," Zhao said.
Methodology:
During testing, CBC News sought to mimic the experience of a member of the general public. We chose five popular free online AI image detectors by tallying the number of recommendations from lists featured on the first five pages of Google search results and selecting the five most frequently recommended services.
Three images were tested: a real photo taken by CBC reporters, an AI-generated image and a compressed version of the same AI image that was posted to X and downloaded again. The five detectors were then scored as correct or incorrect based on whether they accurately judged each image as more likely human-made or AI-generated. The test was run twice.
The AI photo was generated with Google's Gemini AI using the prompt: "Create an image of a hallway with blue lockers filling half of the hallway on the left, grey checkered carpet and light orange wall on the right, and white hallway on the left back. A red wall is at the end of the hallway."