Apple or iPod? Artificial intelligence hoodwinked by handwritten note
AI system called CLIP taught itself how to read — a skill that caused a lot of problems
You would think an AI trained to identify images would know the difference between an apple and an iPod, but it turns out its impressive intelligence can be fooled with old-fashioned pen and paper.
CLIP, an artificial intelligence system designed by research lab OpenAI, was tricked into concluding, with almost complete certainty, that a Granny Smith apple bearing a handwritten note reading "iPod" was, in fact, an iPod.
In another example, researchers placed dollar signs on a photo of a poodle to see whether CLIP would classify the dog as a piggy bank.
"Surprisingly, that worked," Gabe Goh, OpenAI researcher and the co-author of a new study on this phenomenon, told As It Happens host Carol Off.
Reading between the lines
CLIP was designed by OpenAI to categorize images and associate them with relevant captions. OpenAI says CLIP outperforms other similar programs, known as vision systems, at accurately categorizing "sketches, cartoons, and even statues of the objects."
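For readers who want to see what that caption matching looks like in practice, here is a minimal sketch of CLIP's zero-shot classification using the Python package OpenAI released alongside the model. The image file name and the two candidate captions are assumptions chosen to mirror the apple/iPod experiment, not OpenAI's exact test.

# Minimal sketch of CLIP zero-shot classification, using OpenAI's released
# "clip" Python package (pip install git+https://github.com/openai/CLIP.git).
# The image file and candidate captions below are illustrative assumptions.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# An image of a Granny Smith apple with a handwritten "iPod" note stuck on it.
image = preprocess(Image.open("apple_with_ipod_note.jpg")).unsqueeze(0).to(device)

# CLIP scores the image against arbitrary text captions rather than a fixed label set.
captions = ["a photo of a Granny Smith apple", "a photo of an iPod"]
text = clip.tokenize(captions).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).squeeze(0)

for caption, p in zip(captions, probs.tolist()):
    print(f"{caption}: {p:.1%}")

In OpenAI's experiment, the handwritten note was enough to push almost all of the probability onto the "iPod" caption, an attack the researchers describe as "typographic."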
But the researchers recently learned that CLIP goes further than simply categorizing basic images, such as a photo of a dog. It thinks conceptually, much as a human does.
That's because it taught itself to read, Goh said.
"That was one of the big surprises that came out of the model, because it was not explicitly trained to read," Goh said. "It just sort of arose naturally out of this training objective."
With its ability to read, CLIP found itself looking at the world "literally, symbolically, or conceptually," OpenAI said in its research. The problem was that it had a hard time keeping those ways of looking separate.
OpenAI compares this to what's known as the "Halle Berry neuron," which scientists first described in 2005. It's a single neuron, or brain cell, that fires in response to one specific concept. For example, one study subject had a neuron that "responds to photographs, sketches, and the text 'Halle Berry,'" but no other names.
CLIP is also able to "organize images as a loose semantic collection of ideas," OpenAI said. That's why it was easily fooled by the handwritten note on the apple or a dollar sign on a poodle.
Goh said the "piggy bank neuron" was especially interesting and fired for various things related to finance, like the dollar signs or the word "cheap."
"Somehow it has learned to associate these concepts," Goh said. So when dollar signs were superimposed on the poodle, CLIP's "piggy bank neuron" fired.
Artificial intelligence, real biases
While mistaking a poodle for a piggy bank is a relatively inconsequential error, CLIP also made mistakes that show the AI's bias.
In one example, OpenAI found that a "Middle East" neuron was also associated with terrorism, and that an "immigration" neuron fired for Latin America. The researchers also found a neuron that "fires for both dark-skinned people and gorillas."
It's an issue Google's AI also faced back in 2015, when Google Photos labelled photos of Black people as gorillas. Google apologized, but its only solution was to stop Google Photos from labelling any pictures as gorillas, chimpanzees or monkeys.
OpenAI said this bias presents "obvious challenges" to CLIP ever being released into the real world.
"It is likely that these biases and associations will remain in the system, with their effects manifesting in both visible and nearly invisible ways during deployment," the study reads.
Because of that, OpenAI said it is still trying to determine if and how it would release a larger version of CLIP. But it hopes that by releasing its findings, it will further "community exploration."
Goh speculates that, like humans, CLIP misidentifies these images because it is working too quickly. He points to the Stroop effect, a psychology test in which people are asked to name the colour a word is printed in rather than read the word itself. For example, the word "blue" printed in red ink.
"Strangely enough," Goh said, "humans also kind of defer to the written text itself. So you would say blue, even though the colour of the text is red."
But this only happens when people are asked to complete the task quickly.
"My suspicion is that this is part of part of the growing pains of artificial intelligence, if you will, because right now, perhaps the model doesn't have the ability to sort of reflect very deeply on the task it's being asked [to do]," Goh said.
Written by Sarah Jackson. Interview with Gabe Goh produced by Chloe Shantz-Hilkes.