Google reveals what machines 'dream' about in trippy photo series
Lauren O'Neil | CBC News | Posted: June 23, 2015 10:25 PM | Last Updated: June 25, 2015
Artificial Neural Networks produce creepy amalgamated images when they're given nothing else to do
Do androids dream of electric sheep?
A report released last week by Google's research team provides new insight into the title question of Philip K. Dick's iconic 1968 novel — minus all of the post-apocalyptic ethical quandaries and such.
Titled "Inceptionism: Going Deeper into Neural Networks," the buzzworthy new Google Research post explains that while artificial intelligence has come a long way in terms of image recognition, surprisingly little is known about why some mathematical methods and models work better than others.
To help illustrate this point, software engineers Alexander Mordvintsev, Mike Tyka and Christopher Olah lifted the curtain on how Google builds its artificial neural networks.
They also shared examples of images produced by the networks in response to specific commands — and, more interestingly, images produced without any specific instructions.
"We train networks by simply showing them many examples of what we want them to learn," reads the post. "[We hope] they extract the essence of the matter at hand (e.g., a fork needs a handle and 2-4 tines), and learn to ignore what doesn't matter (a fork can be any shape, size, color or orientation)."
To test what the networks had actually learned, the researchers decided to "turn the network upside down": instead of asking it to find or classify images of something specific, as it had been trained to do, they asked it to create images of things like bananas, ants and parachutes out of static, based on its own existing knowledge.
They found that their machines had "quite a bit of the information needed to generate images too."
While the engineers admit that in some cases, their method revealed "the neural net isn't quite looking for the thing we thought it was," the resulting images are incredibly cool nonetheless.
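The Google post describes this idea in prose rather than code, but the general recipe is gradient ascent on the input image itself. A minimal sketch, assuming PyTorch and an off-the-shelf torchvision GoogLeNet rather than Google's own network (the class index, learning rate and step count here are purely illustrative), might look like this:

import torch
import torchvision.models as models

# Off-the-shelf ImageNet classifier, frozen; only the input pixels are optimized.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1).eval()
for p in model.parameters():
    p.requires_grad_(False)

TARGET_CLASS = 954  # "banana" in the ImageNet-1k class list
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from static
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logits = model(image)
    # Nudge the pixels toward whatever raises the "banana" score,
    # with a small penalty to keep pixel values from blowing up.
    loss = -logits[0, TARGET_CLASS] + 1e-4 * image.norm()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0.0, 1.0)  # keep the result a displayable image

On its own, this tends to produce noisy, high-frequency patterns; the researchers note that adding a natural-image prior (for example, requiring neighbouring pixels to be correlated) is what makes the generated bananas recognizable.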
What really caught the web's attention, however, were the photos produced by the networks (which are loosely modelled on the structure of biological brains) without any sort of guidance.
Google's research team refers to the images it generated purely from random noise as "neural net dreams."
As it happens, you're more likely to find psychedelic dog-birds in a pinball machine than electric sheep.
Many online have pointed to the high number of weird animals found in these images. The researchers explained that this comes down to the data used to train the particular network (the project drew on training sets that included one from MIT's Computer Science and AI Laboratory).
"This network was trained mostly on images of animals, so naturally it tends to interpret shapes as animals," the report reads. "But because the data is stored at such a high abstraction, the results are an interesting remix of these learned features."
On the practical implications of this type of work, the research team wrote that, among other things, the technique could help them "understand and visualize how neural networks are able to carry out difficult classification tasks, improve network architecture, and check what the network has learned during training."
"The results are intriguing — even a relatively simple neural network can be used to over-interpret an image, just like as children we enjoyed watching clouds and interpreting the random shapes," the research team wrote."It also makes us wonder whether neural networks could become a tool for artists—a new way to remix visual concepts—or perhaps even shed a little light on the roots of the creative process in general."
"The results are intriguing — even a relatively simple neural network can be used to over-interpret an image, just like as children we enjoyed watching clouds and interpreting the random shapes," the research team wrote."It also makes us wonder whether neural networks could become a tool for artists—a new way to remix visual concepts—or perhaps even shed a little light on the roots of the creative process in general."