Google never fail to amaze us. And their recent research into neural networks has done exactly that.
Using powerful computers, Google have been conducting research into artificial neural networks: systems, loosely modelled on the human brain, that can learn to recognise images and speech.
The network learns by example: it is shown a series of images to identify and is corrected each time it makes a mistake. Over many repetitions, it is trained to understand what a certain image or sound is.
The processing itself happens in layers: an image is fed into the first layer and then passed through 10 to 30 more, gradually building up an answer about what the image is. The first layers might pick out the edges of an image, the next ones the basic shapes, and the final ones the fine details that allow the network to recognise exactly what it is looking at. Clever, right?
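As a loose illustration of the "show an image, correct it on each mistake" training loop described above (a toy sketch in plain Python, not Google's actual system), here is a single artificial unit learning to tell "bright" 2x2 images from "dark" ones. The task, the data, and the learning rate are all hypothetical, chosen only to make the correction loop visible:

```python
import random

random.seed(42)

def predict(weights, bias, pixels):
    """The unit's answer: 1 for 'bright', 0 for 'dark'."""
    s = bias + sum(w * p for w, p in zip(weights, pixels))
    return 1 if s > 0 else 0

def make_image():
    """A toy 2x2 'image': four pixel values, labelled 1 if bright on average."""
    pixels = [random.random() for _ in range(4)]
    label = 1 if sum(pixels) / 4 > 0.5 else 0
    return pixels, label

weights = [0.0] * 4
bias = 0.0
lr = 0.1

# Training loop: show an image, and nudge the weights whenever the
# unit's answer is wrong (the perceptron learning rule).
for _ in range(2000):
    pixels, label = make_image()
    error = label - predict(weights, bias, pixels)
    if error:
        bias += lr * error
        weights = [w + lr * error * p for w, p in zip(weights, pixels)]

# After training, check how often the unit now answers correctly.
test_set = [make_image() for _ in range(200)]
accuracy = sum(predict(weights, bias, p) == y for p, y in test_set) / len(test_set)
```

A real image-recognition network stacks many layers of units like this one, but the principle is the same: mistakes drive small corrections until the answers come out right.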
Well, things got even more interesting when Google decided to run these layers in reverse. Rather than passing an image forward through the network to get an answer, the researchers worked backwards from the later layers, asking the network what kind of image would produce them.
To their amazement, Google researchers found that these human-like networks could produce pictures of things that were never really there, almost as if they were dreaming. By setting the network to over-interpret an image, amplifying whatever features it thought it detected, they produced results in strange, rich detail.
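The over-interpretation idea can be sketched in a few lines (an assumption about how the technique works in miniature, not Google's actual code): hold the network's weights fixed and instead nudge the *image* so that a chosen feature detector responds more strongly. The detector here is a hypothetical "diagonal edge" unit on a 2x2 image:

```python
# A hypothetical feature detector: fires for diagonal patterns.
weights = [1.0, -1.0, -1.0, 1.0]
image = [0.5, 0.5, 0.5, 0.5]   # start from a flat grey image
step_size = 0.1

def activation(img):
    """How strongly the detector responds to this image."""
    return sum(w * p for w, p in zip(weights, img))

before = activation(image)

# Gradient ascent on the pixels, not the weights: for a linear unit,
# the gradient of the activation with respect to each pixel is simply
# that pixel's weight. Pixels are clamped to the valid range [0, 1].
for _ in range(10):
    image = [min(1.0, max(0.0, p + step_size * w))
             for p, w in zip(image, weights)]

after = activation(image)
```

After a few steps the flat grey image has drifted towards a bright diagonal, the pattern the detector "wants to see". Doing the same with a deep network's layers is what turns photographs into those dream-like pictures.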
From ants, starfish and screws to bananas, pig-snails and dog-fish, the results were surprisingly odd and have generated a lot of fascination and interest.
But don’t worry, this isn’t all just for fun. Google researchers aim to use these results to learn more about how the networks think and at what level of abstraction they process images. The technique also reveals how far a network has progressed in its learning, which could in turn help researchers understand more about how humans think.
Image Credit: Google