Inceptionism: Google researchers give neural networks LSD

June 22, 2015

What happens if you set an artificial network of simulated neurons loose on ordinary photos? Ideally, it tells you what can be seen in the picture. In the extreme case, if you push the network to keep searching for meaning, it sees things that are not there, in a way that is astonishingly similar to our own perception.

The basic principle behind such neural networks is always the same. They consist of two or more layers of nodes, the simulated neurons. The layers are connected by many links; typically, each node of one layer is connected to every node of the next. The input nodes stand for elementary features; they could represent, for example, the pixels of a given image.
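The layered structure described above can be sketched in a few lines. This is a minimal illustration, not the Google system; all sizes and names are made up, and a fully connected layer turns out to be nothing more than a matrix of weights, one entry per link:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 4   # input nodes, one per pixel of a tiny hypothetical image
n_hidden = 3   # nodes of the next layer

# One weight per connection: "each node of one layer connected
# to every node of the next" is just a (n_pixels x n_hidden) matrix.
weights = rng.normal(size=(n_pixels, n_hidden))

pixels = np.array([0.0, 1.0, 0.5, 0.2])  # activations of the input nodes

# Each node of the next layer sums the weighted activations
# of all input nodes.
next_layer = pixels @ weights
print(next_layer.shape)  # (3,) -- one activation per next-layer node
```

Stacking several such matrices, with a nonlinearity between them, gives the multi-layer networks the article describes.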

“Enhance an input image so that it elicits a particular interpretation”

If an input node is activated, it passes this activation on through its connections to the nodes of the next layer. The connections are weighted; you can picture them as being of different thicknesses. The thicker the connection, the stronger the activation that arrives at the next node. Teaching a network works in a sense in reverse: if the output layer does not produce the desired result, the weights of the connections are adjusted layer by layer, using a mathematical mechanism, so that the output fits the actual input better next time. Over many passes, such networks can learn to link inputs with the correct outputs. Already in the eighties, neural networks learned, for example, to conjugate English verbs (see below).
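A toy version of that learning loop can make the idea concrete. The sketch below uses a single layer, made-up sizes, and plain gradient descent standing in for the "mathematical mechanism": compare the output with the desired result, then nudge every connection weight so the error shrinks on the next pass.

```python
import numpy as np

rng = np.random.default_rng(1)

inputs = rng.normal(size=(8, 2))   # 8 training examples, 2 input nodes
true_w = np.array([1.5, -2.0])     # the relationship the net should learn
targets = inputs @ true_w          # desired outputs

weights = np.zeros(2)              # start with "thin" connections
lr = 0.1                           # illustrative learning rate

for _ in range(200):
    outputs = inputs @ weights     # forward pass through the weights
    error = outputs - targets      # how far off the output layer is
    # Gradient of the squared error w.r.t. each weight: this is what
    # tells us in which direction to thicken or thin each connection.
    grad = inputs.T @ error / len(inputs)
    weights -= lr * grad

print(weights)  # after many passes, close to the true weights
```

After enough passes the learned weights approach `[1.5, -2.0]`, i.e. the network has linked inputs to the correct outputs purely by repeated weight adjustment.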


David E. Rumelhart / James L. McClelland

Neural network from a classic 1985 research article: input and output nodes, weighted links

Today, neural networks are in use in fields such as speech and image recognition, among other places at Google. There, a research group led by the Russian Alexander Mordvintsev used neural networks with “10 to 30 layers” to recognize images.

Everyone knows the phenomenon from their own experience

Mordvintsev and his colleagues train their networks in much the same way as was done back in the eighties, only that the networks and their results are far more complex: “We know that after training, each layer extracts progressively higher-level features of the image, until the final layer in a sense decides what the picture shows.” One layer might look for edges or corners, for example, later ones for “rough shapes or components, such as a door or a leaf.” The final layers then assemble these components into a complete interpretation of the picture.
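What it means for an early layer to be "sensitive to edges" can be shown with a toy example. The kernel below is illustrative and not taken from the Google network: a tiny two-pixel detector slid across a one-dimensional "image" responds most strongly exactly where neighboring pixels differ, i.e. at an edge.

```python
import numpy as np

# A 1-D "image": dark on the left, bright on the right, one edge between.
image = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

# A minimal edge detector: fires on a dark-to-bright step.
edge_kernel = np.array([-1.0, 1.0])

# Slide the kernel across the image, like one node of an early layer
# looking at each local patch in turn.
response = np.array([image[i:i + 2] @ edge_kernel
                     for i in range(len(image) - 1)])
print(response)  # largest response at the position of the edge
```

Deeper layers would then combine many such local responses into larger shapes, which is exactly the hierarchy the researchers describe.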

To show how such a network works, Mordvintsev and his colleagues have now, in a sense, turned its operation on its head. “We asked it to enhance an input image in such a way that it would elicit a particular interpretation.” The network was thus instructed, for example, to see ants, starfish or screws in a picture full of black-and-white pixel noise, and to accentuate the corresponding parts of the picture.
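The inverted procedure can be sketched under strong simplifying assumptions. Instead of a deep network, the toy below uses one fixed linear "detector" (standing in, say, for an ant detector); instead of adjusting the weights, it adjusts the *image* by gradient ascent, so that the detector's activation keeps growing, just as the researchers enhanced their noise images:

```python
import numpy as np

rng = np.random.default_rng(2)

detector = rng.normal(size=100)  # stands in for a trained feature detector
image = rng.normal(size=100)     # black-and-white pixel noise

def activation(img):
    # How strongly the detector "fires" on this image.
    return detector @ img

before = activation(image)
for _ in range(50):
    # For a linear detector, d(activation)/d(image) is the detector itself.
    grad = detector
    image += 0.1 * grad          # enhance the image, not the weights

after = activation(image)
print(after > before)            # True: the noise now "shows" more of the feature
```

In the real system the detector is a deep layer and the gradient comes from backpropagation through the whole network, but the principle is the same: climb the activation by changing the pixels.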

The results are astounding, and they recall a phenomenon everyone knows from their own experience: if you stare at unstructured visual stimuli, be it woodchip wallpaper, a carpet or a cloud, sooner or later you will believe you can detect objects or faces in it. The artificial neural networks did exactly the same. Mordvintsev and his colleagues have baptized their technique “Inceptionism”, after Christopher Nolan’s film of layered dreams.

Pig with snail, fish with dog face

Next, the researchers fed their network photos of real objects or landscapes, but instructed a particular layer of the network to accentuate “its” perception. When early layers of the network received this instruction, stripe or squiggle patterns emerged, “because these layers are sensitive to basic features such as edges and their orientations”.

But when the later, deeper layers of the network, the masters of interpretation, were given this task, something amazing happened: from an unglamorous photo of white clouds against a blue sky, the network gradually extracted fantastic objects and figures, for instance a creature that looks like a cross between a pig and a snail, or a fish with the face of a dog.

When this technique is applied to various images, insane, often strangely disturbing pictures emerge, reminiscent of the works of artists who have tried to capture hallucinations. It is as if the researchers had administered psychoactive drugs to their networks, and in a sense that is true: even human hallucinations are often nothing more than the over-interpretation of certain features, produced by an overexcited visual cortex from the input the eyes supply.

Neural networks have thus reached the point where they not only learn much like human children; they can also create images that can no longer be distinguished from psychedelic art made by human hands.

Neural networks: History

The idea that one could replicate the structure of the human nervous system in order to teach machines something like thinking, learning or perception dates back to the forties of the last century. For a long time, however, the so-called neural network models remained rather rudimentary, a field for specialists with a particular fondness for abstraction. That changed in the first half of the eighties, thanks mainly to a single study. The psychologists David Rumelhart and James McClelland showed that even an extremely rudimentary pseudo-brain can learn to form the past tenses of English verbs correctly, and that in the course of learning it temporarily makes the same mistakes as a human child performing the same task. Instead of “went”, the network temporarily answered “goed”: it applied the rule correctly, but to an irregular verb.


The network learned rules and then the exceptions to those rules, without a single rule ever being formulated explicitly. The study triggered a small boom in cognitive science; suddenly neural network models were applied to all kinds of problems, and the term “connectionism” emerged for the new discipline. Then came the internet, the digital revolution ran its course, and suddenly there was computing power and suitable hardware galore. Today, neural networks are no longer just models for psychologists; they have become powerful tools in the hands of those who want to enable computers to see, think and interpret.
