Remember those amazing images from last week that looked like a computer having an acid trip? What if you could watch them live?
A group of PhD students from Belgium has created a Twitch livestream which lets viewers look into the mind of a neural network as it “dreams”.
The livestream builds on research published by Google last week, in which the search firm demonstrated a method of using an image recognition algorithm to generate images, rather than simply categorising them.
Google’s algorithm instructed a neural network which had been trained to recognise specific aspects of an image (say, animals or buildings) to subtly alter the image, strengthening the aspects it recognised, and then to feed the output back into the start of the process. In doing so, it could generate alarmingly detailed dreamscapes from scratch.
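The feedback loop described above can be sketched in a few lines. This is a toy illustration, not Google’s code: the “network” here is just a fixed filter whose response stands in for a real convnet layer’s activation, and `amplify` performs the strengthen-and-feed-back step by gradient ascent on the image.

```python
import numpy as np

# Toy stand-in for a trained network: the "activation" is how strongly
# a fixed feature filter responds to the image. A real run would use a
# convnet layer's response instead; this filter is purely illustrative.
rng = np.random.default_rng(0)
feature = rng.standard_normal((64, 64))

def activation(image):
    """Scalar score: how strongly the 'network' sees the feature."""
    return float(np.sum(image * feature))

def amplify(image, steps=20, lr=0.05):
    """Gradient ascent on the image itself: strengthen whatever the
    network recognises, then feed the result back in."""
    img = image.copy()
    for _ in range(steps):
        grad = feature  # d(activation)/d(image) for this linear toy model
        img = img + lr * grad / (np.abs(grad).max() + 1e-8)
    return img

start = rng.standard_normal((64, 64))
dream = amplify(start)
```

After a few iterations the image scores higher on the chosen feature than the starting frame did, which is exactly the “strengthening” the article describes.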
The Twitch stream, created by Jonas Degrave, Lionel Pigou, Aaron van den Oord and Sander Dieleman at Belgium’s Ghent University, builds on that process. The live video is created by taking a randomly generated frame, zooming in slightly, rotating slightly, and applying the algorithm to strengthen the desired feature.
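The per-frame geometry (slight zoom, slight rotation, feed back) can be sketched with plain NumPy. This is a guess at the kind of transform involved, not the students’ actual code; both helpers use crude nearest-neighbour resampling for brevity.

```python
import numpy as np

def zoom(img, factor=1.02):
    """Crop the centre slightly and resize back up (nearest neighbour)."""
    h, w = img.shape
    ch, cw = int(h / factor), int(w / factor)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[top:top + ch, left:left + cw]
    ys = np.linspace(0, ch - 1, h).astype(int)
    xs = np.linspace(0, cw - 1, w).astype(int)
    return crop[np.ix_(ys, xs)]

def rotate(img, degrees=1.0):
    """Rotate a small angle about the centre (nearest neighbour)."""
    h, w = img.shape
    t = np.deg2rad(degrees)
    cy, cx = (h - 1) / 2, (w - 1) / 2
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse-map each output pixel back to its source coordinate.
    sy = cy + (ys - cy) * np.cos(t) - (xs - cx) * np.sin(t)
    sx = cx + (ys - cy) * np.sin(t) + (xs - cx) * np.cos(t)
    sy = np.clip(np.rint(sy), 0, h - 1).astype(int)
    sx = np.clip(np.rint(sx), 0, w - 1).astype(int)
    return img[sy, sx]

# One stream step: nudge the previous frame; in the real system the
# feature-amplifying algorithm would then run on the result.
frame = np.random.default_rng(1).standard_normal((64, 64))
next_frame = rotate(zoom(frame))
```

Because each new frame starts from a nudged copy of the last, rather than a blank slate, the stream drifts continuously instead of cutting between unrelated images.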
As for what specific feature the neural network looks for, viewers can suggest objects in the chat channel for the stream, and a new one is automatically selected every few seconds. The past few minutes have seen the AI looking for a tree frog, then a Welsh corgi, then a spider monkey, then a cello, all in a row.
An interesting quirk of the methodology is that the previous image sticks around, rather than being reset each time, so that an image clearly incorporating spider monkey features slowly morphs into one that looks more like a nightmarish collection of technicolour cellos: lines get straightened out, strings get added in, and eyes become ƒ-holes.