Watch Blade Runner recreated by an artificial neural network.
This is absolutely fascinating. What you are about to watch isn’t actually Ridley Scott’s Blade Runner. Instead, it is a frame-by-frame reconstruction of the film produced by a type of artificial neural network called an autoencoder.
In the words of the project’s creator: “The model is a variational autoencoder trained with a learned similarity metric [Larsen et al. 2015 – arxiv.org/abs/1512.09300] that I implemented in TensorFlow at a resolution of 256×144. It has been trained on every individual frame of Blade Runner for 6 epochs before reconstructing the film.”
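For readers who want a sense of what that kind of architecture looks like, here is a minimal sketch, in TensorFlow/Keras, of a convolutional variational autoencoder for 256×144 frames with a 200-dimensional latent code (the “200-digit representation” described further down). This is not Broad’s code; the layer counts, filter sizes, and activations are illustrative assumptions.

```python
# Minimal sketch (not Broad's actual code) of a convolutional variational
# autoencoder for 256x144 RGB frames with a 200-dimensional latent code.
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 200              # the "200-digit representation" of each frame
FRAME_H, FRAME_W = 144, 256   # 256x144 resolution (width x height)

# Encoder: convolutions reduce a frame to the mean and log-variance of a
# 200-dimensional Gaussian code.
encoder_in = tf.keras.Input(shape=(FRAME_H, FRAME_W, 3))
x = layers.Conv2D(32, 4, strides=2, padding="same", activation="relu")(encoder_in)
x = layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2D(128, 4, strides=2, padding="same", activation="relu")(x)
x = layers.Flatten()(x)
z_mean = layers.Dense(LATENT_DIM)(x)
z_log_var = layers.Dense(LATENT_DIM)(x)
encoder = tf.keras.Model(encoder_in, [z_mean, z_log_var], name="encoder")

# Decoder: maps a 200-dimensional code back to a full-resolution frame.
decoder_in = tf.keras.Input(shape=(LATENT_DIM,))
x = layers.Dense(18 * 32 * 128, activation="relu")(decoder_in)
x = layers.Reshape((18, 32, 128))(x)
x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu")(x)
decoder_out = layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="sigmoid")(x)
decoder = tf.keras.Model(decoder_in, decoder_out, name="decoder")

def reparameterize(z_mean, z_log_var):
    # Sample z = mean + sigma * epsilon so the sampling step stays differentiable.
    eps = tf.random.normal(tf.shape(z_mean))
    return z_mean + tf.exp(0.5 * z_log_var) * eps
```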
The reconstruction is the work of Terence Broad, a researcher living in London who is working on a master’s degree in creative computing.
Broad’s goal was to apply “deep learning,” a branch of machine learning built on many-layered artificial neural networks, to video; he wanted to discover what kinds of creations a rudimentary form of AI might be able to generate when it was “taught” to understand real video data.
Broad wanted to teach an artificial neural network to carry out this video-encoding process on its own, without relying on human-designed rules. An artificial neural network is a computational model loosely patterned on the brain and central nervous system: a machine-built form of artificial intelligence that tackles complex tasks the way a nervous system does, with many simple parts each gathering information and passing it along so the system as a whole can respond.
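As a toy illustration of that idea (not from Broad’s project), the snippet below builds a two-layer network in plain Python/NumPy: each layer takes in signals, transforms them, and passes the result on to the next, and the network’s output emerges from those parts working together.

```python
# Toy feedforward network: each layer combines incoming information and
# passes it on to the next layer.
import numpy as np

def layer(inputs, weights, biases):
    # One "part" of the network: weighted sum of inputs, then a ReLU activation.
    return np.maximum(0.0, inputs @ weights + biases)

rng = np.random.default_rng(0)
pixels = rng.random(4)                                    # a tiny "frame" of 4 values
hidden = layer(pixels, rng.standard_normal((4, 3)), np.zeros(3))
output = layer(hidden, rng.standard_normal((3, 2)), np.zeros(2))
print(output)  # the whole network's response, built from its parts
```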
Blade Runner – Autoencoded: Full film from Terence Broad on Vimeo.
Vox had a rundown of what Broad did:
Broad decided to use a type of neural network called a convolutional autoencoder. First, he set up what’s called a “learned similarity metric” to help the encoder identify Blade Runner data. The metric had the encoder read data from selected frames of the film, as well as “false” data, or data that’s not part of the film. By comparing the data from the film to the “outside” data, the encoder “learned” to recognize the similarities among the pieces of data that were actually from Blade Runner. In other words, it now knew what the film “looked” like.
Once it had taught itself to recognize the Blade Runner data, the encoder reduced each frame of the film to a 200-digit representation of itself and reconstructed those 200 digits into a new frame intended to match the original. (Broad chose a small file size, which contributes to the blurriness of the reconstruction in the images and videos I’ve included in this story.) Finally, Broad had the encoder resequence the reconstructed frames to match the order of the original film.
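To make that pipeline concrete, here is a rough sketch of how the “learned similarity metric” and the frame-by-frame reconstruction could be wired up, reusing the `encoder`, `decoder`, and `reparameterize` pieces from the earlier sketch. In the Larsen et al. approach, reconstructions are compared with real frames in the feature space of a discriminator network that has learned what film frames look like, rather than pixel by pixel; the layer sizes and loss details here are illustrative assumptions, not Broad’s implementation.

```python
# Sketch of a discriminator-based "learned similarity metric" and a
# frame-by-frame reconstruction loop (assumes the encoder/decoder/reparameterize
# definitions from the VAE sketch above).
import tensorflow as tf
from tensorflow.keras import layers

# Discriminator: learns to tell film frames from other data; its intermediate
# features define the similarity metric.
disc_in = tf.keras.Input(shape=(144, 256, 3))
f = layers.Conv2D(32, 4, strides=2, padding="same", activation="relu")(disc_in)
f = layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(f)
features = layers.Flatten()(f)
real_or_fake = layers.Dense(1)(features)
discriminator = tf.keras.Model(disc_in, [features, real_or_fake], name="discriminator")

def feature_similarity_loss(real_frames, reconstructed_frames):
    # Compare frames by their discriminator features rather than raw pixels.
    real_feat, _ = discriminator(real_frames)
    fake_feat, _ = discriminator(reconstructed_frames)
    return tf.reduce_mean(tf.square(real_feat - fake_feat))

def reconstruct_film(frames):
    # Encode each frame to its 200-number code, decode it back to an image,
    # and keep the frames in their original order so the result plays as the film.
    reconstructed = []
    for frame in frames:  # frames: iterable of (144, 256, 3) float arrays
        z_mean, z_log_var = encoder(frame[None, ...])
        z = reparameterize(z_mean, z_log_var)
        reconstructed.append(decoder(z)[0])
    return reconstructed
```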
In addition to Blade Runner, Broad also “taught” his autoencoder to “watch” the rotoscope-animated film A Scanner Darkly. Both films are adaptations of famed Philip K. Dick sci-fi novels, and Broad felt they would be especially fitting for the project (more on that below).
Broad repeated the “learning” process a total of six times for both films, each time tweaking the algorithm he used to help the machine get smarter about deciding how to read the assembled data.
Broad told Vox in an email that the neural network’s version of the film was entirely unique, created based on what it “sees” in the original footage. “In essence, you are seeing the film through the neural network. So [the reconstruction] is the system’s interpretation of the film (and the other films I put through the models), based on its limited representational ‘understanding.’”
Warner Bros. issued a DMCA takedown notice to Vimeo over the uploaded reconstruction. In other words: Warner had just DMCA’d an artificial reconstruction of a film about artificial intelligence being indistinguishable from humans, because it couldn’t distinguish between the simulation and the real thing.