Sunday June 12, 2016

An Artificial Intelligence remade Blade Runner

An image from the movie Blade Runner, reconstructed through a neural network. (Terence Broad)

Terence Broad is an artist and computing student who has created a new version of Ridley Scott's 1982 movie 'Blade Runner'. "Some people say it looks like memories," Terence says. "You have prolonged periods where it's almost identical to the original scenes from Blade Runner. Then all of a sudden it loses the tracking and it explodes into this blurry noise, and it doesn't really know what to make of what it's seeing."

The "it" Terence refers to is a neural network (a kind of artificial intelligence based on a series of simple programs, modelled after the neurons in our brains, that are linked) allowing them to feedback to each other and learn from new input. It's the same technology that is behind Google's AlphaGo program, and that created all those creepy pictures of dogs and pagodas. It's also part of the AI that's meant improvements in image recognition, or the kind of speech recognition that lets you talk to your smartphone and get an answer.

Terence remembers going to an arts festival where "...they presented this work where they were able to reconstruct video based on what people were seeing [while] in an MRI machine. It absolutely blew my mind..." While that work allowed you to see inside a human brain, Terence's system tries to look inside a computer's "brain", that neural network, to see how the computer interprets, and reconstructs, images from the movie.

Aja Romano wrote about Terence's Blade Runner remake for the website Vox. She says that the project is based on the idea of "encoding". "Traditionally," Aja says, "humans... have come together and... created systems that everyone agrees on that basically tell... computers how to interpret different types of images... and image formats. Encoding is essentially when you break an image down into its composite digital parts and then put it back together again."

Encoding, as Aja explains, is what allows you to turn a video or song into zip files or MP3s, send them to another computer, and have that other computer put the data back together again so you can play that song or movie. "What Terence set out to do is to create a neural network that could read and, sort of, make a judgement call about how to tear these numbers apart and put them back together again, on its own."
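A network that learns its own way to break a frame apart and rebuild it is often called an autoencoder, and its basic shape can be sketched as below. This is a minimal sketch with random, untrained weights and hypothetical sizes, so the "reconstruction" is meaningless; training would adjust the weights to make the reconstruction match the original frame.

```python
import numpy as np

rng = np.random.default_rng(1)

frame = rng.random(64 * 64)                          # a flattened 64x64 "frame" (hypothetical size)
enc_w = rng.standard_normal((200, 64 * 64)) * 0.01   # encoder weights (untrained)
dec_w = rng.standard_normal((64 * 64, 200)) * 0.01   # decoder weights (untrained)

code = np.tanh(enc_w @ frame)        # encode: squeeze 4096 numbers down to 200
reconstruction = dec_w @ code        # decode: rebuild a 4096-number frame from the code

print(code.shape, reconstruction.shape)
```

The short code in the middle plays the role of the compressed file: it is the network's own "judgement call" about which parts of the frame matter, rather than a human-agreed standard like MP3 or JPEG.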

An image of Roy Batty's eye, reconstructed by a neural network
(Terence Broad)

"Really, I just really wanted to see what it would look like," says Terence. After seeing the MRI video project, which was able to interpret a video entirely through brain data and pattern recognition, he thought "... I wonder what this would look like with an artificial neural network, rather than a human brain."

"You not only have a guy training an AI to deliver a very specific sort of art product," says Aja. "You have a rudimentary sort of artificial intelligence essentially helping to create art." And aside from the artistic accomplishments, Terence's project could have technical possibilities as well. "The more freedom you can give a neural network," Aja says, "or some sort of artificial entity to compress and decompress things without having to rely on a universal, human sanctioned standard, that gives us more freedom to encode more things, experiment with encoding, and the possibility with video and audio encoding are pretty wide, I think."