From precogs to T-rex, sci-fi films teach us about the ethics of new technology
It's said that good science fiction doesn't tell you about the world of tomorrow: it reflects what is happening today. And it also tells us a lot about the evolving relationship between humanity and technology.
That's what Andrew Maynard explores in his new book, Films from the Future: The Technology and Morality of Sci-Fi Movies.
Among the films he examines are Jurassic Park, Minority Report, and Ex Machina.
Here is part of his conversation with Spark host Nora Young.
NY: So we're not going to be reviewing films today, but I want to focus on a few of the dozen films that you've covered in the book, and I want to start with Jurassic Park. So, briefly recap what the crux of the story is.
AM: Steven Spielberg really built on this idea that we can manipulate the genetic code of animals and organisms and create new ones. If we had the genetic code of dinosaurs, we could effectively recreate dinosaurs, so Jurassic Park was this idea about extracting DNA from prehistoric mosquitoes caught in amber (which unfortunately is scientifically impossible). But in the film, that DNA was used to recreate dinosaurs for this absolutely amazing "Jurassic Park" off the coast of Costa Rica. And then, of course, as so often happens with these stories, the entrepreneur and the scientists who were doing this were so wrapped up in what they could do that they didn't stop to think about the unintended consequences. And quite naturally everything goes horribly wrong—including the dinosaurs starting to eat people.
NY: Of course, this all sounds so fantastical, and yet we've had developments in science and technology like CRISPR, for example, or stories about gene editing in humans, that kind of cast this in a different light—and that's what I find so fascinating about it.
AM: Yes, back in 1993 this really was science fiction. People were just beginning to realize that we could manipulate genetic code and begin to effectively redesign organisms, in very, very crude ways. But it wasn't until just a few years ago that people really worked out how to do this precisely, efficiently, and relatively cheaply. And it's really quite frightening. In the years since 1993, we've gone from imagining a future we couldn't scientifically achieve, to people now talking seriously about predetermining the genetic code of kids born into this world. One of the big questions with technologies that go all the way from things like gene editing and CRISPR, through geoengineering the climate or even artificial intelligence, is how do we work out where those future tipping points are, where suddenly we can't predict what's going to happen—and how do we stay clear of them?
AM: When I was writing the book, I was actually shocked at how close we're getting to some of the things that you see in Minority Report. Of course in that movie, the plotline is based on an entirely fictitious technology: the idea that these "pre-cogs" were genetically engineered so that they could predict murders in the future. And that's not going to happen. That's total fantasy. What we do see now, though, is that the use of massive datasets, combined with machine learning and a number of other techniques, is beginning to give people what feels like an ability to predict when somebody is going to take part in criminal behavior, or whether criminal acts are likely to occur. There's an awful lot of uncertainty around the validity of those predictions. But we're moving very fast toward a world where people think they can identify who the bad people are, and take action before those people do something that somebody considers to be bad.
NY: This is the example of predictive policing.
AM: It absolutely is. You've now got companies that have got systems already deployed in the field, where the police can, using a range of data, try and work out where criminal activity is going to occur. And they can actually take action before it occurs. Notionally, this sounds really good if we want to be in a safer society: why not try and work out where the bad people are, and where the bad things are going to happen. And that's all great until the system gets it wrong, and we start penalizing people for what they might do, rather than what they did do.
NY: The last film I want to talk about is Ex Machina, from 2014. Very broadly, it's about an advanced robot with human-level general intelligence that we don't fully understand. In talking about this film, you look at what you call the "lure of permissionless innovation." How would you describe that?
AM: So there's a strand of thinking around innovation generally—and especially technology innovation—that says people should be allowed to do pretty much anything they like, and if they create something that's got problems, those will be ironed out in the future: either the market will self-adjust and people just won't buy into the technology, or someone will find a fix for it. At the base of the argument is that human creativity and human ingenuity don't want to be bounded, and therefore if we're going to create bigger and better technologies—including technologies that are going to solve some of the world's biggest problems—we need to be able to do that without permission. And it's a great idea. Speaking as a scientist, I fully get this. I would love to be able to go into the lab (when I actually had a lab) and just do whatever I fancied. The problem is, there are some things we can do where the impacts are so serious and so irreversible that we can't afford to just do things and ask forgiveness afterwards.
NY: The interesting thing for me about Ex Machina is that it raises this question that we come back to all the time: What does it really mean to be human? This is a particular fascination of science fiction. Why do you think this is such an interesting question for us?
AM: I think it's always been interesting because we're trying to work out who we are and how to define ourselves. And that's historically been a deep question for philosophers and philosophical thinkers. It's becoming even more relevant these days, though, as the science and the technology we're developing give us the ability to fundamentally change who we are. So, if you can begin to change somebody's genetic code, or if you can begin to actually physically change their body by implanting machines, or—thinking very futuristically—if you can upload the essence of their mind into a computer, all bets on what it actually means to be human are off. We are being put in the position where we've got to grapple with how we define either humanity or personhood, because we're actually beginning to transcend the limits of biology in small ways at the moment. But I think this is going to accelerate as we go along.