Quit using The Terminator as an example of AI gone wrong, argues BBC Reith Lecturer
'It makes people think that autonomous weapons are science fiction. They are not. You can buy them today.'
When journalists and editors need a way of illustrating an article about the risks of Artificial Intelligence, one pop culture source comes up again and again.
The 1984 film, The Terminator.
Stuart Russell, professor of computer science at UC Berkeley and founder of the university's AI lab, wishes they would stop.
"I've tried to convince journalists to stop using this image for every single article about autonomous weapons, and I've failed miserably."
Russell is this year's speaker for the BBC Reith Lectures. In his series, he warns that as AI technology spreads rapidly through the worlds of business, warfare, and our personal lives, there is a risk of losing control. But 'SkyNet' — the evil AI from the Terminator franchise — is the wrong metaphor.
"Many films such as The Terminator would have you believe that spooky, emergent consciousness is the problem. If we can just prevent it, then the spontaneous desire for world domination and the hatred of humans can't happen," he told IDEAS.
Russell has three main problems with The Terminator metaphor.
"This Terminator picture is wrong for so many reasons. First of all, the Terminators fire a lot of bullets that miss their targets. Why do they do that? Secondly, it makes people think that autonomous weapons are science fiction. They are not. You can buy them today," Russell explained.
"Third, it makes people think that the problem is SkyNet, the global software system that controls the Terminators. It becomes conscious, it hates humans, and it tries to kill us all."
Black Mirror closer to reality
The threat of AI weapons is not that they might turn against us, but that they'll be extremely good at doing what we ask of them, according to Russell.
"SkyNet never was the problem. If you want a better picture from science fiction, think about the TV series Black Mirror and specifically the robot bees from the episode Hated in the Nation. They aren't conscious. They don't hate people. They are precisely programmed by one person to hunt 387,036 specific humans, burrow into their brains, and kill them."
Russell says Black Mirror's drones are far more likely to become reality than SkyNet.
"A lethal AI-powered quadcopter could be as small as a tin of shoe polish. And this is where the shaped charges and explosively formed penetrators come in. About three grams of explosive are enough to kill a person at close range. A weapon like this could be mass produced very cheaply. A regular shipping container could hold a million lethal weapons, and because, by definition, no human supervision is required for each weapon, they can all be sent to do their work at once.
"And if we know anything about computers, it's that if they can do something once, they can do it a million times. So the inevitable endpoint is that autonomous weapons become cheap, selective weapons of mass destruction."
Slaughterbots pushes a ban on autonomous weapons
In 2017 Russell and others working for a ban on autonomous weapons systems produced a film called Slaughterbots to show a more realistic scenario.
"It had two storylines. One: a sales pitch by the CEO of an arms manufacturer demonstrating the tiny quadcopter and its use in targeted mass attacks. The other: a series of unattributed atrocities, including the assassination of hundreds of students at the University of Edinburgh. The reactions were mostly positive. The film had about 75 million views on the web, and I'm pleased to say that CNN called it the most nightmarish, dystopian film of 2017."
There have been several attempts to negotiate international treaties around the use of autonomous weapons. At the most recent meeting of the UN's Convention on Certain Conventional Weapons in Geneva, most members supported new laws limiting the use of autonomous weapons. The United States and Russia blocked attempts at a ban, instead calling for the creation of a code of conduct.
In the BBC Reith Lectures, Russell reiterates his calls for a ban on autonomous weapons systems.
"With all due respect, there are eight billion people wondering why you cannot give them some protection against being hunted down and killed by robots. If the technical issues are too complicated, your children can probably explain them."
Listen to all four of Stuart Russell's BBC Reith Lectures, in which he examines the impact of AI on jobs, military conflict and human behaviour. He also shares his suggestions for a way forward based on a new model for AI, one built on machines that learn about and defer to human preferences.
This IDEAS episode was produced by Matthew Lazin-Ryder.