
Killer robots march into uncharted ethical territory

What happens if autonomous weapons fight our wars? What if they select and kill targets without any human intervention? The world is closer to this scenario than ever before. But there's no consensus on whether — or even how — it would ever be ethical. This episode delves into the complex conundrums of robot warfare.

Campaigners want ban before lethal autonomous weapons become the norm

Noel Sharkey (right), a robotics expert, was one of the first to publicly warn about the perils of autonomous weapons. (Submitted by Campaign to Stop Killer Robots)

By Nahlah Ayed

Jody Williams has decades of peace activism under her belt, and a Nobel Prize for securing a global ban on anti-personnel landmines.

But as she told CBC IDEAS, her fight against killer robots presents a daunting new challenge. 

The effort to ban lethal autonomous weapons is growing more urgent — and more frustrating — given the lightning-fast progress in the technology that's edging them ever closer to reality. Stiff resistance from world powers like the U.S., China and Russia is making a ban much harder to achieve than the Ottawa Treaty banning landmines.

Jody Williams was awarded the Nobel Prize in 1997 for her staunch efforts convincing governments to ban landmines, thereby eliminating threats to civilians. (Morten Holm/AFP/Getty Images)

The trend toward more artificial intelligence and autonomy on the modern battlefield is considered a "third revolution" in warfare, after gunpowder and nuclear technology. 

But put the image of "The Terminator" out of your head: the autonomous "killer robot" generation of weaponry under development and testing includes everything from tanks and ground vehicles to drones.

It's the most urgent weapons issue facing us now. - Jody Williams, Stop Killer Robots campaign

The world's major powers see huge potential in the use of lethal autonomous weapons to improve speed, stealth and the protection of soldiers in combat.  

Activists say world powers are engaged in the beginnings of a new arms race — and whoever wins stands to rule the modern battlefield.

Stepping inside murky terrain

Proponents argue lethal autonomous weapons systems, or LAWS for short, could actually make fighting more humane — including lowering the number of civilian casualties.

Activists, backed by academics and a growing number of robotics and artificial intelligence experts, warn that such weapons are marching us into murky ethical territory we've never set foot in before.

"I think it's the most urgent weapons issue facing us now," said Williams, who's with the Campaign to Stop Killer Robots.

The campaign aims to see an international treaty signed that would preemptively ban any lethal weapon that could select targets and kill without meaningful human control. 

The campaign helped put the matter on the UN's agenda nearly a decade ago. But the discussions in Geneva, ongoing since 2013, have produced no significant move toward negotiating a treaty. Compare that with the landmines effort, which took a total of five years to produce a treaty.

"It's moving like molasses uphill in winter," said Williams.

Activists pose in Geneva, next to a monument dedicated to the victims of landmines. Many of the campaign's activists are veterans of the fight to ban landmines. (Submitted by Campaign to Stop Killer Robots)

Meantime, testing, development and improvement in LAWS continue.

Activists say LAWS could tempt military powers into more wars. Robotics experts say that, as computer-based technology, LAWS could be vulnerable to hacking. The technology could also fall into the wrong hands and unleash mayhem.

Activists also have a considerable list of ethical concerns — including how to assign responsibility when war crimes are committed, and the question of whether to allow machines to take human life at all.

"The decision to kill should never be taken lightly," said Peter Asaro, professor of robotics and autonomous technology at the New School in New York. He's also a co-founder of the International Committee for Robot Arms Control.

"It's a very exceptional situation of warfare and self-defence that we allow people to kill because you're defending your own right to life in a sense. And machines don't have a right to life, they don't have a life, they don't understand what a life is, what it means, its significance, what it means to take it away."

A way to eliminate suffering

At the opposite end of the argument, proponents insist that lethal autonomous weapons could offer moral gains. One example: machines could be more accurate and less emotional warriors than their human counterparts, who are prone to stress, depression, exhaustion or hunger — all of which lead to costly mistakes.

"Autonomous weapons systems simply don't suffer from any of those human psychological failings," said Don Howard, a philosophy professor at the University of Notre Dame.

"And so right away you have the possibility of eliminating the circumstances that we well know are productive of a lot of war crimes."

He also argues that autonomous weapons could allow countries to intervene in high-risk conflicts they might have avoided in the past — conflicts such as the Rwandan genocide — to avert mass killing at little risk to their own soldiers.

Small robots are being tested as swarms at the Sheffield Robotics lab, not for military use but for possible application in space exploration. (Nahlah Ayed/CBC)

The talks at the UN's Convention on Certain Conventional Weapons in Geneva have been mired in debate over definitions and the utility of existing laws.

Countries resisting the idea of a ban include the U.S., China, Russia, Israel, and South Korea. Their resistance makes a ban politically unfeasible, said Howard.

There's also the challenge of living in times when it's become, he adds, "harder and harder to write new international law, to develop binding treaties that get the support of a sufficient majority of nations."

Please don't automate killing my children. - Noel Sharkey, robot expert

Proponents of these weapons say that existing international laws are sufficient. 

Activists say an international ban is the only way to protect against lethal autonomous weapons becoming an unchecked battlefield reality.

"You can automate building my car with no trouble," said Noel Sharkey, emeritus professor of robotics at the University of Sheffield.

"But please don't automate killing my children."


Guests in this episode: 

  • Ryan Gariepy is the chief technology officer at Clearpath Robotics
  • Alain Tremblay is the vice-president of business development and innovation at Rheinmetall Canada
  • Jody Williams is a Nobel laureate, activist and campaign ambassador for the International Campaign to Ban Landmines
  • Mary Wareham is the advocacy director of the arms division of Human Rights Watch
  • Erin Hunt is the program manager with Mines Action Canada
  • Noel Sharkey is an emeritus professor of AI and robotics at the University of Sheffield
  • Peter Asaro is a professor specializing in robotics and autonomous technology at the New School in New York
  • Paul Hannon is the executive director of Mines Action Canada
  • Don Howard is a professor of philosophy at the University of Notre Dame


** This episode was produced by Nahlah Ayed.
