As Canada prepares to take part in international talks on so-called "killer robots" next week, documents obtained by CBC News show defence officials see risks but also military advantages to deploying deadly autonomous weapons.
Records released under the Access to Information Act show officials at Foreign Affairs and National Defence are keeping an open mind as they carve out a Canadian position on the controversial systems — in spite of growing calls for a pre-emptive global ban.
Lethal autonomous weapons systems (LAWS) are not currently in use, but could eventually have the ability to select, target and engage in deadly attacks without human intervention.
Censored emails, reports and briefing papers released to CBC were prepared last spring when the first United Nations meeting was convened on the issue. One 17-page report outlines the Defence Department's "initial thinking" on the military, strategic, diplomatic and ethical implications, flagging moral questions but also potential benefits.
That paper, which will help shape Canada's position on the issue, says the weapons "clearly promise" many of the same benefits as unmanned, human-controlled systems now in use — including limiting risks to military personnel, driving down costs, allowing penetration of enemy lines with little risk, and circumventing human shortcomings with faster response times and no fatigue or boredom.
"In short, weapons at various stages of autonomy, including LAWS, offer military advantage in several clear — and perhaps also unforeseen — ways," the report reads.
On Monday, officials and experts from around the world will meet again at the UN in Geneva. Defence spokeswoman Ashley Lemire confirmed Canadian representatives will attend.
While Canada is not currently developing any lethal fully autonomous weapons systems, Lemire said Defence Research and Development Canada (DRDC) has an active research program on unmanned systems that informs policy on the opportunities and threats the technologies could pose.
"As part of this work, DRDC is initiating research activity into ways to counter and/or defend against non-lethal and lethal autonomous weapons systems," she said, adding: "We would not want to speculate about potential applications of lethal autonomous weapons systems at this point."
Calls for a ban
Walter Dorn, a professor at the Royal Military College of Canada, has urged limits to ensure there is always an element of human decision-making in carrying out lethal force. No matter how advanced the technology, he argues, there is always the potential for glitches and malfunctions that could harm soldiers or civilians.
"There is potential for great utility and great danger," he said.
But an international coalition of human rights activists, academics and security experts called the Campaign to Stop Killer Robots says that because technology is advancing so rapidly, world leaders must adopt a treaty to ban the weapons. Alex Neve, secretary general of Amnesty International Canada, said lethal weapons without human control — whether they're used for policing or military purposes — would violate international humanitarian law.
"Allowing robots to have power over life and death decisions crosses a fundamental moral line: the killing of humans by machines is an ultimate indignity in a certain sense, and humans should not be reduced to mere objects," he said.
He sees a ban as the "only real solution."
"Taking a wait-and-see approach could lead to further investment by states in the development of these weapons systems and their rapid proliferation in a new arms race," he warned.
The Defence Department documents point out that countries like China and Russia are "rapidly moving toward developing unmanned and autonomous systems," and that such advances could revolutionize modern warfare.
"Depending how technology progresses, it may be possible to fight a war without ever leaving North America or Europe," the report reads, noting the potential for major shifts in power toward high-tech countries in military alliances.
What are 'autonomous' weapons?
Documents obtained by CBC News outline a spectrum of "autonomy" in military weaponry that could reduce or even eliminate the human role in decision-making:
- Semi-autonomy: The weapons system waits for human input before taking action. A human is "in the loop."
- Supervised autonomy: A human is "on the loop," monitoring the weapons system and overriding it if necessary.
- Full autonomy: Weapons systems, once activated, can select and engage targets without further intervention by a human operator. The human is "out of the loop."
The report also notes that the U.S. used robots in Iraq to clear explosive devices, and suggests those kinds of applications could be expanded in the future.
"Fitted with weapons, robots could be used for house-to-house clearance operations in modern urban combat," the paper reads. "Unlike human soldiers, they could be programmed to 'shoot second' with high accuracy — or even give an enemy the opportunity to surrender after the enemy has fired his weapon — thus potentially decreasing civilian casualties and increasing the chance of capturing enemy combatants."
The report also raises concerns about a potential arms race — and flags the danger if weapons wind up in the hands of non-state actors or repressive governments. But legal concerns might be "somewhat misplaced," the report says, adding the driving force for a ban is more likely "an under-explored moral unease over giving machines the power to kill."
'They don't get mad'
Steven Groves, senior research fellow with the Washington-based Heritage Foundation, said a ban would be premature. He said those pushing for a ban depict the worst-case scenario of an evil humanoid "Terminator-type" robot roaming through crowded cities, which is likely a long way off.
He points to clear advantages machines have over human fighters.
"There are a number of ways these things can be deployed where they may even be more accurate than humans. They don't get scared. They don't get mad. They don't respond to a situation with rage — and we've seen some tragic consequences of that happening in Iraq and elsewhere where … a very human soldier reacts to a terrible situation and ends up killing a lot of civilians."
Groves said the history of warfare is one of putting ever more distance between yourself and the enemy — from the bow and arrow, to the cannon, to bombs, to drones. And while there are legitimate concerns that such technology could escalate conflict, the opposite could prove to be true.
"If the U.S. has developed weapons that are so accurate, so deadly, so thorough and can deploy them in a way that no U.S. soldiers are put at risk to effectively seek out and destroy the enemy — I wonder if I would start a war with the United States. I wouldn't start it very lightly."