Would you buy a car that would sacrifice your life in order to save the lives of 10 pedestrians? Or would you buy one that would value your life over any other consideration?
That's the question researchers are mulling in a new report about driverless cars, released today in the journal Science.
Driverless cars have the potential to revolutionize road transportation. By being smarter than human drivers, they could save thousands of the lives lost to traffic accidents every year.
They could also be more efficient in heavy traffic and cut down on pollution.
Researchers Azim Shariff, Jean-François Bonnefon and Iyad Rahwan surveyed hundreds of people to gauge how they think driverless cars should behave.
Their results reflect the murkiness of human ethics: most people support automated vehicles that respond to a collision by saving the greater number of people, even if that means sacrificing the car's passenger, yet they wouldn't buy such a seemingly cold and calculating vehicle for themselves.
The researchers caution that before driverless cars end up on the roads, programmers must take into account the complex moral problems these algorithms will have to solve, especially when the problems involve life or death.
Sacrifices for the greater good
Driverless cars could provide massive safety benefits, considering that 90 per cent of traffic accidents are caused by human error, the researchers say.
And the vehicles are no longer just science fiction — Google has been testing them for years, and Tesla, Audi and other companies are developing their own models.
For the study, the researchers surveyed hundreds of people about various scenarios with driverless cars.
One scenario asked whether a driverless car should save its passenger at the cost of 10 pedestrians' lives, or opt for the greater good and save the pedestrians instead.
"The strong majority of people feel that the car should sacrifice its passenger for the greater good," said Bonnefon.
Bonnefon said this held even when participants were asked to imagine family members in the car with them.
"They were not always strongly confident in this respect, in particular when they were thinking of their child in the car, but even in that very emotional situation … they say that even themselves or the child should be sacrificed if it could save the lives of many people on the road," said Bonnefon.
And the researchers said the participants would welcome these automated cars on the roads.
But while people support the so-called moral cars, they wouldn't want to buy one for themselves. Instead, they would opt for a vehicle that would always protect them — the passenger.
"With driverless cars, the public good here is public safety," said Rahwan. "And to maximize safety, people want to live in a world in which everybody always drives in the cars that minimize casualties. But they want their own car to protect them at all costs."
Tragedy of the algorithmic commons
The researchers suggest that one way around this instinct for self-preservation at the expense of the greater good is regulation.
This isn't so far-fetched: governments have all kinds of so-called "greater good" regulations, such as environmental protections or vaccination laws.
But participants were not enthusiastic about laws that would allow on the road only vehicles programmed to opt for the greater good, which would mean that passengers might sometimes be sacrificed.
This resistance to regulation could create a huge barrier to the adoption of what the researchers call potentially ground-breaking technology.
"If we try to use regulation to solve the public good problem of driverless car programming, we would be discouraging people from buying those cars," said Rahwan. "And that would delay the adoption of the new technology that would eliminate the majority of accidents."
This is rooted in what the authors refer to as the tragedy of the commons, or, in this case, what Rahwan has dubbed the "tragedy of the algorithmic commons."
If both types of automated vehicles are on the market, the research suggests that people will inevitably choose the self-protecting car.
"Even if you started off with one of the noble people who is willing to buy a self-sacrificing car, once you realize that most people are buying self-protective ones, then you really reconsider why you're putting yourself at risk to shoulder the burden of the collective when no one else will," said Rahwan.
But regardless of which kind of automated vehicle ends up on the road, whether self-sacrificing utilitarian ones or those that protect their passengers at all costs, Shariff said everyone would be much safer with driverless cars than without them.
"Even if we have this race to the bottom where everybody ends up choosing the self-protecting car — that's still going to be way better than the system we have now in terms of saving lives."