How to build an ethical driverless car
Suppose you're driving and facing an imminent collision. You must either steer your car towards a lone bystander or into a car full of passengers.
Which would you pick?
This ethical dilemma is often contemplated by philosophers, and by professionals working in the field of autonomous vehicles. But it isn't the only ethical challenge raised by self-driving technology.
"When we're presented with new options, we need to evaluate them for the first time to ask whether these actions are right or wrong and how we can strike the appropriate balance." - Ryan Jenkins
Given how ubiquitous self-driving cars could be in the near future, Ryan Jenkins, an assistant professor of philosophy at California Polytechnic State University, believes designers need to carefully consider all the ethical problems before they put these machines on the road.
Cybersecurity and privacy issues with self-driving cars
Take cybersecurity, for example. If a hacker managed to gain control of your car's brakes and steering, the consequences could be deadly.
Ryan recalls a recent case where someone hacked into a Jeep remotely through the vehicle's Pandora connection. "For whatever reason, if you were able to hack into the infotainment system, you were also able to gain control of the computer that controls the wipers, the transmission and the brakes." This is a big problem with driverless cars, since there isn't a simple way to separate these systems. "Even the most sophisticated and motivated organisations have had their systems compromised by malicious actors," Ryan said. "That leads me to think the perfectly secure system is a pipe dream."
Another concern is privacy. Few things are more personal and sensitive than a record of your location at all times. But self-driving cars run on data, which means car manufacturers would have control over your personal information and could potentially sell it to third parties.
Why we need to examine the ethics of new technologies
"Technologies make new actions possible for humans and that's precisely part of their attraction," Ryan said. "But when we're presented with new options, we need to evaluate them for the first time to ask whether these actions are right or wrong, and how we can strike the appropriate balance between the values we hold dear as a community."
The typical person understands cost-benefit analyses and the impact that machines might have on human health and the environment. But those are still a very small subset of the kinds of moral concerns technologies raise, Ryan pointed out.
"We anticipate self-driving cars because they enable the movement of disabled people. But have we considered what happens when they need the assistance of a human being? What happens if they fall while getting out of a car? What happens if they're blind?" he said. These problems need to be accounted for when we design these technologies, so that they can minister to the full range of human needs.
To implement moral design, everyone involved in the technology needs to get together and have a frank discussion about the goals they see these machines serving, and the goals they ought to serve, Ryan advised.
Perhaps the easiest way to motivate corporations is to point to the public anger that erupts when companies fail to take consumer interests into account. "At the end of the day, the easiest way to motivate might still be to appeal to the bottom line."