When cars can drive themselves, who will program their morality?
Imagine driving quickly down a major city street when a group of kids suddenly darts into the road ahead.
What do you do? Do you swerve left into oncoming traffic? Do you swerve right and crash into a building? Or do you slam on the brakes and hope for the best?
It's a terrible set of choices, and you probably won't have time to consider them in that moment. But what if you could decide on a response ahead of time? Or, rather, what if your car was programmed to make a decision for you? Does it protect the kids? Or does it protect you?
Nav is part of a team of students working on the "Autonomoose," the university's own self-driving car. It's a Lincoln MKZ Hybrid that they've outfitted to drive itself around using mapping software, radar, sonar, laser sensors, cameras and other technologies. It looks and drives like a normal car, but the person in the driver's seat is merely a passenger.
"The speed at which the car can make decisions is far faster than a human can … ten decisions before you've even made one," Nav says. So it's unlikely the car will get itself into trouble in the first place.
But despite their capabilities, people might not totally trust autonomous vehicles just yet.
"Have they taken a bus before?" asks Ian Colwell, another grad student on the Autonomoose team. "Sure, you're not in control, but … this is just something else to put your trust in. Yes, it's a robot, but this thing can make way more decisions and perceive a lot more about the environment than any human."
These guys aren't responsible for making those ethical decisions just yet. But their technology can already distinguish between different kinds of obstacles – cars, poles, cyclists, pedestrians.
How the car should respond to those obstacles is another matter – one that will need to be settled sooner or later.