Out In The Open

When cars can drive themselves, who will program their morality?

Out in the Open takes a ride in the University of Waterloo’s autonomous vehicle.
University of Waterloo master’s students Nav Ganti and Ian Colwell stand with the ‘Autonomoose’ self-driving car. (Sam Colbert)

Update | July 22, 2018: Since this story was originally published, our producer Sam Colbert has become a licensed driver.

Imagine driving quickly down a major city street.

A couple of kids come running out of a building on your right and end up on the road in front of your car. If you step on the brakes, you likely won't stop in time.

What do you do? Do you swerve left and into oncoming traffic? Do you swerve right and crash into the building? Or do you put your foot on the brake and hope for the best?

It's a terrible set of choices, and you probably won't have time to consider them in that moment. But what if you could decide on a response ahead of time? Or, rather, what if your car was programmed to make a decision for you? Does it protect the kids? Or does it protect you?

These are the kinds of questions that might face makers of self-driving cars. And who will answer them? Ethicists? Politicians? Engineers? Lawyers?

"We've been asked this question quite a bit," says University of Waterloo graduate student Nav Ganti. "Any demo we've been to, basically, this question has come up." 

Nav is part of a team of students working on the "Autonomoose," the university's own self-driving car. It's a Lincoln MKZ Hybrid that they've outfitted to drive itself around using mapping software, radar, sonar, laser sensors, cameras and other technologies. It looks and drives like a normal car, but the person in the driver's seat is merely a passenger.

"The speed at which the car can make decisions is far faster than a human can … ten decisions before you've even made one," Nav says. So it's not so likely the car will get itself in trouble in the first place. 

But despite their capabilities, people might not totally trust autonomous vehicles just yet. 

"Have they taken a bus before?" asks Ian Colwell, another grad student on the Autonomoose team. "Sure, you're not in control, but … this is just something else to put your trust in. Yes, it's a robot, but this thing can make way more decisions and perceive a lot more about the environment than any human."

These guys aren't responsible for making those ethical decisions just yet. But their technology can already tell different kinds of obstacles apart: cars, poles, cyclists, pedestrians.

How the car should handle those obstacles is another matter, and one that will need to be answered sooner or later.

This story originally aired on October 29, 2017. It appears in the Out in the Open episode "Can Robots Be Human?".