Spark

Would you trust a robot in an emergency?

Yeah, probably.
Georgia Tech researchers built the “Rescue Robot” to determine whether building occupants would trust a robot designed to help them evacuate a high-rise in case of a fire or other emergency. (Rob Felt, Georgia Tech)

Most robots and A.I. that we encounter are pretty innocuous. Think of the Twitter bots that fill our feeds. If something goes wrong with them, the worst that's likely to happen is a nonsensical tweet and the loss of a few followers. But that isn't always the case.

GTRI Research Engineer Paul Robinette adjusts the “arms” on the “Rescue Robot,” while (L-R) GTRI Senior Research Engineer Alan Wagner and School of Electrical and Computer Engineering Professor Ayanna Howard look on. (Rob Felt, Georgia Tech)
When the stakes are higher, say with self-driving cars or bots that can make a medical diagnosis, mistakes could be catastrophic.

So how much should we trust our technology, and how do we know when a piece of tech is no longer trustworthy?

A study from the Georgia Institute of Technology wanted to see where we draw those lines.

Dr. Ayanna Howard, a robotics engineer at Georgia Tech, and her colleagues Alan Wagner and Paul Robinette had participants follow a robot to a conference room, where they were asked to fill out a survey. In some cases the robot would go directly to the conference room; other times, Dr. Howard says, the researchers "...had the robot take them to a different room, kind of wandering. We had the robot do things like, as they followed them, the robot would just stop and point to the wall."

While the participants were in the room, the researchers filled the halls with smoke, which set off the fire alarms. Participants then had the option to follow the robot or to exit the building the way they had come in.

Dr. Howard and her fellow researchers expected that about half of the participants would choose to follow the robot, "...but what happened in the study was... everyone followed the robot. It's astounding."

Despite having no indication that the robot knew where it was going, and even after seeing firsthand that it was flawed and could make mistakes, every single participant was willing to follow it.

Dr. Howard compares this behaviour to how we treat GPS devices in cars. "When they first came out, you'd get a story once every couple of months about somebody who followed their system into the river... I know this is the wrong way, but maybe it knows that there's traffic the way that I normally go, so I'm just going to trust the technology, because I think that it must know what it's doing."

Dr. Howard says the answer to this problem may be more transparency about how certain these robots are of their decisions: "Telling the user, look, I think I might be broken, I'm 50% sure I'm broken, and then you make the decision."