Robots rising: How far do we want them to go?
TED Conference in Vancouver explores the threat and promise of artificial intelligence
Picture this: you're expecting a package delivery and hear a knock on the door.
But instead of a human on two legs, it's a robot dog on four, one that has pranced down the street and up the steps autonomously on 3D-printed limbs.
It's a future Boston Dynamics CEO Marc Raibert envisions for one of his company's latest robots, SpotMini, a prototype often called "creepy" that he showed off on stage at the TED Conference in Vancouver this week.
The robot was partly driven by a human but also moved on its own, displaying two skills that are cutting edge for a robot: picking up a fragile object without crushing it, and using stereo cameras to decide where to step on unfamiliar terrain.
"I think it won't be too long until we have robots like this in our homes," he told the crowd in the session called "Our Robot Overlords." Should we be scared? he was asked.
"I'm not scared at all, I think that's up to each of you."
Delivery dogs aside, the tech world is deeply divided on the larger question: as artificial intelligence advances, how worried should we be?
Rise of the robots?
Prognostications of killer robots — or machines that steal our jobs — are nothing new. But there have been key developments in the field of AI that are driving the conversation now.
Tests continue on self-driving cars, which may be on the roads in just a few years.
And, in an echo of the famous chess match between IBM's Deep Blue and Garry Kasparov 20 years ago, last year a Google DeepMind program called AlphaGo beat one of the world's top players at a far more complex game: the ancient Chinese board game Go.
"I would [have thought] that playing Go at a world champion level really ought to be something that's safe from automation," said Martin Ford, author of Rise of the Robots: Technology and the Threat of a Jobless Future in his TED talk.
"The fact that it isn't should really raise a cautionary flag for us."
Proponents see AI as fundamentally helpful to humans. Tom Gruber, a co-creator of Apple's Siri, offered examples on stage of humans and machines working together to solve problems better than either could alone.
In cancer diagnosis, Gruber said, machines are better at detecting tumour cells, but humans are better at weeding out false positives.
But loud warnings are being sounded by some of the leading minds in science and technology, including Bill Gates, Stephen Hawking, and entrepreneur Elon Musk, who will speak at TED later this week.
The off-switch problem
Job automation is one key concern, one that Ford said will put "terrific stress on the fabric of society." But there's a bigger problem looming in the minds of those who work on AI.
If we create a machine that's smarter than us, will it ever let us turn it off?
"It really does terrify me," said Stuart Russell, a computer science professor at University of California, Berkeley and author of the textbook on AI used in hundreds of universities.
"I'm trying to redefine AI to get away from this classical notion of machines that intelligently pursue objectives."
Even a simple objective could go awry, he said.
"The machine says to itself, how might I fail to fetch the coffee?," said Russell. "Someone might switch me off. OK, I have to take steps to prevent that. I will disable my off switch."
"I will do anything to defend myself against interference with this objective I've been given."
Instead, he says, robots should be programmed with uncertainty: to maximize human values, and to learn what those values are by observing humans.
"It turns out the uncertainty part is key," he said.
Without it, he imagines, a machine told to prepare dinner might not realize that the family cat's sentimental value outweighs its nutritional value.
"There's a whole lot of things that human beings care about, and machines have to understand those things otherwise they're going to ... make these terrible mistakes like cooking the family cat for dinner."