
The idea of robot butlers fuels our fantasies — and our fears


But maybe those aren't the kind of robots we should be afraid of

The robots we typically think of are scary because they're the closest we come to technological replicas of ourselves. The more immediate threat, however, may not come in a human-like form at all. (Michael Dalder/Reuters)

When it comes to robots, we tend to have two main fears: The first, fueled by media reports, is that they'll steal all of our jobs. The second, inspired by decades of science fiction lore, is that they'll rebel and rise up against us.

It's true, robots are getting better at simulating human actions, but at this point they can barely make a pot of coffee, much less take over the planet. The more immediate threat, in fact, may not come in a human-like form at all.

The robots we typically think of — with some manner of arms, legs, torso, head — are scary because they are the closest we come to technological replicas of ourselves.

We humans think of ourselves as cognitively special, says Karina Vold, a research associate at the U.K.-based Leverhulme Centre for the Future of Intelligence and a research fellow at the University of Cambridge. "As robots become smarter, they threaten what we hold to be unique about ourselves."

Nevertheless, the idea of a robot that could do the laundry or take out the trash has its appeal. So while the Defense Advanced Research Projects Agency (DARPA) and Boston Dynamics develop military machines and robots for battle, others are working on the task of training machines to help with household chores.

Teaching a robot to perfect these seemingly simple skills requires a lot of trial and error. While science fiction's favourite machine beings seem relatively adept at engaging in the human world, in reality, robots are still pretty clumsy. Practising a new skill, such as bringing you your morning coffee, can leave a robot smashing cups and coffee pots, or even damaging itself.

A robot barista named 'Sawyer' makes a coffee at Henn-na Cafe ('Strange Cafe' in Japanese) in Tokyo. Performing what we think of as simple household tasks involves a long sequence of logical steps and physical challenges. (Koji Sasahara/Associated Press)

A new simulated environment, built using the same software as blockbuster games like Grand Theft Auto, aims to help artificial intelligence (AI) agents learn to master simple household skills.

AI2-THOR is an interactive and photorealistic 3D simulation that replicates environments including kitchens, bathrooms and bedrooms. Instead of a robot with a physical form clanking around a test facility and breaking things in the process, the artificial intelligence that is essentially the "brain" of that robot can learn and practise new tasks in a digital rendering of the environment.
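For the curious, the sketch below shows roughly what working with the simulator looks like in code. It assumes the open-source ai2thor Python package; the scene and action names follow its published conventions, though exact class and parameter names may differ between versions.

```python
# A minimal sketch of driving an agent around an AI2-THOR kitchen scene.
# Assumes the open-source `ai2thor` package (pip install ai2thor); scene
# and action names follow its documented conventions but may vary by version.
from ai2thor.controller import Controller

# Launch the simulator and load a photorealistic kitchen scene.
controller = Controller(scene="FloorPlan1", gridSize=0.25)

# Step the agent through a few discrete actions. Each step returns an event
# containing the rendered frame and scene metadata the AI can learn from,
# with no physical robot (or coffee pot) at risk.
for action in ["MoveAhead", "RotateRight", "MoveAhead"]:
    event = controller.step(action=action)
    print(action, "succeeded:", event.metadata["lastActionSuccess"])

controller.stop()
```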

Today's AIs still 'weak'

But lest news of this new robot training ground make you nervous about the coming reign of the robot overlords, rest easy. The machines still have a lot of learning to do.

One of the creators of AI2-THOR estimates it will take at least five to 10 years to come up with an AI model that can do basic household tasks, as researchers have only just started thinking seriously about how to teach AI logic, common sense and physics.

"The current state-of-the-art AI models cannot handle too many details," says Roozbeh Mottaghi, a research scientist in the computer vision team at the Allen Institute for Artificial Intelligence. Actions that seem trivial to us are more challenging for the machine because "the AI agent needs to learn several things just to complete one simple task."

While you might be able to brew a pot of coffee half-asleep and with your eyes closed, a robot has to complete a long sequence of logical steps and physical challenges: locating the coffee, manipulating a scoop to pick up and deposit the grounds, turning on the tap, filling the water tank, and so on, all while managing to hold onto the mug without dropping it.
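To make that point concrete, here is one plausible way the coffee task breaks down into discrete steps a machine would have to plan and execute. The robot, its actions and their ordering are purely hypothetical, chosen only to illustrate how long the sequence becomes.

```python
# An illustrative (hypothetical) decomposition of "make a pot of coffee"
# into the kind of discrete steps a household robot would have to execute.
from dataclasses import dataclass

@dataclass
class Step:
    action: str
    target: str

COFFEE_PLAN = [
    Step("locate", "coffee"),
    Step("pick_up", "scoop"),
    Step("scoop", "coffee grounds"),
    Step("deposit", "filter basket"),
    Step("turn_on", "tap"),
    Step("fill", "water tank"),
    Step("press", "brew button"),
    Step("pick_up", "mug"),  # and don't drop it
]

def execute_plan(plan):
    # A real agent would check whether each step succeeded and replan on
    # failure; here we simply walk the list to show its length.
    for i, step in enumerate(plan, start=1):
        print(f"Step {i}: {step.action} -> {step.target}")

execute_plan(COFFEE_PLAN)
```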

For many, the idea of the butler robot originated with the vision of the future projected in the 1960s cartoon The Jetsons, in which household robots helped with domestic tasks. (Warner Bros./Getty Images)

Today's AIs are still considered "weak" in that they don't have any real understanding, Angelica Lim, an AI roboticist and assistant professor at Simon Fraser University, wrote in MIT Technology Review late last year.

"They are powered by giant rule books containing massive quantities of data stored on the internet — so they can act intelligent but can't understand the true meaning of what they say or do," she says.

Fear of robot resistance misplaced

The very idea of the "butler robot" is a throwback to the vision of the future projected in The Jetsons many decades ago. In that scenario, a robot would turn on the lights, run the vacuum cleaner and use other household tools originally designed for human manipulation, as opposed to operating as embedded AI within the home itself.

Devices such as Amazon's Alexa are now widely marketed as 'home assistants,' with their ability to connect with other 'smart' appliances in the home and follow voice directions. (Elaine Thompson/Associated Press)

That's where home assistants such as Alexa and Google Home come in. They are embedded in the sense that they connect with software-enabled appliances and electronics throughout the home.

In this sense, perhaps our deep-seated fear of the robot resistance has been misplaced, researcher Vold says. She likens the adoption of virtual assistants to inviting a stranger into your home — someone you think is there to help but who, in fact, is witnessing, and even recording, your every utterance, and whose intelligence far exceeds your own.

"There is a creepy factor to this that may or may not be justified," she says. "It depends on your trust in the big tech companies you are inviting in, like Google, Apple and Amazon, who now all offer 'always listening' home devices."

Because we are used to seeing an enemy in something that looks like us, "humans continue to fear embodied robots over distributed algorithmic systems," says Vold.

We should worry less about robot uprisings, she says, and more about "the malicious hacking of a smart city or smart home."

ABOUT THE AUTHOR

Ramona Pringle

Technology Columnist

Ramona Pringle is an associate professor in the Faculty of Communication and Design and director of the Creative Innovation Studio at Ryerson University. She is a CBC contributor who writes and reports on the relationship between people and technology.