
When your robot learns from humans, who should train it?

At Vancouver robotics company Kindred, human pilots can take over a robot and teach it new tasks — and every trainer is different.

'Everything — my mannerisms, my behaviours — is all going into the AI,' says Kindred co-founder Suzanne Gildert

Physicist Suzanne Gildert is the chief science officer of Vancouver-based robotics company Kindred, which she co-founded in 2014 to create intelligent robots that can learn from humans. (SingularityU Canada Summit)

Let's say you want to teach a robot to play basketball. How do you decide who should train it? Should you have it learn from an all-star, so that the robot mimics that player's particular style? Or should it learn from a blend of data from multiple players with varying play styles across myriad teams?

That question is top of mind for Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred's robots encounters a scenario it can't handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.


This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.
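Kindred hasn't published details of its training pipeline, but the general technique behind learning from teleoperated demonstrations, often called behaviour cloning, can be sketched in a few lines. Everything below (the feature sizes, the linear policy, the synthetic data) is invented for illustration:

```python
import numpy as np

# A minimal behaviour-cloning sketch (illustrative only, not Kindred's actual
# system). During teleoperation, the robot logs what it sensed and what the
# human pilot did; a policy is then fit to imitate the pilot's actions.

rng = np.random.default_rng(0)

# Hypothetical logged data: 500 timesteps of 8 sensor features, each paired
# with the 3 actuator commands the pilot issued at that moment.
observations = rng.normal(size=(500, 8))
pilot_actions = observations @ rng.normal(size=(8, 3))

# Fit a linear policy by least squares: action ~ observation @ W.
W, *_ = np.linalg.lstsq(observations, pilot_actions, rcond=None)

def policy(obs):
    """Map a new observation to an action roughly the way the pilot would."""
    return obs @ W

# Later, when the robot meets a similar scenario, it acts without the pilot.
new_obs = rng.normal(size=(1, 8))
print(policy(new_obs))
```

Real systems use far richer models than a linear fit, but the core idea is the same: whatever the pilot does, quirks included, becomes training data.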

"That AI is also learning my values," Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday. "Everything — my mannerisms, my behaviours — is all going into the AI."

How the algorithms powering everything from self-driving cars to social networks are designed and trained has become a hot topic of discussion in artificial intelligence circles.

Virtual reality controllers like these ones can be used to manually show one of Kindred's robots what the trainer wants it to do, a process called teleoperation. (Tomohiro Ohsumi/Getty Images)

At worst, systems ranging from the algorithms used to sentence criminals in the U.S. to image-recognition software have been found to inherit the racist and sexist biases of the data on which they were trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you're building warehouse robots, as Kindred is, is it more effective to train each robot's algorithms to reflect the personalities and behaviours of the humans who will work alongside it? Or do you blend the data from all the humans who might eventually train Kindred's robots around the world into something that reflects the strengths of them all?

Although Gildert didn't elaborate on how Kindred is approaching the problem — the company has remained relatively tight-lipped about its efforts — she did suggest, unsurprisingly, that there's no easy answer.

As AI becomes more intelligent and complex, Gildert says, "it's unclear if this blending process would even work, or if you would just end up with something very confused."
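Her worry is easy to illustrate with a toy calculation (invented here, not taken from the talk). If two pilots resolve the identical situation in opposite ways, a single policy fit to the pooled demonstrations averages to an action neither pilot would actually take:

```python
import numpy as np

# Toy illustration: two pilots handle the same observation in opposite ways.
obs = np.ones((100, 1))                        # the same situation, logged 100 times
actions = np.vstack([np.full((50, 1), 1.0),    # pilot A always pushes left
                     np.full((50, 1), -1.0)])  # pilot B always pushes right

# Fit one blended policy to the pooled demonstrations.
W, *_ = np.linalg.lstsq(obs, actions, rcond=None)
print(W)  # ~0.0: the blend does neither thing, i.e. "something very confused"
```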
