Spark

Taking cues from early childhood development to build better robots

Robots already work on factory floors, handle dangerous situations and clean our floors, but to truly interact with us, they're going to need to understand our world.

AI may not be the only key to building advanced robots, says computer scientist

Just like babies, we can let robots explore and play in their surroundings to learn what it's like to be a human. (Raising Robotic Natives)

Originally published in December 2020.


Robots are integrating into many aspects of our daily lives, but to truly interact with humans they're going to need to understand our world. Building robots that really get us is something experts are currently grappling with.

Mark H. Lee, an emeritus professor of computer science at Aberystwyth University in Wales, says we should use developmental psychology as our model to train robots. 

Just like babies, we can let robots explore and play in their surroundings to learn what it's like to be a human.

"They would be able to use common sense reasoning at a very basic level to be much more flexible, self-contained. But they'd also be interested in learning new things, because that would be stimulating," he told Spark host Nora Young.

Mark H. Lee is an Emeritus Professor of Computer Science at Aberystwyth University in Wales, and the author of How to Grow a Robot. (Ian Wallman/The MIT Press/Collage by CBC)

Developmental robotics is an approach he explores in his latest book, How to Grow a Robot. He notes that artificial intelligence, and deep learning in particular, is often touted as the foundation on which the robots of the future will be built.

But he notes that there are limitations to current AI. "Most programs don't really understand what they're doing. So you may translate text, you might recognize images, but you don't really know what those images are about, or what the text really means." 

And, he says, that's a big problem when it comes to using them in many real-life applications that really matter, like driverless cars.

Most AI research models the already-developed adult human brain. But the brain is shaped by experience: a child learns to control its limbs and sensory systems over time, picking up new skills along the way.

The question is how this sort of learning is generated, says Lee. If we knew that, we wouldn't need big data. "We would be learning through experience."

Lee says curiosity might be the answer. He applied this theory in his own work. "We ran our robot through this process, from no abilities at all, to be able to pick up objects that he sees on the table and move them around."

"Anything that is novel that it has not been seen before is exciting, a stimulus. And he tries to repeat the action that caused that stimulus, if at all possible."

"I see robots as a way of exploring the degree to which computers can reach out to humans, and I don't see it as as far reaching as a lot of science fiction would do, and a lot of over-hyped AI work would suggest, but I do think there's a lot of potential there for a robot to become grounded in the world," says Lee.

Teaching robots to watch and learn

A developmental approach to robotics is one way of trying to get them to behave sensibly alongside humans in real world environments. But it's certainly not the only way to tackle the problem, says roboticist and computer scientist Chad Jenkins.

Chad Jenkins is a computer science and engineering professor and associate director of the Michigan Robotics Institute at the University of Michigan. (Joseph Xu/Michigan Engineering Communications & Marketing)

Like Lee, Jenkins, a professor of computer science and engineering and associate director of the Michigan Robotics Institute at the University of Michigan, is looking at ways to improve robots. His lab is currently working on how to program them by human demonstration.

"We want to be able to just show the robot what we want it to do without necessarily having it being programmed by your favourite textual computer programming languages, which usually requires a lot of skills and sometimes a university degree in order to [do]."

But to make that possible, robots need to be able to see what we are doing, understand our intentions, carry out those actions and reason about which actions shouldn't be taken.
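A common baseline for this kind of programming by demonstration is behaviour cloning: record what the human does, then fit a policy that reproduces it. The one-dimensional reaching task and nearest-neighbour policy below are illustrative assumptions, a sketch of the general idea rather than the method used in Jenkins's lab.

```python
# A minimal sketch of programming by demonstration via behaviour cloning:
# record (observation, action) pairs from a human demo, then act like the
# demonstrator did in the most similar recorded situation. The 1-D task
# and nearest-neighbour policy are illustrative assumptions.
import numpy as np

# Demonstrations: the human steers an end-effector toward a target at 0.0.
# Each row pairs a current position with the velocity the human commanded.
demo_obs = np.linspace(-1.0, 1.0, 21).reshape(-1, 1)
demo_act = -0.5 * demo_obs.ravel()  # demonstrated behaviour: move toward 0

def policy(obs: float) -> float:
    """Imitate the demo: do what the nearest demonstrated state did."""
    nearest = np.argmin(np.abs(demo_obs.ravel() - obs))
    return demo_act[nearest]

# Roll out the learned policy from a starting position not in the demos.
pos = 0.9
for _ in range(10):
    pos += policy(pos)
print(f"final position: {pos:.3f}")  # moves toward the target at 0
```

Nearest-neighbour lookup is the simplest possible "model" here; in practice demonstrations are usually fit with a learned function that can smooth over and generalize between the recorded states.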

While issues with bias in AI are often discussed, there's little focus on similar issues in robotics. The two fields deal with a lot of the same problems, says Jenkins. But when it comes to robotics, problems can be more costly.

All kinds of robots are already out in the world: drones, autonomous vehicles, warehouse robots, and increasingly machines in places like hospitals and on retail floors. An incorrect perception or decision by a robot could bring physical harm to people, up to and including fatalities.

'Robotics should live up to existing civil rights laws'

In addition to his robotics research, Jenkins works on issues to do with equity and discrimination within computer science departments, and in how AI and robotics are used in the real world. Back in June, he was a lead organizer and writer of "An Open Letter & Call to Action to the Computing Community from Black in Computing and Our Allies." 

In September, he gave a talk called "That Ain't Right," about the harmful effects of AI on Black communities. He focused on the story of Robert Williams, a Black man in Detroit who was wrongfully arrested after an AI facial recognition system falsely matched his photo to security footage of a shoplifter.

"It's something that personally kept me up at night because I'm always asking myself, are we doing the right things in artificial intelligence and robotics?"

Jenkins describes what happened as an overestimation of what artificial intelligence can do. He says there's an ongoing debate about whether these systems are overfitting. 

They memorize certain types of data, but have difficulty generalizing to new examples that are less familiar to them, like the face of someone with dark skin, says Jenkins.

This type of memorization is a problem for facial recognition. "It's even that much bigger a problem when it's on a robot that could be taking all sorts of actions and interacting with the physical world."
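Overfitting is easy to demonstrate in miniature. In the sketch below (using scikit-learn on synthetic data, purely as an illustration and not any deployed facial recognition system), an unconstrained decision tree scores nearly perfectly on the data it memorized but far worse on held-out examples, the same failure mode Jenkins describes at much higher stakes.

```python
# Illustrative sketch of overfitting: an unconstrained decision tree
# memorizes its training data (near-perfect training accuracy) but
# generalizes poorly to examples it has not seen. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# A small, noisy synthetic classification problem.
X, y = make_classification(
    n_samples=300, n_features=20, n_informative=5,
    flip_y=0.2,  # label noise makes pure memorization especially harmful
    random_state=0,
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# No depth limit: the tree can carve out a leaf for every training point.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print(f"training accuracy: {model.score(X_train, y_train):.2f}")  # ~1.00
print(f"test accuracy:     {model.score(X_test, y_test):.2f}")    # much lower
```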

"We can't think that it's just a black box that will do the right thing all the time," says Jenkins. 

He notes that while Black people represent 13 per cent of the American population, they make up just two per cent of the computing workforce, which generates AI and robotics technologies.

Jenkins says that AI and robotics should live up to existing civil rights laws and strive for more diversity across genders, races and many other dimensions, which will lead to better products and better outcomes.


"All you have to do is look in any university research lab, any development team at an industry and there's such a monoculture that is brewing up these technologies and sending them out into the world, that is leading to the negative outcomes that we're just now seeing in terms of their fairness in terms of their behaviour."

He says this year's protests for racial justice helped foster a conversation within the field of computing, AI and robotics.

"We're starting to see a recognition from our colleagues that we're no longer an immature field. We're no longer in startup modes. We have to mature and evolve and understand that this is no longer just an intellectual exercise, our technology is going to have real consequences," says Jenkins.