Spark

Building AIs that build other AIs

Training AIs to build other AIs. Brilliant idea? Or the birth of Skynet?
Having AIs build new AIs isn't as apocalyptic as it may seem.

Imagine a world where humans are no longer required to build computers, because the computers are building themselves.

Does that sound a bit apocalyptic? Of course.

It's not likely to happen any time soon. However, thanks in part to a shortage of skilled programmers and researchers capable of designing AIs, computer scientists at Google and elsewhere are turning to another option: getting AIs to build AIs.

And although that may sound like a dangerous idea, it's actually a way to develop the complex algorithms that govern AIs more safely and efficiently than the current approach, in which humans test code changes one at a time and arrive at improvements through trial and error.

"Now we can have a computer try those types of variations automatically," says Pieter Abbeel, a computer scientist at the University of California, Berkeley who specializes in machine learning and robotics.

In practice, it works by having a simple AI propose algorithms for a more capable machine and then monitor a test of each one. "You're automating the process to make faster progress," he says, adding that it's better to think of the simpler AI as a piece of code rather than a fully developed artificial intelligence.
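As a rough illustration of what "trying those variations automatically" can look like, here is a minimal sketch of an automated search loop. The search space, parameter names and scoring function are assumptions invented for this example, not a description of Abbeel's or Google's systems; real setups search over network architectures and learning rules rather than a handful of settings.

```python
import random

# Illustrative search space: each candidate is a small "recipe" for a learner.
# These names and values are assumptions for the sketch, not real system settings.
SEARCH_SPACE = {
    "learning_rate": [0.1, 0.01, 0.001],
    "num_layers": [2, 4, 8],
    "activation": ["relu", "tanh"],
}

def propose_candidate():
    # The "simple AI": here, nothing smarter than a random pick from the space.
    return {name: random.choice(options) for name, options in SEARCH_SPACE.items()}

def train_and_evaluate(candidate):
    # Stand-in for training a model with this recipe and scoring it.
    # A real system would train a network and return validation accuracy;
    # this toy score just lets the loop run end to end.
    return -abs(candidate["learning_rate"] - 0.01) - 0.01 * candidate["num_layers"]

def automated_search(num_trials=20):
    best_score, best_candidate = float("-inf"), None
    for _ in range(num_trials):
        candidate = propose_candidate()        # propose a variation
        score = train_and_evaluate(candidate)  # test it automatically
        if score > best_score:                 # keep the best so far
            best_score, best_candidate = score, candidate
    return best_candidate, best_score

if __name__ == "__main__":
    print(automated_search())
```

The point of the loop is simply that the outer process, not a human, decides which variation to try next and whether it did better than the last one.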

As well, it may mean that the AI proposes changes that humans wouldn't even think of. "It might be that, fundamentally, having AI build AI can actually build different kinds of AI than humans can build," Pieter says.

Ultimately, his hope is that an AI-built AI will be able to draw on the prior experience of the computer that designed it. That experience of learning would transfer, so the new AI already has a base of knowledge it can leverage. "That really improves the speed of learning."
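Loosely sketched, that kind of transfer can be as simple as starting a new learner from the parameters an earlier system has already learned, rather than from scratch. The parameter names below are made up for illustration; this is the general warm-starting idea, not Abbeel's specific method.

```python
import copy

def warm_start(fresh_params, prior_params):
    # Start the new learner from whatever the earlier system already learned;
    # anything the prior system never learned keeps its fresh initialization.
    params = copy.deepcopy(fresh_params)
    for name, value in prior_params.items():
        if name in params:
            params[name] = copy.deepcopy(value)
    return params

# Purely illustrative "knowledge" carried over from the designing system.
prior = {"edge_detector": [0.8, -0.2], "shape_detector": [0.1, 0.5]}
fresh = {"edge_detector": [0.0, 0.0], "shape_detector": [0.0, 0.0], "task_head": [0.0]}

print(warm_start(fresh, prior))  # the new AI begins with a base of knowledge
```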

But this does raise a serious issue in AI development.

Human computer scientists sometimes don't understand how an AI arrives at the decisions it makes. So doesn't stacking one layer of complex machine learning on top of another increase that risk?

Pieter acknowledges that this is potentially a very challenging problem in this type of AI development. But he's hopeful that the next generation of AIs will not only be capable of solving more difficult problems, but also be able to explain their logic.

"When an AI does something, you want it to explain why it does what it does," he says.

For example, if a self-driving car is stopping at stop signs because it's recognizing the post the sign is mounted on rather than the sign itself, the explanation flags a problem that needs to be addressed - something that might never be caught if the AI couldn't explain its decision-making process.

"It may turn out that the only time there were posts in our training data was when they were underneath stop signs," he says. "You've got to go and collect new data that also contains other signs so the AI knows it's the sign that matters, and not the post."

Pieter says the process is still in its infancy - AI-built AIs are, at this point, solving simple problems like navigating mazes and pushing objects to targets.

"The next step is seeing whether this can be broadened and if a wider range of skills can be learned very quickly."