Are killer robots inevitable? Tech world uneasy about use of AI in warfare
The use of artificial intelligence in warfare was once the stuff of science-fiction films like The Terminator, but the age of autonomous killing machines may arrive sooner than you think.
Last week, more than 3,000 employees at Google signed an open letter saying they believed that the company "should not be in the business of war."
But one expert said that trying to keep AI off the battlefield comes up against one significant issue: it's too useful.
"Most of the leading researchers in this field compare AI's impact to the impact of electricity," said Gregory Allen, an adjunct fellow at the Center for a New American Security in Washington.
"It would be laughable to try and say that we should ban the use of electricity in warfare," he told The Current's guest host Gillian Findlay.
Countries other than the U.S. are pursuing more aggressive development strategies, he said. And because so much of the research is open-source and available online, he added, there is no realistic way to stop the information from being used.
Google employees were reacting to news that the company has partnered with the Pentagon on Project Maven. While little is known about the project, the company has said its aim is to use AI to interpret video imagery.
Ian Kerr, a law and technology expert at the University of Ottawa Faculty of Law, said the project could use personal data, including facial recognition technology, to improve the accuracy of drone strikes against individuals.
"Google would be using our personal information, without our knowledge and consent, to help the Pentagon make targeting decisions," he said, "potentially about who to kill, maybe with, maybe without human involvement."
"The idea of delegating life or death decisions to a machine crosses a fundamental moral line," he said.
"Maven is a well-publicized DoD project and Google is working on one part of it — specifically scoped to be for non-offensive purposes," Google told The Current in a statement. "The technology is used to flag images for human review and is intended to save lives and save people from having to do highly tedious work."
Allen disagreed that Project Maven involves taking personal information from Google's data sets and using it in a military context. The project takes video and still images from drones and applies "a minimal amount of analysis," he said.
"This technology is counting things... it's saying: 'In this picture there are five buildings, in this picture there are three cars and there are two people.'"
"It is not even at the level of characterizing activity, as in: 'This person is walking'… much less saying: 'This person is consistent with the Google user ID X.'"
Kerr's concern is where these developing technologies can end up. Companies like Google have access to enormous datasets, he said, which nation states or other organizations could use for violent ends.
"The idea isn't that weapons should be off limits," he said. "The idea is that we do not delegate control-of-kill decisions to weapons," he said.
"We have to maintain meaningful human control when it comes to killing."
Listen to the full conversation at the top of this page.
This segment was produced by The Current's Geoff Turner and Howard Goldenthal.