AI must turn focus to safety, Stephen Hawking and other researchers say

Future of Life Institute open letter asks researchers to examine legal, ethical, economic impacts

Physicist Stephen Hawking, a signatory to the open letter, has spoken publicly about the risks of artificial intelligence. (Ted S. Warren/Associated Press)

Artificial intelligence researchers should focus on ensuring AI systems "do what we want them to do," rather than just advancing and improving the capabilities of the technology, researchers and experts say.

An open letter from the Future of Life Institute this week called for expanded research to ensure AI systems are "robust" and benefit society.

Works of science fiction like the film Terminator Genisys have long portrayed artificial intelligence taking over the world and destroying humankind. The open letter says it's time to focus research on making AI safe and beneficial to humans. (Paramount)

The letter says: "Our AI systems must do what we want them to do."

Max Tegmark, co-founder of the institute, said in an email that the letter is mainly directed at the AI research community and anyone who funds it.

It has been signed by hundreds of researchers and other AI experts, along with physicist Stephen Hawking and Elon Musk, founder of Tesla Motors and SpaceX, who have both spoken publicly about the risks of AI.

In December, Hawking told reporters that "the development of full artificial intelligence could spell the end of the human race." Musk has said that AI is probably "our biggest existential threat" and "potentially more dangerous than nukes."

Both Hawking and Musk sit on the scientific advisory board for the Future of Life Institute, which describes itself as a "volunteer-based research and outreach organization working to mitigate existential risks facing humanity," especially human-level artificial intelligence.

'Huge' potential benefits

While the group itself is focused on the risks of AI, the letter also says the potential benefits of the technology are "huge," and that it may even be able to eradicate problems such as disease and poverty. It cited recent advances in artificial intelligence capabilities ranging from speech recognition to self-driving cars. Most research to date has been on capabilities "neutral with respect to purpose," it added, but suggested this is an opportune time to change that.

Bart Selman, a Cornell University artificial intelligence researcher who signed the letter, said the aim is to get AI researchers and developers to start paying closer attention to AI safety issues.

"For policymakers and the broader public, the letter is meant to be informative about some of these issues without being alarmist," he added in an email. "We believe that AI systems will provide many benefits to society but we should be aware of potential risks and limit those."

The letter points to a document recommending specific research priorities, including:

  • Computer science research to test systems, verify that they don't have unwanted behaviours and consequences, and make sure they are secure and can't be manipulated by unauthorized parties.
  • Law and ethics research into issues such as who is liable for the behaviour of self-driving cars, whether autonomous weapons should be banned and what policies should protect people's privacy in light of AI's ability to interpret large amounts of surveillance data.
  • Economic impacts, such as forecasting what jobs will be taken over by AI and finding ways to manage the adverse effects.