Tue, Aug 22, 2017 - Page 7

Tech leaders warn against robot killers

‘PANDORA’S BOX’: Autonomous weapons are unlike other applications of AI, as they already exist. South Korea has one on its border, while others are developing their own

AFP and The Guardian, SAN FRANCISCO

More than 100 robotics and artificial intelligence (AI) leaders including Tesla founder Elon Musk are urging the UN to take action against the dangers of autonomous weapons.

“Lethal autonomous weapons threaten to become the third revolution in warfare,” said the letter signed by 116 specialists across 26 countries, including Musk and Mustafa Suleyman, cofounder of Google’s DeepMind.

“Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” the letter said. “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

“We do not have long to act. Once this Pandora’s box is opened, it will be hard to close,” they added.

The letter was released at the opening of the International Joint Conference on Artificial Intelligence (IJCAI). A UN meeting on lethal autonomous weapons had been set to convene yesterday, but was canceled and postponed until November, according to the international body’s Web site.

The specialists call for “morally wrong” lethal autonomous weapons systems to be added to the list of weapons banned under the UN’s Convention on Certain Conventional Weapons, brought into force in 1983, which includes chemical and intentionally blinding laser weapons.

“Nearly every technology can be used for good and bad, and artificial intelligence is no different. It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis,” said Toby Walsh, Scientia professor of AI at the University of New South Wales in Sydney.

“However, the same technology can also be used in autonomous weapons to industrialize war. We need to make decisions today choosing which of these futures we want,” Walsh added.

“Unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability,” Clearpath Robotics founder Ryan Gariepy said.

This is not the first time the IJCAI, one of the world’s leading AI conferences, has been used as a platform to discuss lethal autonomous weapons systems.

Two years ago, the conference was used to launch an open letter signed by thousands of AI and robotics researchers including Musk and Stephen Hawking similarly calling for a ban, which helped push the UN into formal talks on the technologies.

The UK government opposed such a ban in 2015, with the British Foreign and Commonwealth Office saying that “international humanitarian law already provides sufficient regulation for this area.”

Lethal autonomous weapons are already in use. Samsung’s SGR-A1 sentry gun, which is reportedly technically capable of firing autonomously — although it is disputed whether it is deployed as such — is in use along the South Korean side of the 2.5-mile-wide Korean Demilitarized Zone.

The UK’s Taranis drone, in development by BAE Systems, is intended to be capable of carrying air-to-air and air-to-ground ordnance intercontinentally and incorporating full autonomy, while Russia, the US and other nations are developing robotic tanks that can either be remote-controlled or operate autonomously.
