Killer robots: Experts warn of ‘third revolution in warfare’

More than 100 leading robotics experts are urging the United Nations to take action in order to prevent the development of “killer robots”.

In a letter to the organisation, artificial intelligence (AI) leaders, including billionaire Elon Musk, warn of “a third revolution in warfare”.

The letter says “lethal autonomous” technology is a “Pandora’s box”, adding that time is of the essence.

The 116 experts are calling for a ban on the use of AI in managing weaponry.

“Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” the letter says.

“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways,” it adds.

There is an urgent tone to the message from the technology leaders, who warn that “we do not have long to act”.

“Once this Pandora’s box is opened, it will be hard to close.”

Experts are calling for what they describe as “morally wrong” technology to be added to the list of weapons banned under the UN Convention on Certain Conventional Weapons (CCW).

Along with Tesla co-founder and chief executive Mr Musk, the technology leaders include Mustafa Suleyman, Google’s DeepMind co-founder.

A UN group focusing on autonomous weaponry was scheduled to meet on Monday but the meeting has been postponed until November, according to the group’s website.

A potential ban on the development of “killer robot” technology has previously been discussed by UN committees.

In 2015, more than 1,000 tech experts, scientists and researchers wrote a letter warning about the dangers of autonomous weaponry.

Among the signatories of the 2015 letter were scientist Stephen Hawking, Apple co-founder Steve Wozniak and Mr Musk.

What is a ‘killer robot’?

Killer robots are fully autonomous weapons that can select and engage targets without human intervention. They do not currently exist, but advances in technology are bringing them closer to reality.

Those in favour of such weapons argue that the existing laws of war may be sufficient to address any problems that emerge if they are ever deployed, and that a moratorium, rather than an outright ban, should be called for if this proves not to be the case.

However, those who oppose their use believe they are a threat to humanity and that any autonomous “kill functions” should be banned.

South Korean university boycotted over ‘killer robots’

Leading AI experts have boycotted a South Korean university over a partnership with weapons manufacturer Hanwha Systems.

More than 50 AI researchers from 30 countries signed a letter expressing concern about the university’s plans to develop artificial intelligence for weapons.

In response, the university said it would not be developing “autonomous lethal weapons”.

The boycott comes ahead of a UN meeting to discuss killer robots.

Shin Sung-chul, president of the Korea Advanced Institute of Science and Technology (Kaist), said: “I reaffirm once again that Kaist will not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control.

“Kaist is significantly aware of ethical concerns in the application of all technologies including artificial intelligence.”

He went on to explain that the university’s project was centred on developing algorithms for “efficient logistical systems, unmanned navigation and aviation training systems”.

Prof Noel Sharkey, who heads the Campaign to Stop Killer Robots, was one of the first to sign the letter and welcomed the university’s response.

“We received a letter from the president of Kaist making it clear that they would not help in the development of autonomous weapons systems.

“The signatories of the letter will need a little time to discuss the relationship between Kaist and Hanwha before lifting the boycott,” he added.

Until the boycott is lifted, academics will refuse to collaborate with any part of Kaist.

Pandora’s box

Next week in Geneva, 123 member nations of the UN will discuss the challenges posed by lethal autonomous weapons, or killer robots, with 22 of these nations calling for an outright ban on such weapons.

“At a time when the United Nations is discussing how to contain the threat posed to international security by autonomous weapons, it is regrettable that a prestigious institution like Kaist looks to accelerate the arms race to develop such weapons,” read the letter sent to Kaist, announcing the boycott.

“If developed, autonomous weapons will be the third revolution in warfare. They will permit war to be fought faster and at a scale greater than ever before. They have the potential to be weapons of terror.

“Despots and terrorists could use them against innocent populations, removing any ethical restraints. This Pandora’s box will be hard to close if it is opened.”

South Korea already has an army of robots patrolling the border with North Korea. The Samsung SGR-A1 carries a machine gun that can be switched to autonomous mode but is, at present, operated by humans via camera links.

Google should not be in business of war, say employees

Thousands of Google employees have signed an open letter asking the internet giant to stop working on a project for the US military.

Project Maven involves using artificial intelligence to improve the precision of military drone strikes.

Employees fear Google’s involvement will “irreparably damage” its brand.

“We believe that Google should not be in the business of war,” says the letter, which is addressed to Google chief executive Sundar Pichai.

“Therefore we ask that Project Maven be cancelled, and that Google draft, publicise and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

No military projects

The letter, which was signed by 3,100 employees – including “dozens of senior engineers”, according to the New York Times – says that staff have already raised concerns with senior management internally. Google has more than 88,000 employees worldwide.

In response to concerns raised, the head of Google’s cloud business, Diane Greene, assured employees that the technology would not be used to launch weapons, nor would it be used to operate or fly drones.

However, the employees who signed the letter feel that the internet giant is putting users’ trust at risk, as well as ignoring its “moral and ethical responsibility”.

“We cannot outsource the moral responsibility of our technologies to third parties,” the letter says.

“Google’s stated values make this clear: every one of our users is trusting us. Never jeopardise that. Ever.

“Building this technology to assist the US government in military surveillance – and potentially lethal outcomes – is not acceptable.”

‘Non-offensive purposes’

Google confirmed that it was allowing the Pentagon to use some of its image recognition technologies as part of a military project, following an investigative report by tech news site Gizmodo in March.

A Google spokesperson told the BBC: “Maven is a well-publicised Department of Defense project and Google is working on one part of it – specifically scoped to be for non-offensive purposes and using open-source object recognition software available to any Google Cloud customer.

“The models are based on unclassified data only. The technology is used to flag images for human review and is intended to save lives and save people from having to do highly tedious work.

“Any military use of machine learning naturally raises valid concerns. We’re actively engaged across the company in a comprehensive discussion of this important topic and also with outside experts, as we continue to develop our policies around the development and use of our machine learning technologies.”

The internet giant is now drawing up policies governing the use of its artificial intelligence technologies.
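
For readers unfamiliar with how open-source object recognition can be used to flag images for human review, here is a minimal sketch of that general idea in Python. It is purely illustrative: the pretrained torchvision detector, the COCO class ids and the file names are assumptions for the example, not the software or data used in Project Maven.

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # Openly available, pretrained object detector (COCO classes) - chosen purely
    # for illustration; it is not the software referred to in the article.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    def flag_for_review(image_paths, classes_of_interest, threshold=0.8):
        """Return paths of images containing any class of interest with a
        detection score above `threshold`, so a human analyst can review them."""
        flagged = []
        with torch.no_grad():
            for path in image_paths:
                image = to_tensor(Image.open(path).convert("RGB"))
                detections = model([image])[0]  # dict with boxes, labels, scores
                for label, score in zip(detections["labels"], detections["scores"]):
                    if label.item() in classes_of_interest and score.item() >= threshold:
                        flagged.append(path)
                        break
        return flagged

    # Hypothetical usage: COCO ids 3 (car) and 8 (truck) as stand-ins for
    # objects of interest in drone footage.
    # to_review = flag_for_review(["frame_001.jpg", "frame_002.jpg"], {3, 8})

The point of such a pipeline is that the model only narrows down which images a person looks at; the decision about what to do with a flagged image remains with the human reviewer.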

DeepMind explores inner workings of AI

As with the human brain, the neural networks that power artificial intelligence systems are not easy to understand.

DeepMind, the Alphabet-owned AI firm famous for teaching an AI system to play Go, is attempting to work out how such systems make decisions.

By knowing how AI works, it hopes to build smarter systems.

But researchers acknowledged that the more complex the system, the harder it might be for humans to understand.

One of the biggest issues with the technology is that the programmers who build AI systems do not entirely know why the algorithms that power them make the decisions they do.

This opacity makes some people wary of AI and leads others to conclude that it may result in out-of-control machines.

Complex and counter-intuitive

Just as in a human brain, neural networks rely on layers of thousands or millions of tiny connections between neurons, clusters of mathematical computations that act in a similar way to the neurons in the brain.

These individual neurons combine in complex and often counter-intuitive ways to solve a wide range of challenging tasks.

“This complexity grants neural networks their power but also earns them their reputation as confusing and opaque black boxes,” wrote the researchers in their paper.

According to the research, a neural network designed to recognise pictures of cats will have two different types of neurons working in it: interpretable neurons that respond to images of cats, and confusing neurons, where it is unclear what they are responding to.

To evaluate the relative importance of these two types of neurons, the researchers deleted some to see what effect it would have on network performance.

They found that neurons with no obvious preference for images of cats over pictures of any other animal played as big a role in the learning process as those responding clearly only to images of cats.
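
The deletion test described here is essentially an ablation study: remove a group of units and measure how far performance drops. Below is a minimal sketch of that idea in Python, assuming a small PyTorch classifier; the architecture, the 256-unit hidden layer and the test_loader are illustrative assumptions, not the networks or data used in the DeepMind research.

    import torch
    import torch.nn as nn

    # A deliberately tiny image classifier, used only to illustrate the ablation idea.
    model = nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 256),  # hidden layer whose units ("neurons") we ablate
        nn.ReLU(),
        nn.Linear(256, 10),
    )

    def accuracy(model, loader):
        """Fraction of examples in `loader` the model classifies correctly."""
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for images, labels in loader:
                preds = model(images).argmax(dim=1)
                correct += (preds == labels).sum().item()
                total += labels.numel()
        return correct / total

    def ablate_units(model, unit_indices):
        """'Delete' hidden units by zeroing the weights and biases that produce them."""
        hidden = model[1]  # the 256-unit linear layer
        with torch.no_grad():
            hidden.weight[unit_indices] = 0.0
            hidden.bias[unit_indices] = 0.0

    # After training, compare accuracy before and after knocking out 32 random units:
    # baseline = accuracy(model, test_loader)        # test_loader: your own evaluation data
    # ablate_units(model, torch.randperm(256)[:32])
    # print(baseline - accuracy(model, test_loader)) # performance drop caused by ablation

Comparing the size of the accuracy drop when "cat-selective" units are removed against the drop when apparently uninterpretable units are removed is what allows the researchers to say both kinds matter to the network.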

They also discovered that networks built on neurons that generalise, rather than simply remembering images they had been previously shown, are more robust.

“Understanding how networks change… will help us to build new networks which memorise less and generalise more,” the researchers said in a blog.

“We hope to better understand the inner workings of neural networks, and critically, to use this understanding to build more intelligent and general systems,” they concluded.

However, they acknowledged that humans may still not entirely understand AI.

DeepMind research scientist Ari Morcos told the BBC: “As systems become more advanced we will definitely have to develop new techniques to understand them.”