Lethal autonomous weapons: Could machines replace soldiers in the future to create a safer world?


Will the post-apocalyptic future imagined by James Cameron in his famous movie The Terminator ever become our reality? That is the question raised by the Bolivian delegate at the United Nations Security Council session held on 19 May 2018.

The main goal of the UNSC is to ensure security worldwide, and to do so the council must keep up with evolving technologies and face the consequences of a new type of warfare. The delegations have to bear in mind that artificial intelligence can now be used in war: intelligent machines are being built to work like humans, including on the battlefield. The question is: could machines replace soldiers in the future to create a safer world?

What will the future look like? A world in which machines have taken over the human species, or one in which they ensure international security and peace?

Combat drones are by now well known, but the delegates were wary that technology might be going too far and thus threaten international peace. Lethal autonomous weapons (LAWs), the so-called "killer robots," represent the epitome of scientific progress; nevertheless, some delegates were skeptical about whether using them is moral and ethical. A ban on this kind of weapon was therefore discussed by the council.

The delegates expressed their opinions on the use of these autonomous military robots and the potential threat they could pose to international security.

Ethical issues were the first the council tackled. "Humans as a species are not reliable," claimed the delegate of Equatorial Guinea, advocating the use of LAWs. These machines could spare soldiers from dying in defense of their country. "It is not about machines shooting helpless children but about protecting the lives of soldiers and defending our people," the delegate added. The delegate representing the United States went further: "emotions could affect the soldiers' judgement, while robots are designed to select and attack military targets."

The Swedish delegate's point of view was quite different; she appeared outraged by these statements. For her, the lives of civilians should not depend on a machine programmed to kill. Who will be held accountable for mistargeting if a human being is not the one pulling the trigger? The whole council seemed to fear robots making mistakes, and the US therefore suggested that these systems ought to be tested carefully.

The delegate of Kuwait favored the use of LAWs only to a certain extent. Such weapons could be faultily produced or end up in the wrong hands. "What if LAWs fall into the hands of armed groups and terrorists?" she asked. The UK was concerned about Daesh using them. Preventing armed groups from acquiring them seems very hard, and a clear-cut solution for regulating their production and use looks out of reach. Even if LAWs were banned, they could still become tools of terrorists. The key issue is to protect civilians and to make sure these devices do not turn into killing machines. For France, a ban was not the solution; instead, strict controls had to be placed on trade and production.

The delegates representing the US and the UK were both against a ban on LAWs, and an alliance between them soon became clear. Later, the US and Russia proudly shook hands, celebrating their shared position. The US nonetheless made one thing very clear: "we are in favor of LAWs only if it is under human control; it has to be monitored." These nations, as well as Bolivia, looked eager to face the challenges of the future: for them, the use of LAWs is inevitable. The UK government has opposed such a ban since 2015, with the Foreign Office stating that "international humanitarian law already provides sufficient regulation for this area".

China made a very bold entrance into the debate. The Chinese delegate reshuffled the deck, declaring that China was strongly in favor of banning LAWs and pointing to "the danger of technology". Sweden then seized the opportunity to attack the US. "The US is a country that calls itself a superpower, a model of morality, but it is actually killing thousands of civilians in Pakistan merely by controlling intelligent machines from California. Is it normal that powerful countries could become unstoppable?" the Swedish delegate said. The US felt insulted and disgraced by this attack, and responded that it acted for the sake of international peace. The American delegate then accused China of hypocrisy, as it was itself investing heavily in these very technologies.

Whether in favor of or against a ban on LAWs, the whole council agreed on one point, and a consensus was reached: LAWs ought to be regulated. What remained very blurry at the end of the debate was the very definition of LAWs. What are they? A lethal military weapons system that is autonomous in at least one of its critical functions, namely identifying, selecting, or attacking targets? Or a machine able, once activated, to operate without further human intervention, regardless of whether human intervention remains possible? The latter definition was promoted by the US and its allies, who believed a human should be able to switch the system off at any time.

It seems machines are not yet left on their own, and that human beings still retain control over LAWs. Nevertheless, as Bolivia pointed out, James Cameron may have been warning us about the danger of letting machines decide whether a target should be eliminated. We must be wary…

Clémence Blanc
