Killer robots: should algorithms decide who lives or dies?

In Geneva, complex negotiations are underway to decide if a treaty is needed to control, or even ban, lethal autonomous weapons – or killer robots. Imogen Foulkes talks to experts, lawyers, and campaigners.

"It’s about the risk of leaving life and death decisions to a machine process. An algorithm shouldn’t decide who lives or dies," says Neil Davison, Senior Policy Adviser at the International Committee of the Red Cross. 

"Do you hold the commander responsible, who activated the weapons system?" asks Mary Wareham of Human Rights Watch. 

"What if a weapon is used and developed without meaningful human control, what are the consequences of it? How do you ascribe responsibility?" ponders Paola Gaeta, an international law expert at Geneva's Graduate Institute.

"If we don’t have a treaty within two years we will be too late. Technology is progressing at a much faster pace than diplomacy is doing, and I fear the worst," warns Frank Slijper of Pax, a Dutch peace organization.

