Killer robots: something to worry about?

October 4, 2017

Elon Musk, together with other robotics developers, recently wrote an open letter to the UN. The letter warns about the future development of 'killer robots' and urges the UN to work towards a ban on autonomous weapons. Killer robots do sound dangerous. But are "killer robots" really the application of robotics and artificial intelligence (A.I.) we should be most concerned about?

Not really, if you ask Katrine Nørgaard, military anthropologist at the Danish Defence Academy: "The military is quite sceptical in relation to 'killer robots'".

According to Katrine Nørgaard, the vision of autonomous killer robots, often pictured as "the Terminator" acting on its own on a battlefield, is far from reality. The unmanned drones and robots used in actual military operations are remotely controlled by human operators and are used for tasks such as surveillance and intelligence gathering. Like all other technological systems used in warfare, drones and the like must comply with international law.

"A ban on the further development of autonomous weapons systems will not keep hostile nations and groups from breaking international law. The military's interest in robots for warfare lies in intelligence gathering, handling of massive amounts of data, protection of its own forces and minimization of civilian losses", says Katrine Nørgaard.

Killer robots, it appears, are not of military interest, in Denmark at least. Instead, the keenest interest lies in systems that combine human operators and technology. The open letter to the UN points to the need for an open and broad international debate on the rapid development of robotics and A.I. What rules and agreements should be in place for the use of these new technologies? Do we need international conventions, like those for biological and chemical weapons?

We have now heard the warning from Elon Musk and others, but what do other experts think? What does the broader public think? At the Danish Board of Technology Foundation, we think a broader debate is needed. What would we like to use robots for? What should they be able to do? How do we think A.I.-based systems should work, and how should they be controlled and governed?

At the moment we are organising a number of public debates across Europe on the dual use of A.I. and robotics. We hope these will start a broad public debate on the future use of A.I. and robot technologies in warfare.

The Danish Board of Technology is a partner in the EU flagship project "The Human Brain Project" (HBP). In the project, we work on creating public dialogue on the social and ethical issues it raises. Read more about our work.