
The machine on the left in the photo is called SGR-1. Built by South Korea's Samsung, it is the first "semi-autonomous" robotic killer soldier. It is armed with a machine gun and a grenade launcher, and it carries systems that detect weapons within its "optical" horizon. The SGR-1 requires a human operator's command in order to fire, and it has already been put into "service" on patrols along the border between South and North Korea.
From a technical standpoint, fully autonomous killer robots are ready, or nearly so. But an international "moral" question has now arisen: is it right for machines to decide entirely on their own whether to kill, and whom to kill?
The matter has taken the most official route possible: the UN. A committee is studying the issue and is expected to issue a ruling that would ban the use of such machines. Presumably, if that ruling comes, it will somewhere declare "let us keep the decision to kill a human right"…
But robotics companies are running an "arms race" of their own. They have invested very large sums in the relevant research, and they would hate to see that money lost to a blanket ban. On the surface the issue is "stuck" at the UN, which means plenty of wheeling and dealing is going on behind the scenes. What if the ban on autonomous killer robots were not absolute, and their use were permitted in certain cases? After all, wouldn't their use save (our own) lives? Isn't that a weighty humanitarian motive? Why was it acceptable at Hiroshima and Nagasaki and not now?
What?