automatic execution

The vehicle in the image could be a small tank. And indeed it is, with a difference: it is robotic and completely autonomous. “Completely autonomous” means that it is guided by “artificial intelligence” software, so that it decides “on its own” who is “enemy” and who is “friend”… and kills, immediately and effectively…

Its experimental construction (like that of other, similar technologies) has provoked a wave of “criticism” among various artificial intelligence experts. What is their problem? “Ethics”: “the ethics of war,” “the ethics of murder”… According to this “concern,” with drones there is at least still a human operator (at a great distance, in the safe environment of some office…) who makes the final decision to kill. Is it ethically correct, the critics ask, for that final decision to be handed over to the machine?

Subtle concerns of people who believe they are in control of their decisions! If one asks Afghans and Pakistanis (or Palestinians), they will say that the military use of drones is completely unethical, since the operator risks nothing at all.

If, in turn, one asks the communities of black proletarians in American cities, they will say that the cops are already killing them “mechanically”: their murderous decisions are no longer restrained by any moral inhibition, since the killers are acquitted one after another.

What, then, is the difference between being killed by a robot and being killed by a bipedal, robotized and hacked astronaut?

cyborg #03 – 06/2015