The invasion of killer robots

A tiny quadcopter with a diameter of just three centimeters can carry one to two grams of explosive. You can easily order as many as you want from various drone manufacturers in China. You can program it along the lines of: “here are a thousand photos of the kind of object you want to target.” One gram of explosive can punch a hole through nine millimeters of steel, so it can probably open a hole in someone’s head. About three million of these fit in the trailer of a truck. Send three such trucks up I-95 [the main highway axis of the eastern coast of the USA] and you have ten million weapons attacking New York. They don’t need to be particularly effective; it’s enough if 5 to 10% hit their target.
Soon we will have manufacturers producing millions of such weapons, and people will be able to buy them the way they now buy conventional weapons, except that millions of conventional weapons are useless unless you have millions of soldiers. Here, all you need is two or three specialists to write the code and activate them. You can therefore imagine that in the near future, in various parts of the planet, people will be hunted. They will hide in shelters and devise techniques to avoid being recognized. This is the darkness that autonomous weapons systems bring; and it is already spreading.

Stuart Russell,
professor of computer science at the University of California, Berkeley

Russell’s nightmare scenario concerns lethal autonomous weapons systems (LAWS): weapons that can select and strike targets on their own; machines that can make the decision to kill people; in short, killer robots. The world was caught off guard when the military use of drones began. What will happen now with LAWS?
Although the hypothetical example in the introduction refers to drones, the risk it highlights is not confined to them. The military use of drones is already common: at least ten states officially deploy them in military operations. Self-driving cars are also a reality. Twenty years ago a computer defeated Kasparov at chess, and more recently another computer taught itself to beat humans at Go, a Chinese strategy game whose possibilities are too vast to be exhausted by brute-force calculation. In July 2016, the Dallas police sent a robot loaded with explosives to eliminate a gunman who had shot at police officers.1

But with LAWS, in contrast to the Dallas robot, humans set the parameters of the attack without knowing the specific target. The weapon system activates, searches for anything that falls within its parameters, and strikes. Examples of such parameters already exist, even if indifference keeps them from causing shivers of horror: entire fleets of warships in the South China Sea, the military radars of most states, a large number of artillery pieces on the European field; they exist, we just don’t grasp what they mean. Now multiply all this, add the active military conflicts, add the billions invested in related research programs, add the ambitions and fears of competing states, add the underground activity of state and non-state structures, and you can imagine alarming parameter sets: all power stations, all schools, all hospitals, all men of military age, all men carrying weapons, all people following a specific dress code… Let your imagination do the rest.

While these sound like horror scenarios you pay to watch at the cinema, killer robots will soon be at our doorstep thanks to the US, Russia, and China leading the development race. “In reality, there is no technology missing,” Russell says. “All the necessary technological components are already available on the market. It’s simply a matter of the scale of funding that will be invested.”

The American X-47B during a test flight over the Atlantic.

LAWS are generally divided into three categories. Based on a classification proposed by Human Rights Watch and now generally accepted, in the first category the human is in the loop – the machine executes its mission under human control, reaches the target and waits for an order to strike. In the second, the human is on the loop – the machine executes the mission and strikes the target, but the human can intervene, cancel its programming and stop it. And finally in the third, the human is out of the loop – the machine executes its mission and that’s all; no control, no recall, no interruption of operation.
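The three categories can be read as three settings of a single authorization gate between the machine and the trigger. A minimal illustrative sketch (the names and the function are ours, not any military standard):

```python
from enum import Enum

class HumanRole(Enum):
    IN_THE_LOOP = "human must authorize each strike"
    ON_THE_LOOP = "human may veto an imminent strike"
    OUT_OF_THE_LOOP = "no human control after launch"

def may_strike(role: HumanRole, human_authorized: bool, human_vetoed: bool) -> bool:
    """Return whether the machine is allowed to fire, per category."""
    if role is HumanRole.IN_THE_LOOP:
        return human_authorized          # waits for an explicit order
    if role is HumanRole.ON_THE_LOOP:
        return not human_vetoed          # fires unless a human cancels in time
    return True                          # out of the loop: no gate at all
```

The sketch makes the asymmetry visible: in the first category human silence means no strike, in the second human silence means a strike, and in the third the human variable does not appear at all.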

The discussion about autonomous weapon systems has nothing futuristic about it; LAWS are already here in many ways. Many states field defensive systems with autonomous functions that can select and hit targets without human intervention, theoretically recognizing aggressive actions and activating to neutralize them. In most cases human intervention remains technically possible, but these systems are designed for situations that evolve so rapidly that blocking them through human initiative is practically impossible. The United States, for example, has the Patriot air-defense system to protect against missiles, airplanes and drones, while the U.S. Navy has the equivalent Aegis system.

Until now, all the states that have openly proceeded to use LAWS maintain, directly or indirectly, that they do so for defensive reasons; that the final decision on the use of lethal force has not been delegated to the machines’ algorithms; that such systems rationalize the use of violence in cases of “self-defense”; that they allow the eventual withdrawal of prohibited weapons, such as mines; that they improve the response to asymmetrical threats, where the human factor is too slow to recognize the danger in time. The argument about “defensive weapons” is, of course, endless; after all, nuclear weapons also served “defense.” Propaganda aside, the distinction between “defensive” and “offensive” systems, whether fully autonomous or operating under approval and control, is decidedly blurred. A characteristic example is the South Korean SGR-A1 (made by Samsung, which, besides state-of-the-art smartphones, apparently also produces smart meat grinders), an autonomous fixed robot installed along the demilitarized zone between North and South Korea, able to kill anyone who attempts to move within it. The black metallic rotating box is armed with a machine gun and a grenade launcher. South Korea maintains that the robot signals a human operator before it can be activated, so that a human factor precedes every decision to open fire; but there are many testimonies that the robots can operate autonomously. Which mode is active at any given moment? Nobody knows…

Meanwhile, purely offensive systems in default lethal mode already exist. Take for example the Israeli Harpy and its second-generation successor, the Harop, which enters the field, loiters until it detects a target, and then attacks as a kamikaze, turning itself into a bomb. The Harpy is fully autonomous; the Harop belongs to the “human on the loop” category. In April 2016 it became known that an Israeli Harop had hit a bus full of Armenians in the Nagorno-Karabakh region, where Armenia and Azerbaijan have been clashing militarily since the two states gained independence. According to reports, Israel has sold the kamikaze drone to at least five countries.2

Armies want LAWS for a number of reasons. To begin with, they cost less than training soldiers. They also exponentially increase attack capability and force projection. Without humans in the loop, autonomous robots can be sent on dangerous missions without the cost of operator casualties even entering the calculation. Finally, target selection through algorithmic processing allows faster engagement even in areas where communication networks have been destroyed and no orders can be received from the command center.

Israel has openly declared its intention to move toward full robotic autonomy as soon as possible. China systematically reveals little about its intentions, plans, and achievements. In 2013, Russia created its own version of a combined DARPA and U.S. Naval Research Laboratory for autonomous systems, while Deputy Prime Minister Rogozin called on Russian industry to build weapons that “will strike on their own.” Britain, India, and South Korea are also among the states with independent programs for designing and developing autonomous robotic weapons.
The United States is certainly among the states, if not the first among them, that started the race to acquire self-activating robot killers. Nevertheless, in international forums it pretends to set limits and barriers (so that murderous attacks remain under human rather than machine control; such is the “moral” caveat!). This is a deceptive stance, clearly aimed at preemptively defusing criticism and placing obstacles in the way of those who follow.

The Israeli Guardium. The soldier’s gesture probably belongs to some kind of human-machine communication code.

It is indicative that as early as 2001 the American Congress legislated the army’s obligation to convert one-third of its vehicles to autonomous, unmanned operation by 2015; the goal was not achieved, but it clearly shows the trend. In 2012, the Department of Defense issued Directive 3000.09, laying out the general specifications of the autonomous weapons systems the American military may use. According to it, American LAWS must be designed in a way that allows commanders to exercise “due human judgment over the use of lethal force” by LAWS. Many assumed that this provision would confine American weapons at least to the “human on the loop” category and rule out full operational autonomy. But nowhere does the directive explicitly require the possibility of a human decision at every stage of a LAWS’ action; on the contrary, the reasonable interpretation, around which the military organizes its tactics, is that within military operations command can decide to activate LAWS, its responsibility being to oversee that they act according to specifications. The commander of American forces in Iraq, for example, could order the use of killer robots against armed resistance in general, or for guarding strategic land routes, or for protecting American military installations, and subsequently confirm that they indeed completed their mission successfully and within their orders. Beyond formal specifications and moral ambiguities about “human control,” the established tactic of the American military is in any case to label as “semi-autonomous” new weapons systems about to enter operational readiness, so as to avoid unnecessary and indiscreet questions.
One such example is the LRASM (Long Range Anti-Ship Missile), which hunts ships within a large radius, distinguishes by its own processes which are enemy and which are not, and ultimately decides where to strike; it is labeled “semi-autonomous,” so it does not fall under “due human judgment.” In a 2014 test, such a missile was launched from a B-1 bomber; until the middle of its mission it was under crew control, but it then cut off all communication with its operators and entered fully autonomous operation. Similar are the British Brimstone missiles, which follow the “fire and forget” doctrine: operators only define a large area of operation, and the missile itself then distinguishes between military and civilian vehicles and makes the final choice of target. Moreover, these missiles can communicate with each other in order to share targets and strike more effectively.
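The “share targets” behavior attributed to the Brimstone can be pictured as a deconfliction problem: each missile claims a target that no sibling has already taken. A toy sketch, assuming simple 2D positions and a greedy nearest-pair rule (the names and the rule are ours for illustration; nothing here reflects the actual, classified logic):

```python
from itertools import product

def share_targets(shooters: dict, targets: dict) -> dict:
    """Greedy deconfliction: each target is claimed by at most one shooter.

    shooters/targets map an id to an (x, y) position; shooter-target
    pairs are considered in order of increasing distance."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    pairs = sorted(
        product(shooters, targets),
        key=lambda st: dist(shooters[st[0]], targets[st[1]]),
    )
    assignment, taken = {}, set()
    for s, t in pairs:
        if s not in assignment and t not in taken:
            assignment[s] = t       # this shooter claims this target
            taken.add(t)            # no other shooter may claim it
    return assignment
```

Greedy assignment is the simplest possible policy; it merely illustrates why inter-missile communication pays off: without the shared `taken` set, every missile would converge on the same nearest target.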

In November 2014, the U.S. Department of Defense announced the “third offset strategy.” The first, from the end of WWII until the Vietnam War, was based on nuclear weapons. The second, roughly from 1975 to 1989, prioritized technological superiority as a response to the numerical military advantage of the Warsaw Pact, especially in Europe. The third and latest also focuses on technology, but aims for the U.S. to be the first to make the leap to the new technological paradigm. The areas of focus, as defined, are characteristic: robotics, autonomous systems, big data, miniaturization technologies, advanced industrial methods that can respond directly to military needs, and deeper collaboration between the armed forces and leading private sector companies.

If robotic autonomous weapons systems are one leg of the paradigm shift in military affairs, the enhanced soldier, the super-soldier, is the other. At an event of the American think tank Atlantic Council in May ’16, Deputy Secretary of Defense Robert Work explained that the US does not intend to develop the Terminator. “I think of it more in terms of Iron Man – the ability of a machine to assist humans, while humans still retain control, but the machine makes them much more powerful and capable.” In military terminology this model is called “centaur fighting,” or human-machine integration. Here LAWS play the role of the last resort, the ultimate solution; so that it need not be reached, the emphasis must fall on the digital upgrading and mechanization of soldiers. The goal, according to Work, is to use artificial intelligence and robotics to upgrade rather than replace soldiers. Nevertheless, he continued, he is concerned that adversaries may develop fully autonomous weapons systems, in which case the US will be forced “to delegate authority to machines,” because “humans simply cannot act equally fast.” The problem for the US, according to Work, is that it holds no monopoly on the key factor of developments, information technology, which today is driven more by market companies than by military requirements. He went on to compare the current era to the interwar period and to call on the US to emulate the German invention of blitzkrieg; the only thing he seemed to forget was how the Nazi “lightning war” ended.3
With the blessings and the capital of the Department of Defense, the American army has embarked on a steady race of testing. Among the most advanced projects is the X-47B by Northrop Grumman, a permanent supplier of the American army. The X-47B is the first autonomous combat aircraft to operate from an aircraft carrier. The test model looks like it came straight out of a science-fiction movie: the curved, gray aircraft takes off from the carrier, executes its predetermined mission, and returns. In 2015 the X-47B was refueled in flight without any human intervention; in theory this means that, maintenance aside, it could execute missions continuously without ever landing. This has already been achieved by the X-37B, Boeing’s experimental model, which began in 1999 as a NASA program but was transferred to the Department of Defense in 2004 and is used for tests by the Air Force (although the secrecy surrounding it is such that it is practically unknown whether these are actually tests or regular operational activity). The unmanned craft is launched into orbit and then flies entirely on its own. In its last completed mission it remained in continuous flight for two years, from May ’15 to May ’17. On September 7, ’17 it launched again and continues to fly above our heads.

The South Korean SGR-A1.

Among the new technologies, one that has caused great excitement is swarms: weapon systems that move in large formations, with a handler far away at a computer. Imagine hundreds of small drones all moving together, a deadly flock of birds next to which the nightmare conjured up by Hitchcock would look like child’s play. The weapons communicate with each other to accomplish a mission, through a method called collaborative autonomy. This is already happening. Three years ago, a small fleet of autonomous warships sailed a trial run down the James River. In July 2016, the U.S. Office of Naval Research tested 30 drones which, launched in formation from a ship at sea, managed to break formation, carry out their mission, and then reassemble.
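The “reassemble” phase of such a test can be sketched with the oldest trick in swarm robotics: every member steers toward the group’s centroid. A minimal illustrative sketch (our own simplification; real collaborative autonomy involves far more, such as collision avoidance and decentralized communication):

```python
def reassemble_step(positions, speed=1.0):
    """One update of a 'reassemble' phase: every drone takes one step of
    length `speed` toward the swarm's centroid. Repeated, this collapses
    a scattered swarm back into a tight formation."""
    n = len(positions)
    cx = sum(p[0] for p in positions) / n      # centroid x
    cy = sum(p[1] for p in positions) / n      # centroid y
    new_positions = []
    for x, y in positions:
        dx, dy = cx - x, cy - y
        d = (dx * dx + dy * dy) ** 0.5
        if d <= speed:                         # close enough: snap to centroid
            new_positions.append((cx, cy))
        else:                                  # otherwise move one step toward it
            new_positions.append((x + speed * dx / d, y + speed * dy / d))
    return new_positions
```

Calling `reassemble_step` in a loop shrinks the swarm’s spread at every iteration; the point of the sketch is only that “reassembly” needs no central choreographer, just a rule each member applies locally.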

Until recently DARPA (Defense Advanced Research Projects Agency), the agency that begat the internet, was at the forefront of military technology and innovation, but today private companies set the pace. In July 2016, then U.S. Secretary of Defense Ash Carter inaugurated in Boston a branch of the “Defense Innovation Unit Experimental” (DIUx), a body designed to draw technology companies into collaboration with the Department of Defense; the unit’s first office had already been operating in Silicon Valley. For 2017 alone, the DIUx budget was $72 million. Among the program’s initial goals was the complete design of a drone capable of flying indoors, operating autonomously and moving in the field without GPS assistance; also, the development of machine-learning technology enabling machines to scan millions of social media posts, search for specific photographs and collect suspicious posts, in order to provide “early warning of extremist activity online.”

Not to be left behind, Britain is developing Taranis, a stealth combat drone similar to the X-47B, with the difference that this one will operate from the ground. China, although it keeps its achievements in military technology under a veil of secrecy, has recently tested the Sharp Sword, its own stealth combat drone, and there are rumors about a similar aircraft, the Anjian (“dark sword”), though little has been revealed about its development. The Chinese are also testing a pixelated tank camouflage that looks like it came straight out of Minecraft. Russia, for its part, has the T-14 Armata tank, which features a remotely operated machine-gun turret rotating through 360 degrees. For now the tank requires a crew of three, but the manufacturer has announced that it will eliminate the need for human operation within the next few years. Russian plans also include a fully robotized unit with artificial intelligence, expected to be operationally ready within the next decade. Israel is already moving in this direction with the Guardium, which patrols its borders. “The Guardium is based on a unique specialized algorithmic system that functions as a brain and provides the robot with the ability to make decisions,” Israel Aerospace Industries declares on its website.
The combination of all these developments, and of their counterparts we know nothing about, will inevitably lead to a new arms race, similar to the one that produced the proliferation of nuclear and conventional intercontinental weapons, since each innovation acts as a means of pressure on competitors. What matters is not only what each state achieves in the field of military technology, but also what its rivals believe it has achieved and what they plan to do in response.

The Russian robot Platforma-M which was first demonstrated in Sevastopol, Crimea, in 2015.

The development of robotic weapons systems with autonomous decision-making capabilities will certainly not remain confined to the field of military operations. The results they promise are ones that law-enforcement and control services cannot ignore. The army, moreover, is already being tried out in many states as a tool of internal control (and if called to such a mission, it will not arrive with batons and tear gas), while police forces are being systematically militarized. Thus, in August 2015 the state of North Dakota passed a law allowing police to arm their drones with tasers and rubber bullets. In Texas, a company began promoting Cupid (a name borrowed from the winged god of love with the bow), a fully autonomous quadcopter operating within the framework of the state’s Stand Your Ground law: if someone illegally enters your property, Cupid will ask them to leave, and if they refuse it will strike them with a taser, and keep striking until the police arrive.

The fact that no force imposes limits on military technology (the anti-nuclear movement is by now nothing but “ancient history,” a bedtime story for the militarists) makes the already fragile boundary between police and military use of new weapons systems even more permeable. Tear gas is supposedly banned in warfare by a series of international conventions; that has not prevented any police force from drowning even peaceful protest gatherings in chemical agents, whenever law enforcement deems it expedient. All the more so now that robotic tools promise enormous effectiveness at relatively low cost. Emblematic is the example of the South African Skunk Riot Control Copter. The eight-rotor drone can fire 20 rounds per second from four barrels, “stopping any crowd in its tracks,” according to the manufacturer’s website. The Skunk can be armed with pepper balls, paintballs, or plastic bullets, can carry up to 4,000 rounds, and mounts lasers that cause temporary blindness. It also features cameras and a loudspeaker. Demand is so high that the company making it has set up two additional factories to meet orders from security services around the world.

If this is the situation with the informatization/robotization of war (and we have focused only on LAWS, without touching other crucial dimensions such as cyberwarfare), what are the responses of its opponents? From the anti-war movement: none! The first invasion of Iraq, the “war on terror,” the war in Afghanistan, the second invasion of Iraq, the proxy campaigns in Africa, the open interventions in the Middle East all found the metropolitan societies lined up behind the war propaganda of their states; in the rest of the planet, opposition to war is paid for in blood… Today, those who have a thousand reasons to oppose the unleashing of technology’s bestial capabilities for the needs of war remain silent and inactive, under the weight of the crisis but also of their incorporation into the Spectacle.

The void of an authentic, contemporary anti-war response has been filled, as usually happens in such cases, by the international “civil society,” that is, by non-governmental organizations. In 2012, on the initiative of Human Rights Watch, approximately 60 NGOs formed the Campaign to Stop Killer Robots. Beyond publicity through the media and documentation supported by academics, the Campaign’s main field of action is the UN Convention on Certain Conventional Weapons, a body of 125 states which takes decisions by unanimity and whose only achievement worth showing in its decades-long life is the prohibition of the use of anti-personnel mines, which is of course systematically violated without consequences. The dialogue conducted on the Campaign’s initiative (in which just 12 states have provisionally declared themselves in favor of a prohibition of LAWS, among them Costa Rica, Ecuador, Palestine, Bolivia, Ghana and the Vatican, states that would have difficulty acquiring robotic weapons anyway) looks worse than the medieval theologians’ debates about the sex of the angels. How are offensive LAWS to be distinguished from defensive ones? The starting point is filthy anyway: the discussion about prohibition concerns only the “offensive” systems, leaving the “defensive” ones aside; thus the Israeli Harpy and Guardium have been classified as defensive, therefore permitted. How is the autonomous operation of a killer robot defined? Is a drone autonomous when it flies over a highway in Afghanistan, identifies a convoy and strikes a suspect included in its database, while a controller several thousand kilometers away has no practical possibility of intervening within a fraction of a second to stop it? And what constitutes essential human control? A decision at every stage of the robot’s operation? General commands? Instructions from other computers that are themselves subject to human control? A bureaucratic carousel, hostage to diplomacy and rivalries, incapable of stopping any development. The era of killer robots has already dawned, and we are condemned to live it in full…

Harry Tuttle
cyborg #11 – 02/2018

An LRASM missile is launched from a B-1 for target search.

“The invasion of killer robots” was based on the articles:
– Killer Robots Are Coming And These People Are Trying To Stop Them
– Why Should We Ban Autonomous Weapons? To Survive
– AI Could Revolutionize War as Much as Nukes
– the video documentary The Dawn of Killer Robots
– and the report Mapping the Development of Autonomy in Weapon Systems of the Stockholm International Peace Research Institute

  1. On July 8, 2016, in Dallas, during a peaceful demonstration against police violence following repeated killings of African Americans by police officers, 25-year-old Afghanistan veteran Micah Xavier Johnson took up an elevated firing position and began shooting at police officers, wounding three and killing five. After a chase, Johnson was cornered on the second floor of a parking garage. Following lengthy negotiations, the police sent against him a bomb-disposal robot modified to carry an explosive device. When it reached Johnson, the device was detonated, killing him. Although it was a simple, remotely operated robotic mechanism, the Dallas case is considered the “first official” use of a killer robot by a police force. ↩︎
  2. Read the Washington Post report in the appendix “Israeli kamikaze-drone spotted in the clashes in Nagorno-Karabakh”. ↩︎
  3. For an indicative recording of the developments regarding the enhanced soldier, read the article by Wired in the appendix “The soldier of the future will be enhanced and invulnerable”. ↩︎