Overcoming the Threat of Autonomous Weapons and Combat Robots
Autonomous weapons, broadly defined as weapons and robots able to select targets and engage in combat without human input, are being developed by the world’s foremost military powers including the United States, the United Kingdom, Israel, South Korea, China, and Russia.
Experts have predicted that “technology allowing a pre-programmed robot to shoot to kill, or a tank to fire at a target with no human involvement, is only years away.” Bonnie Docherty, who is a senior arms division researcher at Human Rights Watch, said, “Machines have long served as instruments of war, but historically humans have directed how they are used. Now there is a real threat that humans would relinquish their control and delegate life-and-death decisions to machines.”
These combat robots promise great military and security advantages. However they are operationalized, they will likely create future battlefield conditions in which far fewer soldiers need to be deployed in life-threatening situations. They can be designed to act and react more quickly than humans, and to identify, acquire, prioritize, and eliminate targets more efficiently and safely. They would not need to be fed, would not disobey orders or suffer from post-traumatic stress, and could be mass-produced.
However, they also pose a risk to human rights and international humanitarian law. Autonomous weapons have not yet demonstrated the ability to distinguish between a combatant, a noncombatant, and a surrendering combatant, which puts the legal principles of civilian protection and the prevention of unnecessary harm to combatants in great jeopardy. Nor have autonomous weapons demonstrated that they can determine how much force is proportionate in a given situation.
Among other concerns, this has led Human Rights Watch, the Future of Life Institute, the International Committee for Robot Arms Control, the Women’s International League for Peace and Freedom, the International Committee of the Red Cross (ICRC), and the Campaign to Stop Killer Robots to adopt positions calling for various prohibitions and restrictions on the development and usage of autonomous weapons.
These efforts to establish arms control for autonomous weapons have, up to now, made little progress—no agreement, preemptive ban, or regulation has been implemented, and military powers continue to develop autonomous weapons unabated.
If arms control for autonomous weapons is to be effectively established, activists must be realistic about what they can accomplish. Past efforts to control anti-personnel landmines and cluster munitions have demonstrated how difficult it is to prohibit a weapon which provides great military advantage.
A ban on autonomous weapons cannot be implemented. States will continue to pursue autonomous weapons and robots, and they will do so fiercely. Instead, the best hope for controlling autonomous weapons lies in international regulation. Specifically, the United Nations Convention on Certain Conventional Weapons (CCW) must be called upon to adopt a new protocol.
For this ‘Protocol VI on Autonomous Weapons and Robots’ to succeed, it must—instead of attempting an outright prohibition of autonomous weapons, which would never gain international support—regulate the design and purpose of autonomous weapons; accommodate existing security threats, commitments, and geopolitics; and specify the environments and contexts in which autonomous weapons may be used.
Autonomous weapons designed for defensive and non-lethal purposes should be explicitly supported. This would include autonomous weapons designed to defend military bases, submarines, naval vessels, and soldiers on the battlefield, as well as those designed to select and target only material objects such as structures and munitions. This approach would enable states to utilize autonomous weapons for legitimate military security while reducing the possibility of indiscriminate violence.
Regulations should also consider the nature of a weapon’s autonomy and how that should correspond to the lethality it is designed to inflict. As a weapon’s autonomy increases, so should the safeguards against accidents and indiscriminate violence. If regulations stipulated that all autonomous weapons and robots be designed with emergency safety/termination protocols and human monitoring in place, and that ‘fully’ autonomous weapons not be used to apply lethal force intentionally, the same offensive military objectives could be achieved while greatly reducing the possibility of autonomous weapons unstoppably and indiscriminately targeting noncombatants. This measure would likely gain support, as the United States and the United Kingdom have already indicated that they have no interest in developing uncontrollable autonomous weapons for lethal purposes.
Regarding environmental and contextual usage, John Lewis has suggested that while autonomous weapons are still in their infancy, they should operate only in isolated environments and in a defensive context. Michael Schmitt has similarly argued that autonomous weapons be deployed only in remote areas, or attached to naval vessels.
Kenneth Anderson and Matthew Waxman have suggested that they be used only in operational environments with few to no civilians, such as from a submarine. Jean-Baptiste Jeangène Vilmer distinguished urban environments from underwater, marine, aerial, and space environments, where the risk of indiscriminate violence against noncombatants is far lower.
Gary Marchant and others have proposed that autonomous weapons be used only on specified missions, such as long-range aerial bombardments. Voicing a similar sentiment, Peter Potsma has advocated that they be permitted in high-intensity conflicts but prohibited in counterinsurgency operations.
If autonomous weapons are to be used in urban centers—not unthinkable, given that many terrorists hide among civilian populations—then regulations can stipulate that the autonomous weapons or robots be accompanied by a squad of soldiers, or be used only in a defensive or non-lethal combat capacity.
The possibilities are numerous and promising. Rather than a futile attempt to impede autonomous technology, a dialogue between states about the ethical and practical usage of autonomous weapons must be encouraged.
Although arms control for autonomous weapons is in gridlock, these weapons have not yet been deployed—and so they have not yet killed. They have not yet been used in inhumane ways that will later be regretted and condemned. There is still an opportunity to control autonomous weapons effectively, and perhaps save many lives in the process. The time to act is now.