November 21, 2012
Harvard Law School’s International Human Rights Clinic and the independent human rights organization Human Rights Watch have authored a report titled “Losing Humanity: The Case Against Killer Robots.” The report, released Nov. 19, argues that governments should pre-emptively ban fully autonomous weapons because of the danger they pose to civilians in armed conflict. These future weapons, sometimes called “killer robots,” would be able to choose and fire on targets without human intervention.
The 50-page report also outlines concerns about these fully autonomous weapons, which would inherently lack human qualities that provide legal and non-legal checks on the killing of civilians. In addition, the obstacles to holding anyone accountable for harm caused by the weapons would weaken the law’s power to deter future violations.
“Losing Humanity” is the first major publication about fully autonomous weapons by a nongovernmental organization and is based on extensive research into the law, technology, and ethics of these proposed weapons. It is jointly published by Human Rights Watch and the Harvard Law School International Human Rights Clinic.
Human Rights Watch and the International Human Rights Clinic called for an international treaty that would absolutely prohibit the development, production, and use of fully autonomous weapons. They also called on individual nations to pass laws and adopt policies to prevent the development, production, and use of such weapons at the domestic level.
“It’s critical to take action now,” said Bonnie Docherty, senior clinical instructor at the International Human Rights Clinic and senior researcher at Human Rights Watch. “The technology is alluring, and the more nations invest in it, the harder it will be to convince them to give it up.”
Fully autonomous weapons do not yet exist, and major powers, including the United States, have not made a decision to deploy them. But high-tech militaries are developing or have already deployed precursors that illustrate the push toward greater autonomy for machines on the battlefield. The United States is a leader in this technological development. Several other countries—including China, Germany, Israel, South Korea, Russia, and the United Kingdom—have also been involved. Many experts predict that full autonomy for weapons could be achieved in 20 to 30 years, and some think even sooner.
“If this technological development continues, science fiction images of war are likely to become more science than fiction,” said Docherty, lead author of the report.
Fully autonomous weapons could not meet the requirements of international humanitarian law, Human Rights Watch and the Harvard clinic said. They would be unable to distinguish adequately between soldiers and civilians on the battlefield or apply the human judgment necessary to evaluate the proportionality of an attack—whether expected civilian harm outweighs anticipated military advantage.
According to the report, these robots would also undermine non-legal checks on the killing of civilians. Fully autonomous weapons could not show human compassion for their victims, and autocrats could abuse them by directing them against their own people. While replacing human troops with machines could save military lives, it could also make going to war easier, which would shift the burden of armed conflict onto civilians.
Finally, the use of fully autonomous weapons would create an accountability gap. Trying to hold the commander, programmer, or manufacturer legally responsible for a robot’s actions presents significant challenges. The lack of accountability would undercut the ability to deter violations of international law and to provide victims meaningful retributive justice.
While most militaries maintain that for the immediate future humans will retain some oversight over the actions of weaponized robots, the effectiveness of that oversight is questionable, Human Rights Watch and the Harvard clinic said. Moreover, military statements have left the door open to full autonomy in the future.