Once again, the social value of science fiction has proven helpful in the real world:
A few weeks ago, the United Nations affirmed Isaac Asimov's First Law of Robotics: "A robot may not injure a human being." Christof Heyns, the U.N. special rapporteur on extrajudicial, summary, or arbitrary executions, said as much in a May 29 speech to the Human Rights Council in Geneva calling for a moratorium on the development of lethal robots. His argument followed two thoughtful paths: concern that such robots cannot be as discriminating in their judgments as humans, and concern that their very existence might make war too easy to contemplate. As he summed up the grim prospect of robot soldiers, "War without reflection is mechanical slaughter."
The aptly named Campaign to Stop Killer Robots has endorsed the Heyns report’s recommendations, namely:
- Put in place a national moratorium on lethal autonomous robotics. (Paragraph 118)
- Declare a commitment to abide by International Humanitarian Law and international human rights law in all activities surrounding robotic weapons and put in place and implement rigorous processes to ensure compliance at all stages of development. This should be done both unilaterally and through multilateral fora. (Paragraph 119)
- Commit to being as transparent as possible about internal weapons review processes, including metrics used to test robotic systems. States should at a minimum provide the international community with transparency regarding the processes they follow (if not the substantive outcomes) and commit to making the reviews as robust as possible. (Paragraph 120)
- Participate in international debate on lethal autonomous robotics and be prepared to exchange best practices with other States, and collaborate with the High Level Panel on lethal autonomous robotics. (Paragraph 121)
All of this comes on the heels of a Department of Defense directive to pause the development of "autonomous and semi-autonomous weapon systems that could lead to unintended engagements." The pause, of course, can be unpaused any time the DoD wants. No word on whether any such robots, commonly known as LARs (lethal autonomous robots), will ever be programmed with Asimovian directives.
I wouldn’t hold my breath.
If you'd like a glimpse of what happens next, check out Asimov's vision. Maybe he'll prove prescient: