When Asimov’s Robots Encounter the Laws of War
by Michael H. Hoffman
It’s been culturally ingrained since 1942 that robots should never harm human beings. Isaac Asimov first introduced his famed laws of robotics in a science fiction story published that year, and they stand the test of time as an influence on popular thinking. The modern “transhuman” movement is pressing for artificial enhancement of natural human abilities. The view that robots should do no harm is now complemented by an emerging view that the engineering of human beings should do no harm either.
In consequence, military legal and ethical standards are undergoing healthy scrutiny to determine whether they are sufficient to address emerging artificial intelligence (AI) capabilities, and whether those capabilities will be complemented by a system sufficient to maintain command and control over them. Not yet receiving as much attention are the ethical implications should AI and transhuman warfighters gain some measure of unplanned-for autonomy. Beyond that, another challenge calling for attention is the ethical implications should feedback from AI and transhuman warfighters adversely influence military decision making. The ethical implications of decisions and actions taken by autonomous AI and transhuman military actors, and their potential influence on military decision making, are the focus of this paper.
The cultural foundation for modern exploration of ethics and robotics first appeared in 1942 in Asimov’s story “Runaround,” which ultimately found its way into his famed collection I, Robot…
About the Author:
Michael H. Hoffman is an associate professor with the U.S. Army Command and General Staff College. He holds a J.D. from Southern Methodist University School of Law. His current research focuses on the legal and ethical implications of advanced military and space technologies.