His talk was mostly lighthearted, which was nice, but as far as I’m concerned, his work brings some serious questions to the table. Drone and robot warfare, the stuff of countless science fiction stories, is a life-and-death issue for our time. Yet weapons were largely absent from his talk until the Q&A period. Hoping to steer the conversation toward something a little less fun and easy, I asked him, “Did you see any good doomsday device interfaces?” His answer referred to two different doomsday devices, both rockets with conversational interfaces. Here’s the first part of his answer to my question:
We did, actually. The funkiest, craziest, silliest one is from a film called Dark Star, it was made during the seventies, where the doomsday device, literally they were these rockets. This ship traveled around space destroying planets, and I can’t remember exactly why, maybe to make room for thoroughfares–like you would need those in space. I can’t remember why. But they had these doomsday, planet-destroying missiles, and they talked to the missiles. The missiles have not just voices, but personalities. And they have to talk this missile out of prematurely exploding and killing everyone on the ship. Because some glitch released the missile early, and the missile is all set and raring, it’s sort of like a basketball player who can’t wait to get into the game, and yet the game’s not happening, and they have to convince the missile not to fire.
Let’s consider a couple of important concepts that come to light in his response. First, the doomsday device he cited was programmed to carry out its mission, and the great fear was that the device would execute that mission in a way contrary to the operator’s intent. Second, the doomsday device had a form of failsafe, in that the operator could attempt to reason with it, to convince it to deviate from the execution of its mission. I dug around in The Economist to find the article that was the source of a passage about robot warfare I remembered reading:
One way of dealing with these difficult questions is to avoid them altogether, by banning autonomous battlefield robots… Campaign groups such as the International Committee for Robot Arms Control have been formed in opposition to the growing use of drones. But autonomous robots could do much more good than harm. Robot soldiers would not commit rape, burn down a village in anger or become erratic decision-makers amid the stress of combat.
When I read that article a few months ago, the statement struck me as a fundamentally narrow-minded evaluation of the role of humans in warfare. I wonder how many times a village was spared precisely because a human’s “erratic decision-making” caused a soldier to take pity on its people. I wonder how many times a commanded execution of a prisoner did not happen because a person felt mercy that a robot cannot have. The true danger of these devices, which are already emerging from science fiction into reality, is that they will carry out their orders well. The true fear should be that we will not be able to reason with them. How do you beg for mercy from an unmanned drone hovering a mile above you, while it fires Hellfire missiles at you for a reason you will probably never know?
[This is an excerpt from a longer essay about his talk that appears on my blog at http://www.karlward.com/blog/2012/10/talking-to-doomsday-devices/ ]