
Wednesday, July 4, 2012

Robot Wars

From the Irish Times: Robot wars

FANS OF science fiction, and notably of such writers as Isaac Asimov, will long be familiar with the looming ethical challenges posed by the development of “intelligent” machines capable of directing themselves. Is there a need to set limits to autonomous action, to hardwire into robots moral constraints akin to those supposedly guiding human actions? Asimov’s response was his “Three Laws of Robotics”, the first of which was that “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
All good fun, and the stuff of fantasy. Well, not any more. Advances in battlefield technology mean that a range of autonomous, “thinking” killing machines are soon likely to be available to commanders. Indeed some, of a cruder variety, have already been deployed. Now the question is, should they be banned? Should human intervention and responsibility be a requirement in any decision to kill, and be enshrined in the rules of war and humanitarian law? Wendell Wallach, a scholar and consultant at Yale’s Interdisciplinary Centre for Bioethics (co-author of Moral Machines: Teaching Right From Wrong), says yes.

Part of the problem is where to draw the line. Defensive weapons like Patriot and cruise missiles can already be set to fire automatically when they spot an incoming missile. Landmines, likewise, can be set to detonate automatically. Or there’s Samsung Techwin’s remote-operated sentry “bot” that works in tandem with cameras and radar systems in the Korean Demilitarised Zone. Currently the robots cannot fire on targets automatically, requiring human permission to attack, but a simple change could remove that safeguard.

The US air force is adapting some of its systems so that human intervention would occur only to stop inappropriate action by an automated weapon, rather than specifically to sanction a killing. Is that crossing the ethical boundary? And there are concerns that, quite apart from the ethical issues, such weapons may change the dynamic of war, making escalation into outright conflict easier. Do they make “friendly fire” or non-combatant casualties more likely?

The theoretical advantage for the deployer, of course, is that war could be fought “cleanly”, with minimal human casualties – and hence political fallout – on its side. Perhaps, however, the issue should be added to the international arms control agenda, cumbersome and slow-moving as it may be. The Asimov convention?

