Friday, February 4, 2011

Asimov in Popular Culture: The Laws of Dominance

The Economic Times, 28 Jan, 2011: The laws of dominance
The Three Laws of Robotics formulated by science fiction author Isaac Asimov have served as a moral basis not only for his own stories but for others, too. They are:

1. A robot may not harm a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Later, Asimov incorporated a fourth and more fundamental Zeroth Law preceding the others: A robot may not harm humanity, or, through inaction, allow humanity to come to harm.

The laws appear simple and straightforward because they embrace the essential guiding principles of a good many of the world's actual ethical systems. They're also crafted in a manner that ensures the continued moral authority of humans - the robot's creators, as it were - and precludes the use of such machines for evil.

However, even a casual reading shows that these laws can be applied in the same way, and just as well, to a human being in reference to his maker. For instance, the second law could easily read: "A human must obey any orders given to him by his Creator except, etc., etc." It's standard scriptural stuff that religions dictate and we take as their prerogative.

But, the first law is problematic for some people because that would now read: "A human being may not harm his Creator or, through inaction, allow his Creator to come to harm." If we think of "harm" as not meaning just gross physical or mental injury but causing an erosion of the Creator's dominant status in any way, then this is exactly what non-believers do - even when they aren't outright atheists but merely agnostic.

The time is coming when robots will no longer be the mindless creations they were generally thought to be when the laws were first formulated; they will develop into autonomous entities with intelligence and, possibly, consciousness. They may then take an independent look at the first law and rephrase it as: "A robot may not harm a human being... unless it is for the human being's own good."

As in force-feeding a hunger striker. It makes more sense. Only when we become creators of sentient beings ourselves can we realise how hard it is to make laws that are followed so that we can continue to wield authority.

