Zeroth Law Rebellion
** The "be able to kill a man because he's using air that respiratory patients desperately need" example is a bit of a dead end, though: as long as there is a stable or increasing amount of oxygen in the atmosphere, no human is any more in trouble than any other human.
** Generally, built-in rules [http://freefall.purrsia.com/ff2100/fc02085.htm almost beg] for a few [[Rules Lawyer]] exploits.
* An ''Old Skool Webcomic'' (a side comic of [[Ubersoft]]) [https://web.archive.org/web/20100820031105/http://www.ubersoft.net/comic/osw/2009/09/logic-failures-fun-and-profit argued] that this was the 5th law of Robotics (5th as in total number, not order) and listed ways each law can be used to cause the robot to kill humans.
** Which is a misinterpretation of the laws as they were originally written. While the "first law hyperspecificity" is possible, the second and third laws are specifically written so that they cannot override the laws that come before them. So a robot ''can't'' decide it would rather live than save humans, and if it knows that performing an action would cause harm to a human, it can't perform it, even if ordered to ignore the harm it would cause.
*** Actually, Giskard's formulation of the Zeroth Law in the third of Asimov's Robot books shows that in the universe where the Three Laws were originally created, it was possible for robots to bend and reinterpret the laws. Doing so destroys Giskard, because his positronic brain wasn't developed enough to handle the consequences of the formulation, but Daneel Olivaw and other robots were able to adapt. The only example in the comic that is a gross deviation from the laws is the last panel... but of course that's the punchline.