** Of course, all this works properly [http://freefall.purrsia.com/ff2200/fc02200.htm only as long as other security measures prevent tampering with the software], and physical access to hardware is the point where security measures traditionally split into the "minor delay" and "[http://freefall.purrsia.com/ff2300/fc02235.htm fool's errand]" categories.
** The determination of "human" [http://freefall.purrsia.com/ff2600/fc02547.htm had to] err on the safe side, and combined with a learning AI (let alone one based on an organic brain) this leads to the term getting stretched considerably. Especially since the developer encouraged this outcome.
** The First Law being a free-will override, robots are not inclined to stop and think about what they are doing. Which is why Florence learned to avoid anything that may trip "hurr, {{
*** Also, if [[Morton's Fork|either choice]] may harm humans, robots will throw themselves one way or the other, [http://freefall.purrsia.com/ff2300/fc02222.htm then other robots must try to stop them] from taking dangerous actions… and so on, [[Dwarf Fortress|"loyalty cascade"]] style. And then confused humans are likely to attempt to solve the immediate problems by giving uncoordinated orders, which would only multiply the chaos.
* ''[[21st Century Fox (webcomic)|21st Century Fox]]'' gives all robots the Three Laws (though since [[Funny Animal|no one's human]], I expect the First Law is slightly different). Unfortunately, saying the phrase "[[Bill Clinton|define the word 'is']]" or "[[Richard Nixon|I am not a crook]]" locks AIs out of their own systems and allows anyone, from a teenager looking at "nature documentaries" to a suicide bomber, to do whatever they want with them.