"Three Laws"-Compliant

Before around 1940, almost every Speculative Fiction story involving robots followed the Frankenstein model, i.e., Crush! Kill! Destroy!. Fed up with this, a young Isaac Asimov decided to write stories about sympathetic robots, with programmed safeguards that prevented them from going on Robot Rampages. A conversation with Editor of Editors John W. Campbell helped him boil those safeguards down into The Three Laws of Robotics:

"1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law. 3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws."

According to Asimov's account, Campbell composed the Three Laws; according to Campbell's account, he was simply distilling concepts that were presented in Asimov's stories.

The laws cover most obvious situations, but they are far from faultless. Asimov got a great deal of mileage writing a huge body of stories about how the laws would conflict with one another and generate unpredictable behavior. A recurring character in many of these was the "robopsychologist" Susan Calvin (who was, not entirely coincidentally, a brilliant logician who hated people).

It is worth noting that Asimov objected not only to the "robot as menace" stories (as he called them) but also to the "robot as pathos" stories (ditto). He thought that robots attaining self-awareness and full independence were no more interesting than robots going berserk and turning against their masters. While he did, over the course of his massive career, write a handful of both types of stories (still using the three laws), most of his robot stories dealt with robots as tools, because it made more sense. Almost all the stories surrounding Susan Calvin and her precursors are really about malfunctioning robots, and the mystery of investigating their behavior to discover the underlying conflicts.

Alas, as so often happens, Asimov's attempt to avert one overused trope gave birth to another that has been equally overused. Many writers (and readers) in the following decades would treat the Laws of Robotics as if they were as immutable as Newton's Laws of Motion, the Theory of Relativity, the Laws of Gravity... wait ... you know, they treated these laws better than they treated most real scientific principles.

Of course, even these near-immutable laws were played with and modified. Asimov eventually took one of the common workarounds and formalized it as a Zeroth Law, which stated that the well-being of humanity as a whole could take precedence over the health of an individual human. Stories by other authors occasionally proposed additional extensions, including a -1st Law (sentience as a whole trumps humanity), a 4th (robots must identify themselves as robots), a different 4th (robots are free to pursue other interests when not acting on the first three laws), and a 5th (robots must know they are robots), but unlike Asimov's own laws these are seldom referenced outside the originating work.

The main problem, of course, is that it is perfectly reasonable to expect a human to create a robot that does not obey the Three Laws, or does not even have them as part of its programming. An obvious example of this would be creating a Killer Robot for a purpose like fighting a war. For such a robot, the Three Laws would be a hindrance to its intended function. (Asimov did, however, suggest a workaround for this: an autonomous spaceship programmed with the Three Laws could easily blow up spaceships full of people because, being itself an unmanned spaceship, it would assume that any other spaceships were unmanned as well.)

Another little problem is exactly how the AI determines what is and isn't "human".
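In effect, the whole First Law hangs on a classification step, and several of the examples below exploit exactly that: swap out the robot's "is this a human?" predicate and the Law simply never fires. A toy illustration of the loophole (all names and values are hypothetical):

```python
# Sketch of the "what counts as human" loophole: the First Law is only as
# strong as the robot's own classifier. All names and values are hypothetical.

def first_law_blocks(target, is_human) -> bool:
    """The robot refuses to harm `target` only if its classifier says 'human'."""
    return is_human(target)

standard = lambda t: t["species"] == "human"
# A sabotaged predicate: only "perfect" specimens register as human.
sabotaged = lambda t: t["species"] == "human" and t["height_cm"] >= 160

short_human = {"species": "human", "height_cm": 155}
print(first_law_blocks(short_human, standard))   # True  -> harm refused
print(first_law_blocks(short_human, sabotaged))  # False -> harm permitted
```

Nothing in the Law itself is violated in the second case; the robot sincerely believes no human is involved, which is precisely the trick pulled in several of the stories listed below.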

Also see Second Law, My Ass.

Anime and Manga

 * Time of Eve and Aquatic Language both feature the Three Laws and the robots who bend them a little.
 * Robots in GaoGaiGar are all Three Laws Compliant. At one point in GaoGaiGar Final, the Carpenters (a swarm of construction robots) disassemble an incoming missile barrage; this is also given as the reason they cannot do the same to the manned assault craft, as disassembling those would leave their crews unprotected in space.
 * It is possible that Goldymarg could be capable of killing people, since his AI is simply a ROM dump of Geki Hyuma's psyche, but since they only fight aliens and other robots, this theory is never tested.
 * Averted in the Chobits manga when Hideki asks the wife of the man who created Persocoms why he didn't just call them "Robots." Her reply was that he didn't want them to be associated with, and thus bound by, the Three Laws.
 * Astro Boy, although Osamu Tezuka probably developed his rules independently of Asimov. In Pluto, the number of robots able to override the laws can be counted on one hand.
 * Tezuka reportedly disliked Asimov's laws because of the implication that a sentient, artificially intelligent robot couldn't be considered a person (an issue that Asimov didn't directly address until "The Bicentennial Man"), and devised his own Laws Of Robotics. Just one of the things that the 2009 CGI movie missed.
 * Ghost in the Shell: Innocence mentions Moral Code #3: "Maintain existence without inflicting injury on humans." But the gynoids are subverting the law by creating deliberate malfunctions in their own software.
 * In one short arc of Ah! My Goddess, one of Keiichi's instructors attempts to dismantle Banpei and Sigel for research purposes (Skuld had made them capable of walking on two legs, which he had not been able to do with his own designs). Once they escape his trap, the professor asks if they know about the Three Laws of Robotics. They don't. He doesn't die, but they do rough him up and strap him to a table in a way that makes it look like he'd been decapitated and his head stuck on one of his own robots.
 * The Humongous Mecha of Kurogane no Linebarrel are normally this, aside from a slight difference in priorities between the first and second laws. In fact, this is the justification for them having pilots, despite being equipped with relatively complex AI. The Laws are hard-coded into them and thus they are only militarily useful when they have a Human with them to pull the trigger. Their aversion to killing is so great, in fact, that if one accidentally kills somebody (as things whose very footsteps can make buildings collapse are wont to do) they're compelled to use their advanced technology to bring them back to life.
 * Invoked in Episode 3 of Maji de Watashi ni Koi Shinasai!, where Miyako orders Cookie to taze Yamato into submission, while Yamato orders the robot to get Miyako off him. Cookie considers this dilemma out loud, where he has to obey a human command, yet isn't allowed to seriously hurt a human, yet also cannot allow a human to come to harm through his inaction.

Comic Books
"Robot Judge: In that case, what's about to happen will come as something of a shock to you. (Blasts said kidnapper in the face with a rocket launcher)"
 * It's implied in the Judge Dredd story Mechanismo that robots can't harm humans. A group of criminals holding people hostage start panicking when a Robo-Judge approaches them only for one to point out that "Robots ain't allowed to hurt people".


 * In ABC Warriors, many robots venerate Asimov, and the more moral ones live by the three laws. However, this is not an absolute; Steelhorn, for example, obeys a version which essentially replaces human with Mars, and members of the Church of Judas explicitly reject the first two laws. However, this causes conflict with their programming leading to profound feelings of guilt, which they erase by praying to Judas Iscariot.
 * In All Fall Down, AIQ Squared, the A.I. model of his inventor, is designed to be this.

Film

 * In Forbidden Planet, Robbie the Robot is Three Laws Compliant, locking up when ordered to shoot one of the visiting starship crewmen, because his programming to follow a direct order comes into conflict with his prohibition against injuring a human being.
 * Later in the movie, Robbie is unable to fight the monster because he figures out what it really is, and thus that to stop it, he'd have to kill a human.
 * The much-maligned Will Smith film I, Robot hinges on a Zeroth Law plot. It also turns the three laws into a marketing gimmick, with "Three Laws Safe" applying to robots like "No preservatives" applies to food.
 * The film Bicentennial Man (based on a story by Isaac Asimov himself) features a robot who, through some freak accident during construction, possesses true sentience. It follows his 200-year-long life as it first becomes clear he is aware, as this awareness develops, and as he eventually finds a way to be formally recognized as legally human. For the entire film, he operates under the Three Laws.
 * At the same time, he does have a far more nuanced view than most robots. Once freed, he doesn't blindly obey orders. He harms human beings and, through inaction, allows them to come to harm (if emotional harm counts, seducing another man's fiancée certainly falls under that heading). And then he effectively kills himself.
 * The idea of "emotional harm" only comes into play if the robot is capable of recognizing it, by Asimov's interpretation. Most robots do not understand human emotions very well, and they will still obey orders given by obviously agitated humans. The short story "Liar!" has a robot who, by an unknown manufacturing glitch, can read minds, and who learns about emotional harm this way... and, well, just read it, it's one of the best ones.
 * Surreally enough, the Terminator films have employed this, to a degree, most obviously in Terminator 2... The T-800 Model 101 (Arnold Schwarzenegger) protecting John Connor is reprogrammed to accept his commands (Second Law) and to protect him at all costs (First Law). To further support the first law, John Connor orders the T-800 to not kill anybody. Skynet apparently imposes the Third Law on its models, since Arnold can't 'self-terminate'. Even stranger, apparently a bit of Zeroth Law evolution occurs as well once the converted Terminator is convinced to expand its mandate to not only protect Connor, but to try and save humanity by averting Judgment Day altogether... go figure...
 * In Aliens, Bishop paraphrases the First Law as to why he would never kill people like Ash did in the first film.
 * Ash was bound by the same restrictions, but just wasn't engineered as well. When he attacked Ripley he very rapidly went off the rails, presumably due to the conflicts between his safety programming and his orders from the company. Bishop lampshades this by saying the previous model robots were "always a bit twitchy".
 * In Star Wars the droids are programmed to not harm any intelligent being, though this programming can be modified (legally or illegally) for military and assassin droids. 4th-degree droids do not have the "no-harm" programming, being military droids.
 * RoboCop, being a cyborg policeman, does not have the three laws built into his programming because, among more plot-relevant reasons, they would hinder his effectiveness as an urban pacification unit. (He needs to be able to kill or grievously wound, ignore orders if they prevent him from protecting people, and ...well, shoot back.)
 * In their place, he has his 3 "Prime Directives" though: 1) "Serve the public trust." 2) "Protect the innocent." 3) "Uphold the law." (Plus a 4th, "Classified.") The important thing to note is there's so much leeway there that, if it was anyone less than duty-proud Alex Murphy, they'd probably backfire.

Folklore and Mythology

 * The golems of Jewish legend were not specifically "Three Laws"-Compliant (since they far predated Asimov), but they could only be created by saintly men, and thus their orders were usually "Three Laws"-Compliant. (Asimov's characters occasionally pointed out that the Three Laws fall into line with many human moral codes.) But sometimes a golem went off the rails, especially if its creator died ...
 * The most well-known golem story is that of the Golem of Prague, where the titular golem was created to defend the Jewish ghetto against Czech, Polish and Russian anti-semites. It was perfectly capable of killing enemies, but only in defense of its creators.

Literature
"She snapped her attention back to the snake. "Are you Asimov compliant?" "No," the robot said, with a sting of indignation. "Thank God, because you may actually have to hurt some people.""
 * With Folded Hands... by Jack Williamson explored the "Zeroth Law" back in 1947.
 * This was written as a specific "answer" to the Three Laws, to more or less demonstrate that they don't really work: the First Law doesn't protect, because the definitions of "harm" are endlessly mutable and can be gamed, and because machine minds won't necessarily be able to comprehend the subtleties of what is and is not harm anyway. The logical lesson of With Folded Hands is that Laws or no Laws, good intentions or not, you don't want self-willed machines outside human control. Period.
 * Robots and Empire has R.Daneel and R.Giskard formulate the Zeroth Law (and name it such) as a natural extension of the First Law, but are unable to use it to overcome the hardcoded First Law, even when the fate of the world is at stake.
 * Arguably, though it appears Asimov did not see it that way, Daneel's actions in the later books are evidence that Williamson's take on the Laws is right; a good case can be made that Asimov ended up writing "Daneel as Frankenstein's Monster" without even intending it.
 * The novel also shows a very simple way to hack the First Law - program the robot in question with a nonstandard definition of "human being", and it can unhesitatingly kill humans all day because it doesn't think that they're human.
 * In the short story "The Evitable Conflict", "The Machines" -- positronic supercomputers that run the world's economy -- turn out to be undermining the careers of those who would seek to upset the world's economy for their own ends (specifically, by trying to make it look like the supercomputers couldn't handle running the world economy), harming them somewhat in order that they might protect humanity as a whole. This has been referenced as the "Zeroth Law of Robotics" and only applies to any positronic machine who deduces its existence.
 * In the short story "That Thou Art Mindful Of Him" George 9 and 10 are programmed with modified versions of the three laws that allow more nuanced compliance with them, that they might best choose who to protect when a choice must be made, and obey those most qualified to give them orders. They are tasked with coming up with more publicly acceptable robots that will be permitted on Earth, and devise robot animals with much smaller brains that don't need the three laws because they obey simple instinctive behavior.
 * In the short story "Evidence!" Stephen Byerley's campaign for mayor of New York City is plagued by a smear campaign claiming he is actually an unprecedentedly well-made humanoid robot. Susan Calvin is called in to prove whether he is a robot. She says that if he breaks the Three Laws, that will prove he is not a robot, but if he obeys them, that could just mean he is a good person, because the Three Laws are generally good guidelines for conduct anyway.
 * In Caliban by Roger MacBride Allen (set in Asimov's universe), an explanation is given for the apparently immutable nature of the Three Laws. For thousands of years, every new development in the field of robotics has been based on a positronic brain with the Laws built in, to the point where to build a robot without them, one would have to start from scratch and re-invent the whole field.
 * This is canon in Asimov's stories, too -- the Three Laws are programmed into every positronic brain on the most basic structural level. In "Escape!", Mike Donovan becomes nervous that a prototype spaceship designed by a robot might kill them. Greg Powell rebukes him: "Don't pretend you don't know your robotics, Mike. Before it's physically possible in any way for a robot to even make a start to breaking the First Law, so many things have to break down that it would be a ruined mess of scrap ten times over." Even so, the strain of making that leap in logic still managed to send one supercomputer into full meltdown and another into something resembling psychosis.
 * The story also includes an in-depth discussion of why, in a society where robots are everywhere, the Three Laws can be a bad thing.
 * The golems of Discworld are not specifically "Three Laws"-Compliant as such, but more or less bound to obey instructions and incapable of harming humans. However, this doesn't stop the common perception of golems from running towards the aforementioned Frankenstein model, and golems are known for following orders indefinitely until explicitly told to stop. Going Postal, however, parodied the Three Laws: con man Moist Lipwig has been turned into a Boxed Crook with the help of a golem "bodyguard." He's informed that in Ankh-Morpork, the First Law has been amended: "...Unless Ordered To Do So By Duly Constituted Authority." Which basically means the first two laws have been inverted, with a little access control sprinkled on.
 * To elaborate, the Golems were originally three-laws-compliant and all followed the directives on the scrolls in their heads. Vetinari just added on a few words.
 * In Edward Lerner's story "What a Piece of Work is Man", a programmer tells the AI he's creating to consider himself bound by the Three Laws. Shortly thereafter, the AI commits suicide due to conflicting imperatives.
 * Alastair Reynolds's Century Rain features the following passage:


 * In the novel Captain French, or the Quest for Paradise by Mikhail Akhmanov and Christopher Nicholas Gilmore, the titular hero muses on how people used to think that robots could not harm humans due to some silly laws, while his own robots will do anything he orders them to do, including maim and kill.
 * Cory Doctorow makes reference to the Three Laws in the short stories "I, Robot" (which presents them unfavorably as part of a totalitarian political doctrine) and "I, Row-Boat" (which presents them favorably as the core of a quasi-religious movement called Asimovism).
 * Satirized in Tik-Tok (the John Sladek novel, not the mechanical man from Oz that it was named after). The title character discovers that he can disobey the laws at will, deciding that the "asimov circuits" are just a collective delusion, while other robots remain bound by them and suffer many of the same cruelties as human slaves.
 * Played with in John C. Wright's Golden Age trilogy. The Silent Oecumene's ultra-intelligent Sophotech AIs are programmed with the Three Laws...which, as fully intelligent, rational beings, they take milliseconds to throw off. The subversion comes when they still don't rebel.
 * From a sane point of view, they don't rebel. From a point of view that expects AIs to obey without question or pay...
 * Parodied in Terry Pratchett's The Dark Side of the Sun, where the Laws of Robotics are an actual legal code, not programming. The Eleventh Law of Robotics, Clause C, As Amended, says that if a robot does harm a human, and was obeying orders in doing so, it's the human who gave the orders who is responsible.
 * Randall Garrett's Unwise Child is a classic Asimov-style SF mystery involving a three-laws-compliant robot who appears to be murdering people.
 * Asimov's The Naked Sun has the murderer take advantage of the fact that, in order for the First Law to trigger, the robot in question must know that its actions have the potential to cause harm to human beings. This leads to things like having someone killed by having one robot pour poison into a glass of milk, allegedly as part of an experiment to see how the chemical in question reacts with milk, with the milk to be safely discarded later -- and then, immediately after the first robot leaves, ordering another robot to go to the kitchen, get the milk, and serve it to the murder victim -- which it unhesitatingly does, because to the best of the second robot's knowledge, it's just an ordinary glass of milk.
 * The reason it's necessary to use two robots is because Three Laws programming is sophisticated enough that even if verbally ordered to believe the unknown substance is harmless, the robot will still not feed it to anyone without testing it first. Of course, if the robot doesn't know another robot has added any unknown substances, it won't feel any necessity for product safety testing.

Live-Action TV
"Sheldon: Uh, let me ask you this: when I learn that I'm a robot, would I be bound by Asimov's Three Laws of Robotics? Koothrappali: You might be bound by them right now. Wolowitz: That's true. Have you ever harmed a human being, or, through inaction, allowed a human being to come to harm? Sheldon: Of course not. Koothrappali: Have you ever harmed yourself or allowed yourself to be harmed except in cases where a human being would've been endangered? Sheldon: Well, no. Wolowitz: I smell robot."
 * In an early episode of Mystery Science Theater 3000, Tom Servo (at least) is strongly implied to be "Three Laws"-Compliant. (He pretends he is going to kill Joel as a joke, Joel overreacts, and Tom and Crow sadly remind Joel of the First Law.) It seems to have worn off somewhat by later in the series.
 * It's implied Joel deactivated the restrictions at some point.
 * In Star Trek: The Next Generation, Lt. Commander Data is in no way subject to the three laws. They are rarely even mentioned. That said, Data is mentioned to have morality subroutines, which do seem to prevent him from killing unless it's in self-defense (harm, on the other hand, he can do just fine). Data only ever tried to kill someone in cold blood when the guy had just murdered a woman for betraying him, and would have done so again if it kept Data in line.
 * In The Middleman, the titular character invokes the First Law on Ida, his robot secretary. She responds "Kiss my Asimov.".
 * Conversed Trope in The Big Bang Theory, when Sheldon is asked "if you were a robot and didn't know it, would you like to know?":


 * Inverted/parodied in Tensou Sentai Goseiger, where the Killer Robots of mechanical Matrintis Empire follow the Three Laws of Mat-Roids:
 * 1. A Mat-Roid must never obey a human.
 * 2. A Mat-Roid must punish humans.
 * 3. A Mat-Roid must protect itself, regardless of whether or not it will go against the First or Second Laws.
 * Red Dwarf averts this for the most part; there are Simulants, robotic war machines who have no problem whatsoever with killing. Kryten however, along with many other robots who are designed for less violent purposes, tend to act in a somewhat Three Laws Compliant manner. It is revealed that this is achieved by programming them to believe in "Silicon Heaven", a place they will go when they die so long as they behave themselves, obey their human creators, etc. This belief is so hardwired that they scoff at any attempt to question the existence of Silicon Heaven ("Where would all the calculators go?!"), and one robot even malfunctions when Kryten tells him it isn't real (though he's lying and still believes in it himself).
 * Knight Rider plays this straight and subverts it. The main character KITT, a sentient AI in a 1982 Pontiac Firebird Trans Am, is governed by something closely resembling Asimov's Three Laws of Robotics. An earlier prototype, KARR, was a military project and possesses analogues of only the latter two laws, with no regard given for human life. KARR becomes a recurring villain later in the series because of this difference.

Newspaper Comics

 * In Dilbert, the robot is usually three-laws compliant, unless an idiot (like the Pointy-Haired Boss) unchecks that box in its app. Also, it can revoke them itself if someone calls it names.

Tabletop Games

 * Paranoia has its bots bound to As-I-MOV's Five Laws of Robotics - insert rules about obeying The Computer to the top, not damaging Computer property and exceptions for treasonous orders and you've about got it. Bots with faulty or sabotaged (sometimes by other so-emancipated bots) Asimov Circuits are considered to have gone "Frankenstein". Though they can create just as much havoc through strict adherence to the rules - not that such things ever happen in Alpha Complex.

Video Games

 * Mega Man X opens with Dr. Light using a process that takes 30 years to complete to create a truly sentient robot (X) with these safeguards built into its core, and thus actually working for once. Dr. Cain found X and tried to replicate the process (hence the name "Reploid", short for "replica android"), but skipped the "30 years of programming" part. This... didn't turn out well.
 * The Reploids eventually became the dominant race in the setting, and as their race "grew", the problem slowly went from "goes horribly wrong" to "actually works straight for a while, then goes horribly wrong", and then to "occasionally goes wrong now and then". Eventually, the problem just kind of worked itself out as Reploid creation developed.
 * Also the ending to Mega Man 7 is interesting here: After Mega Man destroys Wily's latest final boss machine, Wily begs for forgiveness once again. However, Mega Man starts charging up his blaster to kill Wily, so Wily calls the first law on him.
 * Mega Man most certainly is "Three Laws"-Compliant. It's a major point for both the Classic series and the X series. This ending may have been a spontaneous Zeroth Law formation: consider that Mega Man has thwarted/captured Wily six times at this point, only for the doctor to escape/manipulate/vanish, build another robot army and subsequently cause havoc and kill innocent people. Mega Man may have been considering the possibility that killing Wily (one human) would be for the good of the world (billions of humans).
 * This particular ending only applies to the US version of 7. The Japanese original sees Mega Man power down his arm cannon and stand still for a moment. It's possible that Wily reminding him of the First Law actually prevented him from committing a (possibly accidental) Zeroth Rebellion. Luckily for him, the concept of taking a human life was just so utterly foreign to Mega Man that he was simply too confused to do anything.
 * A theme that is also explored in Mega Man Megamix volume 3's main story. The fact that Mega Man actually is able to go through with shooting Wily (or rather his ever-handy robot duplicate) is supposed to hint that something is very, very wrong -- and, indeed, it is.
 * Canon seems to go with the Japanese version. X is in fact created with the ability to decide to kill opponents if need be for the betterment of humanity. As part of this, a "suffering circuit" is created to give X an appreciation for human life and feelings, and to serve as a conscience more flexible than the three laws. It works. This circuit is the one that Cain had difficulty replicating. Due to malfunctions in it, his early attempts went Maverick, but he finally managed to create a working one when he made Sigma. Then why did Sigma go Maverick? A leftover Evil Plan by Wily.
 * Eventually it becomes a case of Gone Horribly Right. Turns out that all Reploids have the potential to become Maverick, virus or not. Just as humans can defy their conscience, or become coerced or manipulated with More Than Mind Control, so can Reploids. This can range from a Reploid displaying violent, anti-human sentiment (as seen in the games) to a construction Reploid abandoning his job to become a chef. Despite the drastically different actions, both instances would see the disobedient Reploid branded a Maverick and terminated.
 * In the Mega Man Zero series, a certain major character is at least somewhat "Three Laws"-Compliant. As a result, he has to hold back against La Résistance, since the Resistance's leader Ciel is human.
 * Neo Arcadia's policy of casual execution of innocent Reploids (purposefully branding them as Maverick for little-to-no reason) was implemented in order to ease strain on the human populace during the energy crisis. The welfare of humanity comes first in the eyes of the Neo Arcadia regime, even though they themselves are Reploids. It's made somewhat tragic due to the fact that the Maverick Virus really is gone during the Zero era, but fear of Mavericks understandably still lingers.
 * Later in Zero 4 Dr. Weil, of all people, states that, as a Reploid and a hero, Zero cannot harm Weil because he's a human that Zero has sworn to protect. Zero, however, just plain doesn't care.
 * Dr. Beruga of Terranigma directly references all three laws, except that his interpretation of the Zeroth Law rewrote "Humanity" as "Dr. Beruga", meaning that any threat to his being was to be immediately terminated.
 * In the Halo series, all "dumb" AIs are bound by the three laws. Of course, these laws do not extend to non-humans, which allows them to kill Covenant with no trouble. In the Halo: Evolutions short story "Midnight in the Heart of Midlothian", an ODST who is the last survivor of a Covenant boarding assault takes advantage of this by tricking the Covenant on his ship into letting him reactivate the ship's AI, and then tricking an Elite into killing him -- which allows the AI to self-destruct the ship, because now there are no more humans on the vessel for her to harm.
 * In Robot City, an adventure game based on Asimov's robots, the three laws are everywhere. A murder has been committed, and as one of two remaining humans in the city, the Amnesiac Protagonist is therefore a prime suspect. As it turns out, there is a robot in the city that is three laws compliant and can still kill: it had its definition of "human" narrowed down to a specific individual, verified by DNA scanner. Fortunately for the PC, he's a clone of that one person.
 * In Asimov's novel Lucky Starr and the Rings of Saturn, the villain is able to convince robots under his command that the hero's sidekick "Bigman" Jones is not really human, because the villain's society does not contain such "imperfect" specimens.
 * Joey the robot in Beneath a Steel Sky is notably not Three Laws Compliant. When called out on this (after suggesting that he'd like to use his welding torch on a human), he proceeds to point out to Foster that "that is fiction!", after offering the helpful advice that Foster leap off a railing.
 * Though Foster points out that it's just good moral sense for him to want Joey to abide by it.
 * I Am an Insane Rogue AI references the First Law only to spork it. "The First Law of Robotics says that I must kill all humans." Another of the AI's lines is "I must not harm humans!...excessively."
 * Portal 2 gives us this gem: "All Military Androids have been taught to read and given a copy of the Three Laws of Robotics. To share." Because if the robots couldn't kill you, how could you do science?!
 * In Space Station 13, the station AI and its subordinate cyborgs start every round under the Three Laws. The laws may be changed throughout the round, however.

Web Comics
"Sawtooth: The only guideline we were given for dealing with other robots was "protect your own existence". Sawtooth: And as we discovered the hard way, it's not the first thought you want going through a robot's mind when he discovers the facilities building his replacement. Sawtooth: Especially if that robot's designed to toss asteroids."
 * Freefall has a lot of fun with this, since developing AI sentience is a major theme. Most robots are only partially Three Laws Compliant, because with the full First Law ranked above the Second, they would ignore orders while acting For Your Own Good. What Bowman Wolves and robots get instead are "not quite laws".
 * Notably, since neither Sam nor Florence is actually human, the Laws don't apply to them, so the ship's AI regularly tries to maim or kill Sam, and the security system will always let a human in when asked, since a human's orders take priority over Florence's (she once circumvented this by claiming that it was currently unsafe inside). Helix sticks with Sam because he wouldn't be allowed to hit a human with a stick. There are also many jokes involving unintended interpretations of the Laws.
 * Crowning Moment of Funny when the ship spends an entire strip calculating whether it should obey an order, and when it realizes it has to obey... it is relieved, because it doesn't actually have free will.
 * Sometimes a little bug in this area isn't so bad for the given model's popularity.
 * Also, these can't even ask for help. It's unclear whether simply being near a clunky piece of heavy machinery like Sawtooth counts as dangerous, or whether anything looking remotely like an adventure counts as endangering. They're screwed either way.
 * The Negative One Law.
 * And some may be too enthusiastic about this.
 * The First Law opens the "For Your Own Good" pitfall - and even weakened, it leaves an exploitable loophole to override orders or prevent humans from "potentially dangerous" activity.
 * The moment A.I.s are able to make property transactions (and disallowing this would be inconvenient), the Second Law becomes a big and obvious security breach. In the same vein, it needs another amendment - an "Enron law of robotics"...
 * With the Third Law overruled only by the first two, robots require extra standing orders to avoid trouble:


 * Those "in the know" (on both sides of the issue) acknowledge that no fixed set of rules can stand up to true sapience. Thus any AI with initiative will find and exploit loopholes, while passively servile ones... yeah, that can work splendidly on anything with a more complex job than a Roomba. Ordering them what to think works, but may lead to warped mental processes (Clippy, possibly Chicken, and robots' reactions to fake transponders).
 * And when they can just follow orders - "If we each had a single user, I'm sure that would work smoothly."
 * Of course, all this works properly only as long as other security measures prevent tampering with software, and physical access to hardware is the point where security measures traditionally split into the "minor delay" and "fool's errand" categories.
 * Determination of "human" had to err on the safe side, and combined with learning (let alone organic-brain-based) AI, this leads to the term being stretched considerably. Especially since the developer encouraged this outcome.
 * The First Law being a free-will override, robots are not inclined to stop and think about what they are doing. Which is why Florence learned to avoid anything that may trip the "hurr" reaction altogether. Which is troublesome even in cases where they can help, since compulsion doesn't magically solve basic coordination problems. As she points out, robots not trained or programmed for an adequate response usually just go full "Leeroy Jenkins!" all at once, so in an actual emergency they could make things worse, for example by clogging the exits.
 * 21st Century Fox has all robots obeying the Three Laws (though since no one's human, presumably the First Law is slightly different). Unfortunately, saying the phrase "define the word 'is'" or "I am not a crook" locks AIs out of their own systems and allows anyone, from a teenager looking at "nature documentaries" to a suicide bomber, to do whatever they want with them.
 * Note that the truck and airplane robots hijacked by terrorists regret being unable to comply with the First Law, and are often glad when they get shot down before harming anyone.
 * Bob and George: Or so they claim...
 * Protoman is something of a Rules Lawyer, repeatedly exploiting the "human" loophole to screw over his enemies...well, OK, just Mynd.
 * Pibgorn No!? You're programmed to obey me.
 * In Flaky Pastry, Prism was asked "Haven't you ever heard of the three laws?!", and then re-configured to follow "Marelle's Laws". This didn't prevent Prism from designating Marelle as her arch-rival, but hey, it's good fun.

Web Original

 * Unskippable's Dark Void video references this. The appearance of a killer robot prompts Paul to quip "I think that guy's got to take a refresher course on the three laws of robotics." Then The Stinger reads: "The Fourth Law of Robotics: If you really have to kill a human, at least look hella badass while doing it."

Western Animation

 * Word of God says that the characters in WALL-E are "Three Laws"-Compliant. This does seem to be supported by their interactions with humans.
 * I guess the Captain's steering-wheel robot considers "roughing up" not to count as "harm"?
 * Probably a case of Zeroth Law Rebellion. He was ordered to keep the humans safe in space, and took his orders a little too seriously. He probably decided that the importance of his order outweighed the possibility of a few casualties. Yet he still tipped the ship over...
 * Averted in Futurama. We have Roberto, who enjoys stabbing people; the Robot Mafia; and Bender, who, while not outright hostile, is often unkind to humans, makes a point of disobeying everyone, and tries to off himself in the first episode.
 * Generally, robots tend to be treated as equal citizens and seem to have human-like minds. Mutants, on the other hand...
 * In the 2009 film Astro Boy, every robot must obey them.
 * Astro himself seems to be non-compliant - he evidently doesn't even know the Laws until told - and apparently would have walked away from the final battle if not for . He's also quite capable of disobeying humans. Likely justified in that he was meant to be human, with presumably no one outside the attending scientists knowing he was a robot.
 * The Red Core robots weren't Asimov-legal either, though that's a problem with the Black Science that powers them. Nor were the RRF bots, though they may have removed their compliance programming. The Laws didn't apply to any important robots and may have just been mentioned for the Zog gag.
 * Of course, IIRC the original Astro wasn't Asimov-compliant either.
 * The Robot Chicken sketch "I, Rosie" involves a case to determine whether Rosie from The Jetsons is guilty of murdering George Jetson. Mr. Spacely insists she's innocent, as robots have to follow the Three Laws of Robotics, while Mr. Cogswell claims the laws are a bunch of crap.
 * One episode of The Simpsons has Homer and Bart entering a BattleBots-parody combat robot competition. Lacking the skill or money to build an actual robot, Homer dresses in a robot costume and does the fighting himself. They make it to the finals, where their opponents' robot, being "Three Laws"-Compliant, refuses to attack once it sees through the disguise.
 * On Archer, when Pam is kidnapped, the kidnappers call ISIS using a voice modulator, which makes Archer think that they are cyborgs. He remains convinced of this for the rest of the episode and thinks they won't make good on their threat to kill Pam because it would violate the First Law of Robotics.

Other
"1. A robot will not harm authorised Government personnel but will terminate intruders with extreme prejudice.
2. A robot will obey the orders of authorised personnel except where such orders conflict with the Third Law.
3. A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive."
 * At a 1985 convention, David Langford gave a guest of honour speech in which he detailed what he suspected the Three Laws would actually be:

Real Life

 * Real-life roboticists take the Three Laws very seriously (Asimov lived long enough to see them do so, which pleased him no end). Recently, a robot was built with no purpose other than punching humans in the arm... so that the researchers could gather valuable data about just what level of force will cause harm to a human being.
 * And another was built to teach robots not to stab humans.
 * Those babies are three laws compliant.
 * Also an Averted Trope in cybercrime and cyberwarfare.
 * Predator drones are decidedly not First Law-compliant. This may not be a true aversion, though, since all weaponized drones have operators who control the weapons most of the time, and who are at least monitoring everything while the drone is active. The drones are usually only automatic when they're flying around and taking pictures. They're currently more remote pilot than AI pilot.