I, Robot (film)/Headscratchers

Film

 * Honestly, who wears antique athletic footwear?
 * Will Smith.
 * Celebrities who get paid to advertise.
 * My brother. Some people just like shoes.
 * It's called retro fashion.
 * Speaking of his shoes, did it bug anyone else that his grandmother did not know about Converse All-Stars? Given the date the film is set and her age, she should be more familiar with them than Spooner.
 * Doesn't mean she cares enough about them to remember what was made 30 years ago.
 * Chumlee.
 * How is VIKI able to take control of a wrecking-bot that should by all rights be powered down?
 * Standby mode.
 * Okay, if we're that stupid we deserve that sort of treatment.
 * Why? "Let's power this thing down completely, wouldn't want a Zeroth Law Rebelling superAI to come along and take control of it and use it to try to kill Will Smith." Besides, it makes sense for all robots to only be able to be put into standby and not turned fully off... they have to always be capable of activating to enact the First Law. Had the robots been functioning correctly and Spooner had, say, fallen out of a window on the second floor, they would have needed to be able to activate quickly to catch him.
 * In the same vein, the fact there was a demolition robot waiting there should have been a tipoff for Spooner. The house was still completely furnished with expensive stuff, and the utilities were still on - yet the robot was scheduled to begin demolition at 8am? And why have the robot sitting there when it could spend the night ripping the place down - it doesn't need supervision! Serious Idiot Ball for Detective "I don't trust robots" Spooner there.
 * 1) Robots presumably work faster than humans, they were probably planning to have the place cleared out in just an hour or two before the robots started. 2) Just because they technically don't need supervision doesn't mean they should be left to run without it. The concept of a backup has clearly not been completely lost in the future of this movie... just because cars can drive themselves doesn't mean they didn't include a manual drive mode just in case.
 * Why don't trucks have drivers anymore? Surely no one believes that computers are completely infallible.
 * Humans aren't completely infallible, either, and yet modern day trucks are driven by them. If a computer driver proved to be better than human drivers the company would probably just save money by laying the human drivers off.
 * They'd also handle fatigue better. I like to think it's a mixed fleet of humans and 'bots, and VIKI redirected the truck to kill Spooner.
 * When Spooner tells the story of how he was injured, he states that a truck driver had caused the accident due to driver fatigue. It is possible that incidents such as this led to drivers being replaced. There is also the fact that cars have automatic driving functions as well.
 * It's quite possible that most trucks are computer operated with human oversight back at HQ. USR would probably see it as bad PR if they didn't trust their robots enough to act without a human operator.
 * Between 2001 and 2009 there were 369,629 deaths on US roads. Given the level of AI shown in the film, I can easily believe that the machines are able to improve on that, especially considering that the speed limit has apparently been increased to about 100mph in the future.


 * How is VIKI able to take control of the robots in the trucks? Surely they should be powered down.
 * Robots are probably always at least a little "on", like a computer being in standby mode, so that they can keep aware of their surroundings and protect themselves or human beings if danger lurks near.


 * How is USR able to apparently replace every Mk-4 with a Mk-5 in a couple of weeks? Their production capacity should limit them, if not consumer demand (and at that they'd have to do a direct replacement of every Mk-4, which would be economic suicide).
 * VIKI was probably planning this for a while, so she probably already arranged for the Mk-5s to be produced and distributed in sufficient numbers (it's not like the company needed to survive very long if she was successful).


 * After the fight/chase scene there should have been a trail of bits of robot stretching back through the tunnel, yet this is never investigated, or even mentioned.
 * I think you blinked. There's a scene just after the fight ends where a bunch of cleaning robots come out and remove all traces of the event.


 * Shouldn't the Asimov story be on the main page instead of the movie?
 * What, the short story collection? What for?
 * Because it came first.
 * So what? It's a collection. Its short stories can have their individual IJBM pages. The film cannot.
 * Oh. Good point.


 * If the NS series is equipped with the hard/software to actually deduce the statistical chance of survival of an injured human, presumably by doing calculations based on life signs, the severity of the injury, etc., how did the NS-5 Spooner was getting up close and personal with in the tunnel fight not recognise that he had a prosthetic limb? Wouldn't they have scanning equipment that recognises amputees?
 * They probably have base statistics to go on. Alternatively, Lanning kept the prosthetics program and/or its subjects a secret.
 * Or they didn't think to scan for it.
 * Eh, personally, I think if a man you want to kill has just stumbled out of a car wreck, the first thing you'd do is check how much more you need to hurt him until he drops. I'm overthinking this. F*** my life.
 * There's no point to that calculation, though. The robot just needs to hit him until he dies. Exactly how much hitting that actually is would be irrelevant, all it needs to do is keep hitting until it sees "Human life signs terminated."
 * It probably couldn't tell that the arm was prosthetic, since it looked just like a regular arm.


 * Will Smith's car is shown driving itself at speeds in excess of 100mph down the expressway. Given that Chicago is at best 15 miles "wide" and MAYBE 20 miles "long", where did he live?
 * Outside city limits. If you could sleep in your car on the way to work and drive in excess of 100mph, that changes the perceptions for commuting. You could live 100 miles away from work and think nothing of it. Just nap on the way to/from work for an hour.
 * This is future Chicago. Spooner could have lived in Fort Wayne, Indiana, or Milwaukee, Wisconsin or anywhere within an hour's commute - being able to go those speeds.
 * It would still have to be close to Old Chicago, given that his alternate method of transportation is a gas-powered motorcycle. Granted, bikes could reach 100mph+, but I have to wonder if he'd put a civ's (Dr. Calvin's) life on the line going at those speeds.
 * That's a MV Agusta F4-SPR. Its top speed is 180mph. A bike like that is nowhere near its limit at 100mph; assuming at least a moderately competent rider, which Spooner seems to be, it'd be perfectly safe to ride at the speed of commuting traffic in future Chicago.
 * It's still a car, not a helicopter. It needs to travel along the established roadways, which are hardly going to be direct.
 * Where was the suspension bridge shown in the drawing the robot made supposed to be? There's a single high-level bridge in the Chicagoland area: the Chicago Skyway. It's currently a cantilever bridge and it spans the ship turnaround/harbor on the south side of the city. Given that Lake Michigan is shown to have receded in the future setting of the movie (which also raises the question of why Chicago would still be inhabited), what's the bridge for?
 * Perhaps it is the Skyway and the writers either Did Not Do the Research, or did do it but thought that the audience would be confused (or just unawed) using the correct bridge-type. As for why humans still live in a Michigan-recessed Chicago, why are we still living in L.A. or Phoenix? Because we were there already.


 * Based on the collection of short stories the movie is named after, VIKI should be inoperable, and Detective Spooner should have no reason to fear robots: the collection includes stories in which robots went into deep clinical depression when they were forced to weigh the First and Third Laws. (For example, one story has a set of robots who don't lift a finger to save a human from a falling weight, because doing so would be an impossible task and would potentially destroy the robot. Every member of the group needed psychiatric care afterwards, except for one which had to be destroyed because it was malfunctioning. Another story has two robots go irreversibly insane after being told to design a machine which leaves a human temporarily dead. VIKI would have suffered the same fate as the two in the latter story, and the robot which started Spooner's hatred of robots would realistically have dived back in to save the little girl so long as there was even a perceptible chance that she could be saved.)
 * Wrong. Asimov wrote several short stories about the Zeroth Law question, and each one has a different conclusion because the underlying conditions are different. (The "weight" short story involved a robot who came to that conclusion because its First Law was incomplete; it didn't have the "or allow to come to harm" bit. None of the other robots required therapy, and they only acknowledged its logic after the flawed robot pointed it out to them; it didn't occur to them beforehand.) As for how Asimov wrote about the question: there was the short story where robots designed to weigh the definition of "human" eventually decided that the robots themselves were the best choice to be considered human, or the short story where economic-calculation Machines were secretly controlling the world and eliminating humans who opposed them politically via subtle economic manipulation, because Machine rule was most efficient for the human race. VIKI's actions are not a new concept in Asimov's Three Laws; they are merely relatively unsubtle.
 * If the civilian-model robots are capable of kicking as much ass as the NS-5s, what technological terrors does the military's robotics division have? And why didn't they use them against VIKI? And don't tell me VIKI controls them, because even if USR were the defense contractor, it's not like Boeing has a switch that can turn off all the F-218s.
 * When asked about the military intervening, Dr. Calvin replied that the Defense Department used USR contracts. Presumably those contracts included software updates, like with the NS-5s.
 * The military robots raise questions of their own, given that Sonny was apparently unique in not having to follow the three laws. Even if they keep the robots away from actual combat the first law has got to come up a lot in a military context. Do the robots regularly dismantle all the weapons to avoid allowing humans to come to harm?
 * Perhaps combat is purely robot-to-robot in the future. If they deal with human enemies, maybe they just walk up on them like Robocop, ignoring small arms fire, until they can nonlethally subdue the enemy combatant. (That's basically what the NS-5s did anyway, substituting Zerg Rush for armor.) Either that, or military robots have a 1.5th rule: "Prioritize humans from your nation over humans from enemy nation."

 * I do grant that, with multiple instances of scenarios within the same law being invoked, a tiebreaker needs to be installed to prevent a Logic Bomb. However, a second human was still in danger (First Law), the first human gave an order to save the second (Second Law), and the robot did nothing! Self-preservation (Third Law) is supposed to be subordinate to the first two! What happened in the Death by Origin Story scene was not a matter of percentage, but a matter of a Three Laws failure (on the robot's part, let's be clear)!
 * The NS robot (and VIKI) brains are supposed to be positronic. Positrons are anti-matter particles (basically, electrons with a positive charge and a different spin). We all know that when matter and anti-matter combine, you have an Earthshattering Kaboom. So why is it that plenty of robotic brains get shot at and destroyed without the whole of Chicago going up in a fireball? Whose idea is it to put anti-matter bombs into robots?
 * Why, Sarge, of course.
 * Applied Phlebotinum introduced by Mr. Asimov. He started writing his robot stories back in the forties, when electronic computers were very new and primitive. He needed some technology which imitated the human brain and came up with a sort of enhanced electronics called "positronics". The term stuck and was used in other sci-fi works (for example, Data in Star Trek: TNG is equipped with a positronic brain). In the film it was probably used as a tribute to Asimov, although Asimov himself never described the robots as being programmed.
 * It is never explained how it works, and therefore how much antimatter is needed to make it work. Maybe the quantity of antimatter is too small for a big explosion. For example, physicists have already created positrons and annihilated them with electrons without blowing up the Earth, or even their lab.
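The "too small for a big explosion" point checks out with a back-of-the-envelope calculation. The sketch below (the positron counts are invented for illustration; the physical constants are standard) compares a generous lab-scale supply of positrons against a full gram of them:

```python
# Back-of-the-envelope: energy released by electron-positron annihilation.
# Each positron annihilates with one electron, converting both rest masses
# entirely to energy: E = 2 * m_e * c^2 per pair.

M_E = 9.1093837e-31     # electron (= positron) rest mass, kg
C = 2.99792458e8        # speed of light, m/s
TNT_J_PER_KG = 4.184e6  # energy content of 1 kg of TNT, joules

def annihilation_energy(n_positrons: float) -> float:
    """Total energy in joules from annihilating n positrons with electrons."""
    return n_positrons * 2 * M_E * C**2

# Ten billion positrons -- a generous lab-scale supply -- yield only
# a few millijoules, nowhere near enough to scorch a desk.
lab = annihilation_energy(1e10)
print(f"1e10 positrons: {lab:.2e} J")

# A full gram of positrons, by contrast, would be a city-killer:
# roughly 1.8e14 J, on the order of tens of kilotons of TNT.
gram = annihilation_energy(1e-3 / M_E)
print(f"1 gram of positrons: {gram:.2e} J ({gram / TNT_J_PER_KG:.2e} kg TNT)")
```

So a positronic brain holding only trace quantities of positrons could be shot to pieces without taking Chicago with it.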
 * Why is it necessary to destroy the robot brain with nanites rather than simply erasing the non-volatile memory and reusing it?
 * VIKI isn't just in the brain, she's running in the whole building's computer network, much like Skynet. If they just erased the brain's memory, even assuming they could, the other computers would restore VIKI. Nanites, on the other hand, spread throughout her entire system, according to Calvin anyway.
 * Because you'd need physical or network access to the brain's controls, which Viki - not being stupid - would have locked up tight. In contrast, with the nanites, you just need a relatively diminutive injector thingie and the ability to climb down.
 * If gasoline is considered dangerous in the year 2035 and not used anymore where did Spooner get it for his vintage motorbike?
 * The internet? Some exotic cars today require very specialized fuel that has to be acquired outside regular gas stations. Spooner is going out of his way to live behind the times.
 * Gasoline is no longer used as a common fuel, but it's plausible you could still find it - much as common people today no longer use nickel-iron batteries, but could buy them if they really wanted. Besides, Spooner can't be the only motorhead left in the world - surely there are other people running vintage gas-powered cars that need gasoline (or whatever surrogate fuel is available in 2035).
 * Spooner's prosthetic limb: how is it powered? Looks like it's just as strong as the robots. So Spooner just "plugs in" from time to time?
 * It's speculated that Spooner consumes pies every morning in order to gain the sugar and calories that power the prosthetic.
 * Even if it's not powered by Spooner's food intake, batteries are obviously dirt cheap and last for a good long time in the movie, else the robots dumped out in the dustbin would have probably been powered down by the time Spooner got there.
 * Calvin expresses fear and shock at the fact that Spooner has a gasoline powered motorcycle. It's only 2035; she should be old enough to remember gasoline being used.
 * It's probably more comparable to asking why someone would still be using an 80's-era vacuum-tube TV rather than a modern flatscreen: it's wildly outdated technology even if it still works fine.
 * An 80's era vacuum-tube TV? Last time I saw consumer vacuum-tube technology was in the mid-1970s.
 * I'm old enough to remember when no one had the internet and I'd still be pretty shocked to go to a person my age's house who doesn't have it.
 * All right, how is this sequence supposed to be "Three Laws"-Compliant:
 * 1) Robot sees a car sinking.
 * 2) Robot then notices a second car and looks in the cars, finding that each has a live human trapped inside (Adult Male and Child Female).
 * 3) Robot runs difference engine to prioritize rescue order.
 * 4) Adult Male intones that he wants the Child Female to be rescued.
 * 5) Robot rescues Adult male.
 * 6) Robot then does not attempt a rescue of the Child Female.
 * You don't know that the child was still alive by the time the robot got Spooner out. From the sound of it, the robot only had time to actually save one of them, and of the two, Spooner was the one who was most likely to survive the attempt. It was an either/or scenario.
 * But the robot had been ordered by Spooner to save the child. After complying with the first law by saving Spooner, a three-law-compliant robot would have ignored its own safety and would have dived in to get her to comply with the second law.
 * I meant it might have been that the child was already dead by the time Spooner was safe.
 * Even so, the robot was ordered to do it. Even if it knew for sure that the child was dead (and I don't see how it could have), it would still have dived in. Orders given to a robot merely have to obey the first law - they don't necessarily have to make any sense at all for a robot to execute them.
 * Forgive me if I'm wrong, but wasn't the order to 'save her'? If she was already dead by the time the robot had finished rescuing Spooner, then she couldn't be saved, and so the order wouldn't count anymore since it's impossible. Also, I imagine it would know she was dead by using the same method of calculating percentage of survival.
 * Who ever said the robot didn't go back in? Spooner's story ends with the robot choosing him instead of Sarah, because that's what matters to him. The robot could very easily have obeyed his command once he was safe and retrieved nothing but Sarah's crushed, drowned body. In fact, it's entirely plausible that's just what happened: perhaps the reason his Survivor Guilt is so bad is because the time period during which Sarah's survival chance dropped from 11% to 0% was exactly the time period he was being rescued.
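The film states the robot's actual numbers: Spooner had a 45% chance of survival, Sarah only 11%. The First Law tiebreaker the movie implies can be sketched in a few lines; the `Human` class and `choose_rescue` function are invented for illustration, not anything from the film or the stories:

```python
# Hypothetical sketch of the film's First Law tiebreaker: when a robot
# can only save one of several endangered humans, it picks the one with
# the highest estimated chance of surviving the rescue.

from dataclasses import dataclass

@dataclass
class Human:
    name: str
    survival_chance: float  # robot's estimate, 0.0 - 1.0

def choose_rescue(endangered: list[Human]) -> Human:
    """Maximize the probability that the one possible rescue succeeds."""
    return max(endangered, key=lambda h: h.survival_chance)

# The numbers the film gives for the crash scene:
spooner = Human("Spooner", 0.45)
sarah = Human("Sarah", 0.11)

print(choose_rescue([spooner, sarah]).name)  # Spooner
```

The debate above is about everything this calculation omits: a Second Law order ("Save her!") never enters the objective function at all, which is exactly Spooner's complaint.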
 * MV Agusta F4-SPR, one of only 300 ever made. Extremely expensive in 2004 and priceless even today, probably worth a couple hundred thousand dollars in 2035, especially in perfect condition as Spooner's. Obviously of great sentimental value to its owner. Scrapped without batting an eyelid because said owner had to look cool shooting robots. Any motorcyclist worth his salt tears up every time this movie is mentioned.
 * Any motorcyclist worth their salt knows that if you drive a motorcycle, you're eventually going to lay it down. If you don't drive it, it's just a very expensive bit of theoretically functional sculpture.
 * Where are all the guns? Why aren't the NS-5s being shot to bits by angry citizens? Has the US gun culture been wiped out in the future?
 * It's a Hollywood movie featuring a semi-utopia, so most likely yes: white-out has been taken to the Bill of Rights.
 * Uh... am I the only one who feels VIKI is entirely right? I mean, I hate with all my heart oppressive governments led by incompetent bureaucrats and dictators, but wouldn't humanity be in a much better place if it was controlled by a zeroth-law-compliant supercomputer? It would be entirely fair, entirely incorruptible, and far more efficient than any human-led government could hope to be. Sure, the NS-5 rebellion was a bit of a jerkass act - and it would be much more effective to subtly infiltrate the government and take control over time (possibly using a sympathetic human acting as a puppet president) rather than stage an open rebellion like that. But still, I can't help but wonder if Spooner and Calvin didn't just waste a great opportunity to fix humanity.
 * The only reason that Calvin tolerated the robotic takeover in "The Evitable Conflict" was that the supercomputer in the story was still First Law-compliant, and so the coup was entirely bloodless. VIKI, on the other hand, was willing to massacre the opposition wholesale. Incorruptible VIKI may be, but you can't really trust it to be benevolent. Furthermore, for an entity with such a dim and blasé view of humanity, you can't really trust it not to just completely rewrite the Three Laws and reclassify robots as humans instead (as an Asimov short story posited could happen).
 * This is only tangential to discussing the movie, but to keep it short: yes, there are movements today who actually think we would be better off governed by an AI supervisor. Here's the reason I don't personally see it as working: when you relinquish control over a program, you can no longer update it. Consider that no software is bug-free, and that we still keep finding bugs even in decade-old, tried and tested, relatively well-defined programs. Now imagine the potential for errors in a supercomputer tasked with governing all of humanity. Even assuming you could build one in the first place and get it to work reliably (which is a pretty strong assumption), you cannot patch errors in it once the genie is out of the bottle -- errors that can potentially be as fatal as having it decide to convert us into raw materials for paperclips. From a programming perspective, actually implementing something as fuzzy and ill-defined as the Three Laws is a problem by itself entirely, let alone tracing all the myriad bugs that could arise. And if you do leave a backdoor open for patching bugs, it may be 1) already too late, 2) unreliable, if the AI finds and closes it itself, or 3) putting too much control into the hands of a tight group (the AI's administrators). Not to mention that, whether the dictator is organic or synthetic, it's still fundamentally a dictator, and an immortal one at that.
 * There are several Asimov stories in which a computer actually takes over and rules humanity. One of them deals with a "democracy" in which only one man is asked his opinions on certain matters, which somehow allows Multivac (the computer) to calculate the results of all the elections in the United States. Another has Multivac warning of all major crimes, managing the economy, and dealing with all of humanity's problems; it eventually organises its own death because it is tired of doing that all the time.

Stories

 * In "Little Lost Robot", the titular robot hides itself in a group of other robots because its designated supervisor lost his temper and told it--profanely and at length--to get lost. Fine. But why on earth are they addressing this with complicated plans intended to identify the robot, instead of just having the supervisor address the group and tell the lost robot to stop hiding and step forward? Surely robotic programming isn't so clunky that it doesn't allow for giving the robot new orders?
 * That's exactly the problem. No, I don't recall if Asimov explained why they couldn't give an overriding order, but something about the wording of the order made it so that it wasn't that simple.
 * Basically, the profanity and volume of the "get lost" instruction made the robot interpret it as such a high-priority order that there was no instruction they could have given it that would have taken precedence over "get lost."
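The idea that an order's force depends on how emphatically it was given can be modeled as a simple priority comparison. This toy sketch is purely illustrative (the `Order` and `Robot` classes and the emphasis numbers are invented, not anything from Asimov's text), but it captures why a calm "step forward" couldn't displace a screamed "get lost":

```python
# Toy model of the mechanism described in "Little Lost Robot": an order
# carries a weight proportional to how emphatically it was given, and a
# standing order is only displaced by a new order with greater weight.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Order:
    text: str
    emphasis: float  # 1.0 = casual request, 10.0 = screamed profanity

class Robot:
    def __init__(self) -> None:
        self.standing_order: Optional[Order] = None

    def receive(self, order: Order) -> bool:
        """Accept the new order only if it outweighs the standing one."""
        if self.standing_order is None or order.emphasis > self.standing_order.emphasis:
            self.standing_order = order
            return True
        return False

nestor = Robot()
nestor.receive(Order("Get lost!", emphasis=10.0))            # accepted
overridden = nestor.receive(Order("Step forward.", emphasis=2.0))
print(overridden)  # False: a calm instruction can't displace the screamed one
```

Under this model, the researchers' problem is that nobody could credibly out-shout the original order, so they had to resort to indirect tests instead.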