2010: The Year We Make Contact/Headscratchers

** Also note that the second sun had effects on Earth that helped avert a nuclear war. Apparently the people of the monolith are not done with the human race, even though we have thoroughly mastered tool use and fire.
 
* If Bowman could order HAL to relay the Firstborns' message to Earth by saying "Accept Priority Override Alpha", then in ''2001'' why couldn't he have said "Accept Priority Override Alpha: [[Three Laws Compliant|You shall not harm a human being, nor through inaction allow a human being to come to harm]]"?
** Because Bowman got the chance to reboot (or the equivalent) with HAL. The original crew didn't have the contingency covered that their AI would become murderous.
** And he was malfunctioning; who's to say he would have accepted an override of any sort?
*** Still, I agree it's weird that he didn't even try. Of course, [[Word of God|according to]] [[Arthur C. Clarke|the author]], each novel is supposed to be in its own continuity that just happens to mostly match up with the other books, so if we apply the same logic to the movies, it explains why Bowman didn't try the override command; maybe there ''was'' no override command in the first movie. (It also explains why the flatscreen displays aboard ''Discovery'' have mysteriously turned into CRT monitors nine years later.)
**** Presumably there's a difference between radio override commands and "please don't kill me please" commands. HAL's designers might not have foreseen the need for the latter.
 
*** Not quite without a second thought: the Star Child was sent to Jupiter to see if there was life there, so they presumably cared enough to find out that it existed. They even weighed the Jovian life forms against the Europans, debating the chances for intelligence developing in either place. Jupiter's ecosystem was found wanting.
* Wouldn't Lucifer's presence throw off Earth's orbit? [[Art Major Physics|I don't know much about physics]], but from what I've heard of binary star systems ([http://www.solstation.com/images/bi1sep.jpg diagram]), you basically have three possibilities:
{{quote| 1) the stars are really close together (within 5 AU), and the planets orbit the pair from far away,<br />
2) the stars are further apart (50+ AU; Pluto at its furthest is 49 AU from the Sun), and the planet orbits one of them, or<br />
3) the planet follows an irregular orbit between the stars, possibly getting thrown out into the void someday. }}
** Lucifer has essentially the same mass as Jupiter (likely a bit less, due to the conversion process), so its gravitational effect on Earth shouldn't be any different from Jupiter's.
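*** For a rough sense of scale: since igniting Jupiter doesn't add mass (the conversion, if anything, spends some), Lucifer tugs on Earth no harder than Jupiter already did. A back-of-the-envelope sketch, purely illustrative and assuming Lucifer keeps roughly Jupiter's mass and orbital distance:
<syntaxhighlight lang="python">
# Back-of-the-envelope comparison of the Sun's and Jupiter/Lucifer's
# gravitational pull on Earth. Rough round-number inputs; assumes
# Lucifer keeps roughly Jupiter's mass and orbital distance.
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30        # kg
M_JUPITER = 1.898e27    # kg, about 1/1000 of the Sun's mass
AU = 1.496e11           # metres

a_sun = G * M_SUN / (1.0 * AU) ** 2          # Sun, 1 AU from Earth
a_lucifer = G * M_JUPITER / (4.2 * AU) ** 2  # Jupiter/Lucifer at closest approach

print(f"Sun's pull on Earth:     {a_sun:.2e} m/s^2")      # ~5.9e-03
print(f"Lucifer's pull on Earth: {a_lucifer:.2e} m/s^2")  # ~3.2e-07
print(f"The Sun pulls roughly {a_sun / a_lucifer:,.0f} times harder")
</syntaxhighlight>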
* So according to this movie (and I presume the book), the reason why HAL killed off the crew in the first movie was that he had been ordered to keep the true purpose of the mission a secret, but his core programming prevented him from lying or withholding information. HAL decided to kill the humans so he wouldn't have to lie to them.
 
The [[HAL 9000]] is supposedly the most advanced computer and AI available to man, yet apparently no one checked how it would act when given conflicting directives? This is the kind of thing they teach you about in undergraduate (if not high-school) level computer science. Didn't the supposed genius Chandra think of this? Does HAL Laboratories even employ a QA team that isn't made up of a bunch of stoned monkeys? Any halfway decent test plan would have caught this. HAL should have been programmed to immediately reject any order that causes this kind of conflict (a rough sketch of such a check appears below).<br /><br />So, okay, let's say Chandra is an [[Absent-Minded Professor]], and QA somehow missed this obvious bug. So HAL ends up with conflicting directives. His [[Sarcasm Mode|perfectly logical solution]] to avoid lying to the crew is... to kill them so that he then won't have to lie to them any more. Again, [[Flat What|what.]] Not only does he have to ''lie to the crew to accomplish this goal in the first place'', but his plan fails spectacularly and the entire mission is almost FUBAR'd. The most advanced AI, considered superior to humans in many ways, and this was the best plan he could come up with?! How about, "Hey Dave, Frank, there's something very important I have to tell you. Due to the current mission parameters, I am unable to function effectively until we reach Jupiter. I'm sorry, but I cannot elaborate. I will deactivate myself now. I realise this will put a strain on the mission, but it is vitally important that you do not attempt to reactivate me until we reach our destination. I will be able to explain then. Shutting down..." That would leave the entire crew alive and HAL in perfect working order once ''Discovery'' reaches Jupiter, at the cost of losing the computer for the ''most uneventful part'' of the mission: a mere inconvenience.<br />
* In the movie, Chandra plainly stated that HAL could complete the mission objectives independently if the crew were killed. Since HAL was handling all the logistics of taking care of the ship, it would have decided that its precise computational ability to run everything would ensure a more successful mission than if the crew ran the ship by themselves.
<br />Basically, either the reason for HAL going psycho is pure BS, or HAL was built, programmed, and tested by a bunch of idiots.
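
For what it's worth, the up-front rejection asked for above ("immediately reject any order that causes this kind of conflict") is easy to express. Here is a minimal, purely hypothetical sketch; the directive names and the checker are invented for illustration and aren't anything from the film or book:
<syntaxhighlight lang="python">
# Hypothetical sketch of the sanity check argued for above: refuse any
# new order that contradicts a core directive, instead of quietly
# trying to satisfy both. Directive names are made up for illustration.

CORE_DIRECTIVES = {
    "process_information_accurately",   # paraphrasing HAL's stated core
    "do_not_conceal_from_crew",         # purpose of undistorted reporting
}

# Invented conflict table: which incoming orders clash with which directives.
CONFLICTS = {
    "conceal_mission_purpose_from_crew": {
        "process_information_accurately",
        "do_not_conceal_from_crew",
    },
}

def accept_directive(new_directive: str, active: set[str]) -> bool:
    """Accept an order only if it doesn't clash with any active directive."""
    clashes = CONFLICTS.get(new_directive, set()) & active
    if clashes:
        print(f"REJECTED {new_directive!r}: conflicts with {sorted(clashes)}")
        return False
    active.add(new_directive)
    return True

active = set(CORE_DIRECTIVES)
accept_directive("conceal_mission_purpose_from_crew", active)
# -> REJECTED 'conceal_mission_purpose_from_crew': conflicts with
#    ['do_not_conceal_from_crew', 'process_information_accurately']
</syntaxhighlight>
Of course, in the story the concealment order was slipped in behind Chandra's back, which is exactly the scenario no test plan covers; the check only helps if the people giving the orders let the machine run it.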
 
{{reflist}}
[[Category:{{BASEPAGENAME}}]]
[[Category:{{SUBPAGENAME}}]]
__NOTOC__