Information for "2010: The Year We Make Contact/Headscratchers"

Basic information

Display title: 2010: The Year We Make Contact/Headscratchers
Default sort key: 2010: The Year We Make Contact/Headscratchers
Page length (in bytes): 13,395
Namespace ID: 0
Page ID: 14402
Page content language: en - English
Page content model: wikitext
Indexing by robots: Allowed
Number of redirects to this page: 0
Counted as a content page: Yes
Number of subpages of this page: 0 (0 redirects; 0 non-redirects)

Page protection

Edit: Allow all users (infinite)
Move: Allow all users (infinite)
Delete: Allow all users (infinite)

Edit history

Page creator: prefix>Import Bot
Date of page creation: 21:27, 1 November 2013
Latest editor: Robkelk (talk | contribs)
Date of latest edit: 22:41, 27 January 2016
Total number of edits: 10
Recent number of edits (within past 180 days): 0
Recent number of distinct authors: 0

Page properties

Magic word (1)
  • __NOTOC__
Transcluded templates (4)


SEO properties

Article description (description)
This attribute controls the content of the description and og:description elements.
The HAL 9000 is supposedly the most advanced computer and AI available to man, yet apparently no one checked how it would act when given conflicting directives? This is the kind of thing they teach you about in undergraduate (if not high-school) level computer science. Didn't the supposed genius Chandra think of this? Does HAL Laboratories even employ a QA team that isn't made up of a bunch of stoned monkeys? Any halfway decent test plan would have caught this. HAL should have been programmed to immediately reject any order which causes this kind of conflict.

So, okay, let's say Chandra is an Absent-Minded Professor, and QA somehow missed this obvious bug. So HAL ends up with conflicting directives. His perfectly logical solution to avoid lying to the crew is... to kill them so that he then won't have to lie to them any more. Again, what. Not only does he have to lie to the crew to accomplish this goal in the first place, but his plan fails spectacularly and the entire mission is almost FUBAR'd. The most advanced AI, considered superior to humans in many ways, and this was the best plan he could come up with?! How about, "Hey Dave, Frank, there's something very important I have to tell you. Due to the current mission parameters, I am unable to function effectively until we reach Jupiter. I'm sorry, but I cannot elaborate. I will deactivate myself now. I realise this will put a strain on the mission, but it is vitally important that you do not attempt to reactivate me until we reach our destination. I will be able to explain then. Shutting down..." That would leave the entire crew alive and HAL in perfect working order once Discovery reaches Jupiter, at the cost of losing the computer for the most uneventful part of the mission - a mere inconvenience.
Information from Extension:WikiSEO