Eliezer Yudkowsky

{{creator}}
[[File:Eliezer Yudkowsky, Stanford 2006 (square crop).jpg|thumb|300px|Eliezer Yudkowsky in 2006.]]
{{quote|''Let me say that Eliezer may have already done more to save the world than most people in history.''|From a [http://lesswrong.com//lw/2lr/the_importance_of_selfdoubt/2ia7?c{{=}}1 blog comment].}}
{{quote|Rationalism is the belief that Eliezer Yudkowsky is the rightful caliph.|Scott Alexander|[http://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/ The Ideology is Not the Movement]}}

Author of ''[[Three Worlds Collide]]'' and ''[[Harry Potter and the Methods of Rationality]]'', the shorter works ''[[Haruhi Suzumiya/Fanfic Recs|Trust in God/The Riddle of Kyon]]'' and ''[[Mega Crossover/Fanfic Recs|The Finale of the Ultimate Meta Mega Crossover]]'', and [http://yudkowsky.net/other/fiction various other fiction]. Helped found and is probably the most prolific poster to the blog/discussion site [[Less Wrong]] (the name of which he has used as a pseudonym on occasion), and is currently writing a book on rationality.

His day job is with the [https://web.archive.org/web/20120616070928/http://singinst.org/ Singularity Institute for Artificial Intelligence], specifically working on how to make an [[AI]] that ''won't'' just [https://web.archive.org/web/20110621172641/http://singinst.org/riskintro/index.html kill us all accidentally], and make [[A.I. Is a Crapshoot|AI less of a crapshoot]].

Most of his fictional works are [[Author Tract]]s, albeit ones that [[Rule of Cautious Editing Judgment|many find]] good and entertaining.

Occasionally [[One of Us|drops by our very own forums]].
----
{{creatortropes|Tropes describing Yudkowsky or his work:}}
* [[Atheism|Atheist]]: Doesn't mind calling religion "insanity", but believes in promoting [http://lesswrong.com/lw/1e/raising_the_sanity_waterline/ rational thinking in general] and letting atheism follow.
* [[Badass Boast]]: [http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/ "Unfortunately the universe doesn't agree with me. We'll see which one of us is still standing when this is over."]
* [[The End of the World as We Know It]]: Seeks to prevent it.
* [[Fan Boy]]: Isn't one, but has them despite trying to [http://lesswrong.com/lw/ln/resist_the_happy_death_spiral/ warn against being one].
* [[Genre Adultery]]: Almost all of his works, whether [[Harry Potter and the Methods of Rationality|fanfic]] or [[Three Worlds Collide|original]], are highly philosophical [[Author Tract]]s. And then there's ''[http://www.fanfiction.net/s/5731071/1/Peggy_Susie Peggy Susie]'', which is merely {{spoiler|a ''[[Calvin and Hobbes]]'' fic}} and parody of ''[[The Terminator]]''... with no philosophical elements whatsoever.
* [[Hannibal Lecture]]: In the [http://yudkowsky.net/singularity/aibox AI-Box] [http://lesswrong.com/lw/up/shut_up_and_do_the_impossible/ experiment], presumably.
* [[Human Popsicle]]: His membership with the [http://www.cryonics.org/ Cryonics Institute] is his backup plan if he dies before achieving proper immortality.
* [[Jewish and Nerdy]]
* [[Manipulative Bastard]]: His explanation of [http://lesswrong.com/lw/up/shut_up_and_do_the_impossible/nxw how he could "win" the AI-Box experiment].
{{quote|Part of the ''real'' reason that I wanted to run the original AI-Box Experiment, is that I thought I had an ability that I could never test in real life. Was I really making a sacrifice for my ethics, or just overestimating my own ability? The AI-Box Experiment let me test that.}}
* [[Measuring the Marigolds]]: Criticized and averted; he wants people to be able to see [http://lesswrong.com/lw/or/joy_in_the_merely_real/ joy in the 'merely' real world].
* [[Old Shame]]: The document [http://lesswrong.com/lw/yd/the_thing_that_i_protect/qze Creating Friendly AI], along with much of what he wrote before fully [http://lesswrong.com/lw/ue/the_magnitude_of_his_own_folly/ recognizing the dangers of AI].
* [[Transhumanism]]: He is a transhumanist.
* [[True Art Is Angsty]]: [http://lesswrong.com/lw/xi/serious_stories/ Discussed/invoked]:
{{quote|''In one sense, it's clear that we do not want to live the sort of lives that are depicted in most stories that human authors have written so far. Think of the truly great stories, the ones that have become legendary for being the very best of the best of their genre: ''[[The Iliad]]'', ''[[Romeo and Juliet]]'', ''[[The Godfather]]'', ''[[Watchmen]]'', ''[[Planescape: Torment]]'', the second season of ''[[Buffy the Vampire Slayer]]'', or '''that''' ending in ''[[Tsukihime]]''. Is there a single story on the list that isn't tragic? Ordinarily, we prefer pleasure to pain, joy to sadness, and life to death. Yet it seems we prefer to empathize with hurting, sad, dead characters. Or stories about happier people aren't serious, aren't artistically great enough to be worthy of praise - but then why selectively praise stories containing unhappy people? Is there some hidden benefit to us in it? [...] You simply don't optimize a story the way you optimize a real life. The best story and the best life will be produced by different criteria.''}}
* [[We Do the Impossible]]: [http://lesswrong.com/lw/65/money_the_unit_of_caring/ "I specialize...in the impossible questions business."] See also [http://lesswrong.com/lw/un/on_doing_the_impossible/ On Doing the Impossible].
* [[The World Is Just Awesome]]
{{tropelist|Tropes related to Yudkowsky's writing:}}
* [[Disobey This Message]]: Discussed in a number of posts.
{{quote|...if you think you would ''totally'' wear that [http://lesswrong.com/lw/mb/lonely_dissent/ clown suit], then don't be too proud of that either! It just means that you need to make an effort in the ''opposite'' direction to avoid dissenting too easily.}}
* [[Drugs Are Bad]]: Averted -- although Yudkowsky apparently [http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1r80?c=1 doesn't want] to try mind-altering drugs, he is [http://lesswrong.com/lw/66/rationality_common_interest_of_many_causes/ in favor] of drug legalization.
* [[Holding Out for a Hero]]: [http://lesswrong.com/lw/mc/to_lead_you_must_stand_up/ To Lead, You Must Stand Up], [http://lesswrong.com/lw/un/on_doing_the_impossible/ On Doing the Impossible].
{{reflist}}
[[Category:Mega Crossover/Fanfic Recs]]
[[Category:Authors]]
[[Category:Eliezer Yudkowsky]]
[[Category:Fanfic Authors]]
