Eliezer Yudkowsky

"Let me say that Eliezer may have already done more to save the world than most people in history."

- From a blog comment.

"Rationalism is the belief that Eliezer Yudkowsky is the rightful caliph."

- Scott Alexander

Author of Three Worlds Collide and Harry Potter and the Methods of Rationality, the shorter works Trust in God/The Riddle of Kyon and The Finale of the Ultimate Meta Mega Crossover, and various other fiction. He helped found, and is probably the most prolific poster to, the blog/discussion site Less Wrong (whose name he has used as a pseudonym on occasion), and is currently writing a book on rationality.

His day job is with the Singularity Institute for Artificial Intelligence, where he works specifically on how to build an AI that won't just kill us all accidentally, making AI less of a crapshoot.

Most of his fictional works are Author Tracts, albeit ones that many find good and entertaining.

"Part of the real reason that I wanted to run the original AI-Box Experiment, is that I thought I had an ability that I could never test in real life. Was I really making a sacrifice for my ethics, or just overestimating my own ability? The AI-Box Experiment let me test that."
 * Atheist: Doesn't mind calling religion "insanity", but believes in promoting rational thinking in general and letting atheism follow.
 * Badass Boast: "Unfortunately the universe doesn't agree with me. We'll see which one of us is still standing when this is over."
 * Bisexual: Is straight, but would take a pill that made him bisexual, because it would increase his capacity for fun.
 * Child Prodigy: He was one.
 * Dead Little Brother: Sort of; his little brother died at the age of 19. Although the loss may have made Yudkowsky's search for immortality more personal, he was already an atheist, an immortalist, and an advocate of cryonics before this tragedy.
 * Eats Babies: Presented without comment.
 * The End of the World as We Know It: Seeks to prevent it.
 * Fan Boy: Isn't one, but has them, despite trying to warn people against fanboyism.
 * Genre Adultery: Almost all of his works, whether fanfic or original, are highly philosophical Author Tracts. And then there's Peggy Susie, which is merely a Calvin and Hobbes parody of The Terminator... with no philosophical elements whatsoever.
 * Hannibal Lecture: In the AI-Box experiment, presumably.
 * Human Popsicle: His membership in the Cryonics Institute is his backup plan in case he dies before achieving proper immortality.
 * Immortality Seeker: Though he hopes never to die at all, he wants to be cryonically frozen and then revived if he does.
 * Insufferable Genius: Can come across as this.
 * Jewish and Nerdy
 * Manipulative Bastard: His explanation of how he could "win" the AI-Box experiment.

 * Measuring the Marigolds: Criticized and averted; he wants people to be able to see the joy in the 'merely' real world.
 * Old Shame: The document Creating Friendly AI, along with much of what he wrote before fully recognizing the dangers of AI.
    * It's still possible to find some really awful fiction that he wrote on UseNet many years ago. (No, I won't link to it.)
 * One of Us: Partakes from time to time in discussions here, usually regarding his own writing.
 * Protectorate: The Thing That I Protect.
 * Scale of Scientific "Sins": Plans to commit most of them, but backs away from creating life.
 * Science Hero: Openly uses the word "hero" to describe himself.
 * The Singularity: He is convinced that a technological singularity is coming and is determined to help bring about a positive one, despite the uncertainty of success.
 * Talking Your Way Out: The AI-box experiment.
 * Transhumanism: He is a transhumanist.
 * True Art Is Angsty: Discussed/invoked:

"In one sense, it's clear that we do not want to live the sort of lives that are depicted in most stories that human authors have written so far. Think of the truly great stories, the ones that have become legendary for being the very best of the best of their genre: The Iliad, Romeo and Juliet, The Godfather, Watchmen, Planescape: Torment, the second season of Buffy the Vampire Slayer, or that ending in Tsukihime. Is there a single story on the list that isn't tragic? Ordinarily, we prefer pleasure to pain, joy to sadness, and life to death. Yet it seems we prefer to empathize with hurting, sad, dead characters. Or stories about happier people aren't serious, aren't artistically great enough to be worthy of praise - but then why selectively praise stories containing unhappy people? Is there some hidden benefit to us in it? [...] You simply don't optimize a story the way you optimize a real life. The best story and the best life will be produced by different criteria."

 * We Do the Impossible: "I specialize...in the impossible questions business." See also On Doing the Impossible.
 * The World Is Just Awesome

 * Disobey This Message: Discussed in a number of posts:

"...if you think you would totally wear that clown suit, then don't be too proud of that either! It just means that you need to make an effort in the opposite direction to avoid dissenting too easily."

 * Drugs Are Bad: Averted; although Yudkowsky apparently doesn't want to try mind-altering drugs himself, he is in favor of drug legalization.
 * Holding Out for a Hero: To Lead, You Must Stand Up; On Doing the Impossible.
 * Literal Genie: What an improperly designed AI would be.
 * Living Forever Is Awesome: He wants nobody to ever have to die again.
 * The Multiverse: The quantum-physics version is discussed in detail in a sequence of posts; For the People Who Are Still Alive takes a still broader view.
 * Transhuman Treachery: Discussed in Growing Up is Hard as one reason to develop AI before human enhancement and Brain Uploading.
 * The World Is Not Ready