Category Archives: FAI

In Praise of Maximizing – With Some Caveats

Most of you are probably familiar with the two contrasting decision-making strategies “maximizing” and “satisficing”, but a short recap won’t hurt (you can skip the first two paragraphs if you get bored): Satisficing means selecting the first option that … Continue reading

Posted in CEV, effective altruism, ethics, FAI, Fundamentals, intelligence explosion, life, transhumanism | 3 Comments

The Thing that I Protect – (Moral) Truth in Fiction?

The Thing That I Protect But still – what is it, then, the thing that I protect? Friendly AI?  No – a thousand times no – a thousand times not anymore.  It’s not thinking of the AI that gives me … Continue reading

Posted in existential risks, FAI, Fun Theory, Lesswrong Zusammenfassungen

Fun Theory Conclusion: Post 31 – 34

(Mainly quotes. I didn’t want to comment all that much because I discussed all of these issues before.) 31. Higher Purpose In today’s world, most of the highest-priority legitimate Causes are about large groups of people in extreme jeopardy.  (Wide … Continue reading

Posted in existential risks, FAI, Fun Theory, Fundamentals, Lesswrong Zusammenfassungen

Fun Theory: Post 24 – 30

24. Building Weirdtopia Yudkowsky invites the readers to write comments describing possible “weirdtopian” futures, i.e. worlds that are neither utopian nor dystopian, but pretty strange. 25. Justified Expectation of Pleasant Surprises We humans need hope. To get up every morning, … Continue reading

Posted in FAI, Fun Theory, Lesswrong Zusammenfassungen

Fun Theory: Post 20 – 23

20. Emotional Involvement Can your emotions get involved in a video game?  Yes, but not much.  Whatever sympathetic echo of triumph you experience on destroying the Evil Empire in a video game, it’s probably not remotely close to the feeling … Continue reading

Posted in FAI, Fun Theory, Lesswrong Zusammenfassungen

Fun Theory: Post 14 – 19

14. Amputation of Destiny Yudkowsky describes a book by Iain Banks about a society, called the Culture, that consists of happy, intelligent, long-lived humans, low-grade transhumanists, so to speak. But everything is controlled by Minds, superintelligent AIs. Yudkowsky … Continue reading

Posted in ethics, FAI, Fun Theory, Joy in the merely Real, Lesswrong Zusammenfassungen

Fun Theory: Post 11 – 13

11. Nonperson Predicates There is a subproblem of Friendly AI which is so scary that I usually don’t talk about it… …This is the problem that if you create an AI and tell it to model the world around it, … Continue reading

Posted in CEV, ethics, FAI, Fun Theory, Lesswrong Zusammenfassungen, meta-ethics

Fun Theory: Post 3 – 10

3. Complex Novelty In the book “Permutation City” by Greg Egan (apparently Yudkowsky’s favorite sci-fi book), one of the main characters, Peer, modifies himself to find table-leg carving utterly fascinating and enjoyable. Yudkowsky is horrified by this vision and … Continue reading

Posted in FAI, Fun Theory, Joy in the merely Real, Lesswrong Zusammenfassungen

AI Foom Debate: Probability Estimates

[Epistemic note: This is an old post and so not necessarily accurate anymore.] I list some facts that need to be true in order for AI FOOM to be possible. I also add my estimates of how probable these statements … Continue reading

Posted in AGI, AI-Foom debate, existential risks, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Personal, singularity | 1 Comment

AI Foom Debate Conclusion: Post 50 – 52

50. What Core Argument? (Hanson) Hanson asks again for Yudkowsky’s core argument(s) and lists his objections. Firstly, it must be said that most AI researchers and growth economists consider Yudkowsky’s Foom scenario to be very unlikely. Which of course doesn’t mean much if … Continue reading

Posted in AGI, AI-Foom debate, CEV, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Lesswrong Zusammenfassungen, meta-ethics, singularity, singularity strategies