Category Archives: Fundamentals

Thoughts on Happiness (2) [Happiness Sequence, Part 3]

[Previously: Happy by Habit, Thoughts on Happiness (1)] 9. Seeing the positive Stupid and/or irrational people can really annoy me. Someone just has to say that “evolutionary psychology is biologistic” and my day is ruined. The fact is that the irrationality, overconfidence and … Continue reading

Posted in existentialism, Free Will, Fundamentals, Happiness, Joy in the merely Real, life, Many Worlds, Meditation, Multiverse, Personal, Philosophy, psychology, Psychotherapy | 1 Comment

Thoughts on Happiness (1) [Happiness Sequence, Part 2]

[Previously: Happy by Habit] This is a collection of thoughts on how to become happier. The first 2 parts are mostly focused on cognitive habits that I’ve found useful. That means I’m not talking about obvious stuff like regular exercise, good … Continue reading

Posted in CEV, existentialism, Fundamentals, Happiness, Joy in the merely Real, life, Many Worlds, Multiverse, Personal, Philosophy, Psychotherapy, rationality, singularity, whiny existentialism | 5 Comments

Nietzsche, Eternal Return and Loving the Multiverse

Many of you will probably think: “Come on, Nietzsche?!” I know, I know. But I’m on holiday, and a pretty smart, rational person recommended Nietzsche to me as a way to overcome my existential angst. I won’t bore you with the … Continue reading

Posted in Continental philosophy, existentialism, Fundamentals, Joy in the merely Real, life, Multiverse, Personal, Philosophy | 6 Comments

In Praise of Maximizing – With Some Caveats

Most of you are probably familiar with the two contrasting decision-making strategies “maximizing” and “satisficing”, but a short recap won’t hurt (you can skip the first two paragraphs if you get bored): Satisficing means selecting the first option that … Continue reading
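The excerpt is cut off here, but the distinction is easy to make concrete. Below is a minimal sketch of the two strategies in Python; the options, score function, and threshold are hypothetical placeholders, not anything taken from the post itself.

    # A satisficer takes the first option that clears a fixed threshold;
    # a maximizer inspects every option and takes the best one.

    def satisfice(options, score, threshold):
        """Return the first option whose score meets the threshold."""
        for option in options:
            if score(option) >= threshold:
                return option
        return None  # nothing was "good enough"

    def maximize(options, score):
        """Scan all options and return the highest-scoring one."""
        return max(options, key=score)

    # Example: choosing a restaurant rated on a 1-10 scale.
    ratings = {"Thai place": 6, "Sushi bar": 9, "Pizzeria": 7}
    options = list(ratings)

    print(satisfice(options, ratings.get, threshold=5))  # Thai place (first acceptable)
    print(maximize(options, ratings.get))                # Sushi bar (global best)

Note the built-in trade-off: the satisficer can stop after one lookup, while the maximizer always pays for a full scan of the options.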

Posted in CEV, effective altruism, ethics, FAI, Fundamentals, intelligence explosion, life, transhumanism | 3 Comments

The Craft and the Community: Post 11 – 15

11. Church vs. Taskforce Yudkowsky elaborates on the last post and advocates the establishment of rationalist task-forces and communities. (By task-forces he means small, perhaps short-lived communities created for a specific purpose.) Yeah, a genuine rational … Continue reading

Posted in Community Building, effective altruism, ethics, Fundamentals, Lesswrong Summaries | Leave a comment

The Craft and the Community: Post 1 – 2

1. Raising the Sanity Waterline Even if we eliminated harmful and insane practices and beliefs such as drug-criminalization or the Blank Slate Dogma (Yudkowsky bashes religion; I, however, really love to bash those two, as you may have noticed by … Continue reading

Posted in ethics, Fundamentals, Lesswrong Summaries | 2 Comments

Fun Theory Conclusion: Post 31 – 34

(Mainly quotes. I didn’t want to comment all that much because I discussed all of these issues before.) 31. Higher Purpose In today’s world, most of the highest-priority legitimate Causes are about large groups of people in extreme jeopardy. (Wide … Continue reading

Posted in existential risks, FAI, Fun Theory, Fundamentals, Lesswrong Summaries | Leave a comment

Fun Theory: Post 1 – 2

1. Prolegomena to a Theory of Fun Raise the topic of cryonics, uploading, or just medically extended lifespan/healthspan, and some bioconservative neo-Luddite is bound to ask, in portentous tones: “But what will people do all day?” They don’t try to … Continue reading

Posted in Fun Theory, Fundamentals, Joy in the merely Real, Lesswrong Summaries, meta-ethics | 2 Comments

AI Foom Debate: Probability Estimates

[Epistemic note: This is an old post and may no longer be accurate.] I list some claims that would need to be true for AI FOOM to be possible. I also add my estimates of how probable these statements … Continue reading
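The post itself is truncated here, but the basic arithmetic behind this kind of estimate is worth spelling out: if FOOM requires several claims to hold jointly, and the claims are treated as independent (a strong simplifying assumption — the post’s actual combination rule isn’t visible in this excerpt), the joint probability is the product of the individual estimates. A minimal sketch, with placeholder claims and made-up numbers rather than the post’s actual list:

    # Placeholder conjunctive claims with made-up probability estimates.
    claims = {
        "recursive self-improvement is possible": 0.8,
        "takeoff is fast rather than gradual": 0.4,
        "a single project pulls far ahead of the rest": 0.5,
    }

    p_foom = 1.0
    for claim, p in claims.items():
        print(f"{claim}: {p}")
        p_foom *= p  # independence assumed; correlated claims would change this

    print(f"joint probability under independence: {p_foom:.2f}")  # 0.16

The point of the exercise: even moderately confident estimates on each conjunct can multiply out to a small joint probability.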

Posted in AGI, AI-Foom debate, existential risks, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Personal, singularity | 1 Comment

AI Foom Debate Conclusion: Post 50 – 52

50. What Core Argument? (Hanson) Hanson asks again for Yudkowsky’s core argument(s) and lists his objections. Firstly, it must be said that most AI researchers and growth economists consider Yudkowsky’s Foom-scenario to be very unlikely. Which of course doesn’t mean much if … Continue reading

Posted in AGI, AI-Foom debate, CEV, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Lesswrong Summaries, meta-ethics, singularity, singularity strategies | Leave a comment