Category Archives: CEV

Thoughts on Happiness (1) [Happiness Sequence, Part 2]

[Previously: Happy by Habit] This is a collection of thoughts on how to become happier. The first 2 parts are mostly focused on cognitive habits that I’ve found useful. That means I’m not talking about obvious stuff like regular exercise, good … Continue reading

Posted in CEV, existentialism, Fundamentals, Happiness, Joy in the merely Real, life, Many Worlds, Multiverse, Personal, Philosophy, Psychotherapy, rationality, singularity, whiny existentialism | 5 Comments

In Praise of Maximizing – With Some Caveats

Most of you are probably familiar with the two contrasting decision-making strategies “maximizing” and “satisficing”, but a short recap won’t hurt (you can skip the first two paragraphs if you get bored): Satisficing means selecting the first option that … Continue reading
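The contrast is easy to state procedurally. As a minimal, hypothetical sketch (the scoring function, threshold, and restaurant data below are invented for illustration and are not from the post): a satisficer stops at the first option that clears a preset bar, while a maximizer scores everything and keeps the best.

```python
def satisfice(options, score, threshold):
    """Return the first option whose score clears the threshold ("good enough")."""
    for option in options:
        if score(option) >= threshold:
            return option
    return None  # nothing cleared the bar

def maximize(options, score):
    """Score every option and return the best one."""
    return max(options, key=score)

# Invented example data: restaurant ratings.
ratings = {"A": 4.1, "B": 3.6, "C": 4.8}
print(satisfice(ratings, ratings.get, threshold=4.0))  # "A" -- first acceptable option
print(maximize(ratings, ratings.get))                  # "C" -- best option overall
```

The sketch deliberately ignores the cost of evaluating options, which is presumably where the post’s caveats come in.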

Posted in CEV, effective altruism, ethics, FAI, Fundamentals, intelligence explosion, life, transhumanism | 3 Comments

3. Epistemic Viciousness – 6. Tolerate Tolerance

3. Epistemic Viciousness Many martial arts gurus would totally lose in real fights. Some of the reasons why this can happen: The art, the dojo, and the sensei are seen as sacred. “Having red toe-nails in the dojo is like … Continue reading

Posted in CEV, ethics, Lesswrong Summaries, Personal | 12 Comments

Fun Theory: Post 11 – 13

11. Nonperson Predicates There is a subproblem of Friendly AI which is so scary that I usually don’t talk about it… …This is the problem that if you create an AI and tell it to model the world around it, … Continue reading

Posted in CEV, ethics, FAI, Fun Theory, Lesswrong Summaries, meta-ethics | Leave a comment

AI Foom Debate Conclusion: Post 50 – 52

50. What Core Argument? (Hanson) Hanson asks again for Yudkowsky’s core argument(s) and lists his objections. Firstly, it must be said that most AI researchers and growth economists consider Yudkowsky’s Foom scenario to be very unlikely. Which of course doesn’t mean much if … Continue reading

Posted in AGI, AI-Foom debate, CEV, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Lesswrong Summaries, meta-ethics, singularity, singularity strategies | Leave a comment

AI Foom Debate: Post 46 – 49

46. Disjunctions, Antipredictions, Etc. (Yudkowsky) First, a good illustration of the conjunction bias by Robyn Dawes: “In their summations lawyers avoid arguing from disjunctions in favor of conjunctions.  (There are not many closing arguments that end, “Either the defendant was … Continue reading
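For concreteness, here is a hypothetical back-of-the-envelope check (numbers invented, not from the post or from Dawes) of why a conjunction can sound more compelling to a jury while being strictly less probable than either of its parts alone.

```python
# Invented probabilities for two generic claims A and B about a case.
p_a = 0.30            # P(A)
p_b_given_a = 0.50    # P(B | A)

p_a_and_b = p_a * p_b_given_a   # P(A and B) = 0.15
assert p_a_and_b <= p_a          # a conjunction can never be more probable than A alone

print(p_a_and_b)  # 0.15 -- the richer, more detailed story is the less probable one
```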

Posted in AGI, AI-Foom debate, CEV, FAI, Fundamentals, intelligence explosion, Lesswrong Summaries, singularity | Leave a comment

AI Foom Debate: Post 41 – 45

41. Shared AI Wins (Hanson) Hanson thinks Yudkowsky’s theories about AI design are pipe dreams: The idea that you could create human-level intelligence by just feeding raw data into the right math-inspired architecture is pure fantasy. You couldn’t build an effective … Continue reading

Posted in AGI, AI-Foom debate, CEV, existential risks, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Lesswrong Summaries, singularity | Leave a comment

AI Foom Debate: Post 35 – 40

35. Underconstrained Abstractions (Yudkowsky) Yudkowsky replies to Hanson’s post “Test Near, Apply Far”. When possible, I try to talk in concepts that can be verified with respect to existing history. …But in my book this is just one trick in … Continue reading

Posted in AGI, AI-Foom debate, CEV, existential risks, FAI, Fundamentals, intelligence explosion, Lesswrong Summaries, singularity | Leave a comment

AI Foom Debate: Post 32 – 34

32. Hard Takeoff (Yudkowsky) Natural selection produced roughly linear improvements in human brains. Unmodified human brains produced roughly exponential improvements in knowledge on the object level (bridges, planes, cars, etc.). So it’s unlikely that with the advent of recursively … Continue reading
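To make the contrast in the excerpt concrete, here is a hypothetical toy model (constants and step counts are arbitrary; this is only a sketch of the three growth regimes being compared, not Yudkowsky’s actual model): a fixed optimizer adds capability at a constant rate, fixed brains compound knowledge at a constant rate, and an optimizer that improves itself sees its growth rate itself keep rising.

```python
# Toy comparison of three growth regimes (all numbers arbitrary).
linear = exponential = recursive = 1.0
optimizer = 1.0  # stands in for the optimization power applied each step

for _ in range(100):
    linear += 1.0                          # fixed optimizer: additive gains (natural selection)
    exponential *= 1.05                    # fixed brains compounding object-level knowledge
    optimizer *= 1.05                      # a self-improving optimizer grows too...
    recursive *= 1.0 + 0.05 * optimizer    # ...so its growth rate itself keeps increasing

print(round(linear), round(exponential), f"{recursive:.3g}")
# roughly: 101, 132, and a number many orders of magnitude larger
```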

Posted in AGI, AI-Foom debate, CEV, existential risks, FAI, Fundamentals, Lesswrong Summaries, singularity | Leave a comment

AI Foom Debate: Post 23 – 28

23. Total Nano Domination (Yudkowsky) What happens when nanotechnology or WBE (whole brain emulation) becomes possible? …the real point of this essay is to illustrate a point more important than nanotechnology: as optimizers become more self-swallowing, races between them are more unstable. Meaning, … Continue reading

Posted in AGI, AI-Foom debate, CEV, existential risks, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Lesswrong Summaries, singularity, singularity strategies | Leave a comment