Category Archives: intelligence explosion

In Praise of Maximizing – With Some Caveats

Most of you are probably familiar with the two contrasting decision-making strategies “maximizing” and “satisficing”, but a short recap won’t hurt (you can skip the first two paragraphs if you get bored): Satisficing means selecting the first option that … Continue reading
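
As a concrete illustration of the contrast (my own sketch, not code from the post; the options, scoring function, and threshold are invented placeholders):

    # Toy contrast between the two strategies (illustrative only; the
    # options, scores, and threshold are invented, not from the post).

    def satisfice(options, score, threshold):
        """Return the first option whose score clears the threshold."""
        for option in options:
            if score(option) >= threshold:
                return option
        return None  # nothing was "good enough"

    def maximize(options, score):
        """Examine every option and return the best-scoring one."""
        return max(options, key=score)

    options = [3, 7, 5, 9, 6]
    print(satisfice(options, score=lambda x: x, threshold=5))  # 7: first acceptable option
    print(maximize(options, score=lambda x: x))                # 9: global best

Satisficing stops at the first acceptable option and therefore depends on the ordering of the options; maximizing pays the full search cost in exchange for a guarantee of the best one.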

Posted in CEV, effective altruism, ethics, FAI, Fundamentals, intelligence explosion, life, transhumanism

AI Foom Debate: Probability Estimates

[Epistemic note: This is an old post and so not necessarily accurate anymore.] I list some facts that need to be true in order for AI FOOM to be possible. I also add my estimates of how probable these statements … Continue reading
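
For the mechanics: if FOOM requires all of the listed statements to be true, and the estimates are treated as independent, the probability of the whole scenario is the product of the individual estimates. A minimal sketch with invented placeholder numbers (the post’s actual statements and figures differ):

    # Invented placeholder estimates; the post's actual statements and
    # numbers differ. Treating the requirements as independent, the
    # probability of the whole scenario is the product of the parts.
    estimates = {
        "requirement A holds": 0.8,
        "requirement B holds": 0.5,
        "requirement C holds": 0.4,
    }

    p_foom = 1.0
    for claim, p in estimates.items():
        p_foom *= p

    print(f"joint probability: {p_foom:.2f}")  # 0.16

Note how quickly the product falls as requirements accumulate, which is why conjunctive scenarios tend to receive low probabilities.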

Posted in AGI, AI-Foom debate, existential risks, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Personal, singularity

AI Foom Debate Conclusion: Post 50 – 52

50. What Core Argument? (Hanson) Hanson asks again for Yudkowsky’s core argument(s) and lists his objections. Firstly, it must be said that most AI researchers and growth economists consider Yudkowsky’s Foom scenario to be very unlikely, which of course doesn’t mean much if … Continue reading

Posted in AGI, AI-Foom debate, CEV, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Lesswrong Zusammenfassungen, meta-ethics, singularity, singularity strategies

AI Foom Debate: Post 46 – 49

46. Disjunctions, Antipredictions, Etc. (Yudkowsky) First, a good illustration of the conjunction bias by Robyn Dawes: “In their summations lawyers avoid arguing from disjunctions in favor of conjunctions.  (There are not many closing arguments that end, “Either the defendant was … Continue reading
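
The probabilistic fact behind the bias: a conjunction can never be more probable than its least probable conjunct, while a disjunction can never be less probable than its most probable disjunct. A quick numeric check with invented numbers, assuming independence:

    # P(A and B) <= min(P(A), P(B)), while P(A or B) >= max(P(A), P(B)).
    # The numbers are invented for illustration; independence is assumed.
    p_a, p_b = 0.7, 0.6

    p_and = p_a * p_b         # 0.42, below either conjunct
    p_or = p_a + p_b - p_and  # 0.88, above either disjunct

    print(p_and, p_or)

Adding detail to a story makes it feel more plausible while necessarily making it less probable, which is why arguing from disjunctions is the safer move.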

Posted in AGI, AI-Foom debate, CEV, FAI, Fundamentals, intelligence explosion, Lesswrong Zusammenfassungen, singularity

AI Foom Debate: Post 41 – 45

41. Shared AI Wins (Hanson) Hanson thinks Yudkowsky’s theories about AI design are pipe dreams: The idea that you could create human-level intelligence by just feeding raw data into the right math-inspired architecture is pure fantasy. You couldn’t build an effective … Continue reading

Posted in AGI, AI-Foom debate, CEV, existential risks, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Lesswrong Zusammenfassungen, singularity

AI Foom Debate: Post 35 – 40

35. Underconstrained Abstractions (Yudkowsky) Yudkowsky replies to Hanson’s post “Test Near, Apply Far”. When possible, I try to talk in concepts that can be verified with respect to existing history. …But in my book this is just one trick in … Continue reading

Posted in AGI, AI-Foom debate, CEV, existential risks, FAI, Fundamentals, intelligence explosion, Lesswrong Zusammenfassungen, singularity

AI Foom Debate: Post 23 – 28

23. Total Nano Domination (Yudkowsky) What happens when nanotechnology or whole brain emulation (WBE) becomes possible? …the real point of this essay is to illustrate a point more important than nanotechnology: as optimizers become more self-swallowing, races between them are more unstable. Meaning, … Continue reading

Posted in AGI, AI-Foom debate, CEV, existential risks, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Lesswrong Zusammenfassungen, singularity, singularity strategies

AI Foom Debate: Post 20 – 22

20. …Recursion, Magic (Yudkowsky) Recursion is probably the most difficult part of this topic.  We have historical records aplenty of cascades, even if untangling the causality is difficult.  Cycles of reinvestment are the heartbeat of the modern economy.  An insight … Continue reading
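
As a toy picture of such reinvestment cycles (my own sketch, not a model from the debate): with a fixed reinvestment rate, capability compounds exponentially; if each cycle also improves the rate itself, the recursive case, growth accelerates beyond that:

    # Toy reinvestment cascade (my own illustration, not a model from the
    # debate). With a fixed rate, capability compounds exponentially; the
    # extra line updating the rate is the "recursive" ingredient, where
    # the improvement process itself gets improved.
    capability, rate = 1.0, 0.1

    for cycle in range(10):
        gain = rate * capability
        capability += gain    # the cascade: output reinvested as more capability
        rate += 0.01 * gain   # the recursion: reinvestment improves the rate itself
        print(f"cycle {cycle}: capability={capability:.3f}, rate={rate:.3f}")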

Posted in AGI, AI-Foom debate, FAI, intelligence explosion, Lesswrong Zusammenfassungen, singularity

AI Foom Debate: Post 14 – 19

14. Brain Emulation and Hard Takeoff (Carl Shulman) Argues for the possibility of an intelligence explosion initiated by a billion-dollar em project. 15. Billion Dollar Bots (James Miller) Another scenario involving billion-dollar WBE projects. The problem with all those great Manhattan-style em projects is … Continue reading

Posted in AI-Foom debate, existential risks, intelligence explosion, Lesswrong Zusammenfassungen, singularity

AI Foom Debate: Post 7 – 10

7. The First World Takeover (Yudkowsky) A really beautiful post about the origin of life from an “optimization-process perspective”. Before Robin and I move on to talking about the Future, it seems to me wise to check if we have disagreements … Continue reading

Posted in AGI, AI-Foom debate, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Lesswrong Zusammenfassungen, singularity