Category Archives: Inside View vs. Outside View

AI Foom Debate: Probability Estimates

[Epistemic note: This is an old post and so not necessarily accurate anymore.] I list some facts that need to be true in order for AI FOOM to be possible. I also add my estimates of how probable these statements … Continue reading

Posted in AGI, AI-Foom debate, existential risks, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Personal, singularity

AI Foom Debate Conclusion: Post 50 – 52

50. What Core Argument? (Hanson) Hanson asks again for Yudkowsky's core argument(s) and lists his objections. Firstly, it must be said that most AI researchers and growth economists consider Yudkowsky's Foom scenario to be very unlikely. Which of course doesn't mean much if … Continue reading

Posted in AGI, AI-Foom debate, CEV, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Lesswrong Zusammenfassungen, meta-ethics, singularity, singularity strategies

AI Foom Debate: Post 41 – 45

41. Shared AI Wins (Hanson) Hanson thinks Yudkowsky's theories about AI design are pipe dreams: The idea that you could create human-level intelligence by just feeding raw data into the right math-inspired architecture is pure fantasy. You couldn't build an effective … Continue reading

Posted in AGI, AI-Foom debate, CEV, existential risks, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Lesswrong Zusammenfassungen, singularity

AI Foom Debate: Post 23 – 28

23. Total Nano Domination (Yudkowsky) What happens when nanotechnology or WBE becomes possible? …the real point of this essay is to illustrate a point more important than nanotechnology: as optimizers become more self-swallowing, races between them are more unstable. Meaning, … Continue reading

Posted in AGI, AI-Foom debate, CEV, existential risks, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Lesswrong Zusammenfassungen, singularity, singularity strategies

AI Foom Debate: Post 7 – 10

7. The First World Takeover (Yudkowsky) A really beautiful post about the origin of life from an "optimization-process perspective". Before Robin and I move on to talking about the Future, it seems to me wise to check if we have disagreements … Continue reading

Posted in AGI, AI-Foom debate, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Lesswrong Zusammenfassungen, singularity

393. Surface Analogies and Deep Causes – 394. Optimization and the Singularity

393. Surface Analogies and Deep Causes Yudkowsky warns against the outside view. Just because things show some prima facie similarities does not mean they are alike in other respects. It is better to uncover the causal structure of the phenomena, … Continue reading

Posted in Inside View vs. Outside View, Lesswrong Zusammenfassungen

392. The Outside View’s Domain

392. The Outside View’s Domain When should one use the outside view, and when the inside view? The outside view seems well suited for correcting planning fallacies, for example. If a student says, for instance, that this time he will definitely hand in his homework on time … Continue reading

Posted in Inside View vs. Outside View, Lesswrong Zusammenfassungen