Category Archives: singularity strategies

AI Foom Debate Conclusion: Posts 50–52

50. What Core Argument? (Hanson) Hanson asks again for Yudkowsky’s core argument(s) and lists his objections. First, it must be said that most AI researchers and growth economists consider Yudkowsky’s Foom scenario to be very unlikely. Which of course doesn’t mean much if … Continue reading

Posted in AGI, AI-Foom debate, CEV, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Lesswrong Summaries, meta-ethics, singularity, singularity strategies

AI Foom Debate: Posts 23–28

23. Total Nano Domination (Yudkowsky) What happens when nanotechnology or whole brain emulation (WBE) becomes possible? …the real point of this essay is to illustrate a point more important than nanotechnology: as optimizers become more self-swallowing, races between them are more unstable. Meaning, … Continue reading

Posted in AGI, AI-Foom debate, CEV, existential risks, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Lesswrong Summaries, singularity, singularity strategies

496. The Magnitude of His Own Folly

(An interesting discussion about the likelihood of FAI succeeding.) 496. The Magnitude of His Own Folly Yudkowsky finally had to admit that he could have destroyed the world by building an uFAI. But even that would be too charitable, because … Continue reading

Posted in CEV, FAI, Lesswrong Summaries, singularity strategies

483. A Prodigy of Refutation; 484. A Sheer Folly of Callow Youth

483. A Prodigy of Refutation In his reckless youth, Yudkowsky made the same mistakes as everyone else when thinking about the FAI problem. So Eliezer1996 is out to build superintelligence, for the good of humanity and all sentient life. At first, … Continue reading

Posted in CEV, FAI, Fundamentals, Lesswrong Summaries, meta-ethics, singularity strategies