Category Archives: existential risks

The Thing that I Protect – (Moral) Truth in Fiction?

The Thing That I Protect But still – what is it, then, the thing that I protect? Friendly AI?  No – a thousand times no – a thousand times not anymore.  It’s not thinking of the AI that gives me … Continue reading

Posted in existential risks, FAI, Fun Theory, Lesswrong Zusammenfassungen | Leave a comment

Fun Theory Conclusion: Post 31 – 34

(Mainly quotes. I didn’t want to comment much because I’ve discussed all of these issues before.) 31. Higher Purpose In today’s world, most of the highest-priority legitimate Causes are about large groups of people in extreme jeopardy.  (Wide … Continue reading

Posted in existential risks, FAI, Fun Theory, Fundamentals, Lesswrong Zusammenfassungen | Leave a comment

AI Foom Debate: Probability Estimates

[Epistemic note: This is an old post, so it’s not necessarily accurate anymore.] I list some facts that need to be true in order for AI FOOM to be possible. I also add my estimates of how probable these statements … Continue reading

Posted in AGI, AI-Foom debate, existential risks, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Personal, singularity | 1 Comment

AI Foom Debate: Post 41 – 45

41. Shared AI Wins (Hanson) Hanson thinks Yudkowsky’s theories about AI design are pipe dreams: The idea that you could create human level intelligence by just feeding raw data into the right math-inspired architecture is pure fantasy.  You couldn’t build an effective … Continue reading

Posted in AGI, AI-Foom debate, CEV, existential risks, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Lesswrong Zusammenfassungen, singularity | Leave a comment

AI Foom Debate: Post 35 – 40

35. Underconstrained Abstractions (Yudkowsky) Yudkowsky replies to Hanson’s post “Test Near, Apply Far”. When possible, I try to talk in concepts that can be verified with respect to existing history. …But in my book this is just one trick in … Continue reading

Posted in AGI, AI-Foom debate, CEV, existential risks, FAI, Fundamentals, intelligence explosion, Lesswrong Zusammenfassungen, singularity | Leave a comment

AI Foom Debate: Post 32 – 34

32. Hard Takeoff (Yudkowsky) Natural selection produced roughly linear improvements in human brains. Unmodified human brains produced roughly exponential improvements in knowledge on the object level (bridges, planes, cars, etc ). So it’s unlikely that with the advent of recursively … Continue reading

Posted in AGI, AI-Foom debate, CEV, existential risks, FAI, Fundamentals, Lesswrong Zusammenfassungen, singularity | Leave a comment

AI Foom Debate: Post 29 – 31

29. I Heart CYC (Hanson) Hanson endorses CYC, an AI-project headed by Doug Lenat, the inventor of EURISKO. The lesson Lenat took from EURISKO is that architecture is overrated;  AIs learn slowly now mainly because they know so little.  So … Continue reading

Posted in AGI, AI-Foom debate, existential risks, FAI, Lesswrong Zusammenfassungen, singularity | Leave a comment
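The growth-rate contrast in the “Hard Takeoff” excerpt above is easy to see with a toy simulation. The sketch below is my own illustration, not anything from the debate posts, and all constants are arbitrary: a fixed optimizer adding constant gains grows linearly, a fixed optimizer whose output compounds grows exponentially, and an optimizer whose output feeds back into its own optimization power grows faster than exponentially.

```python
# Toy sketch (my own illustration, not from the debate posts; all numbers are
# arbitrary): compare the three growth regimes the hard-takeoff argument contrasts.

def natural_selection(steps, gain=1.0):
    """Fixed optimization pressure adding a constant increment per step: linear growth."""
    capability = 1.0
    for _ in range(steps):
        capability += gain
    return capability

def fixed_brains(steps, rate=0.1):
    """A fixed optimizer working on a knowledge base that compounds: exponential growth."""
    knowledge = 1.0
    for _ in range(steps):
        knowledge *= 1.0 + rate
    return knowledge

def recursive_self_improvement(steps, rate=0.1):
    """An optimizer whose output feeds back into its own optimization power:
    the growth rate itself grows, so the trajectory is faster than exponential."""
    power = 1.0
    for _ in range(steps):
        power *= 1.0 + rate * power
    return power

if __name__ == "__main__":
    print("steps  linear  exponential  recursive")
    for steps in (5, 10, 15, 20):
        print(f"{steps:>5}  {natural_selection(steps):>6.1f}  "
              f"{fixed_brains(steps):>11.1f}  {recursive_self_improvement(steps):>9.3g}")
```

Running it shows the recursive regime staying unremarkable for a while and then leaving the other two far behind, which is the qualitative shape of the hard-takeoff claim.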

AI Foom Debate: Post 23 – 28

23. Total Nano Domination (Yudkowsky) What happens when nanotechnology or WBE become possible? …the real point of this essay is to illustrate a point more important than nanotechnology: as optimizers become more self-swallowing, races between them are more unstable. Meaning, … Continue reading

Posted in AGI, AI-Foom debate, CEV, existential risks, FAI, Fundamentals, Inside View vs. Outside View, intelligence explosion, Lesswrong Zusammenfassungen, singularity, singularity strategies | Leave a comment

AI Foom Debate: Post 14 – 19

14. Brain Emulation and Hard Takeoff (Carl Shulman) Argues for the possibility of an intelligence explosion inititiated by a billion-dollar-ems-project. 15. Billion Dollar Bots (James Miller) Another scenario of billion-dollar WBE-projects. The problem with all those great Manhattan-style em-projects is, … Continue reading

Posted in AI-Foom debate, existential risks, intelligence explosion, Lesswrong Zusammenfassungen, singularity | Leave a comment

AI-Foom Debate: Post 1 – 6

This is one of the most important Sequences for me, I’ll depart from the usual format. Prologue 1. Fund UberTool? (Robin Hanson) Hanson offers a nice analogy of a recursively self-improving AI in economic terms, but he doesn’t really argue … Continue reading

Posted in AGI, AI-Foom debate, existential risks, intelligence explosion, Lesswrong Zusammenfassungen, singularity | Leave a comment