Category Archives: intelligence explosion
In Praise of Maximizing – With Some Caveats
Most of you are probably familiar with the two contrasting decision-making strategies “maximizing” and “satisficing”, but a short recap won’t hurt (you can skip the first two paragraphs if you get bored): Satisficing means selecting the first option that … Continue reading
AI Foom Debate: Probability Estimates
[Epistemic note: This is an old post and so not necessarily accurate anymore.] I list some facts that need to be true in order for AI FOOM to be possible. I also add my estimates of how probable these statements … Continue reading
AI Foom Debate: Post 46 – 49
46. Disjunctions, Antipredictions, Etc. (Yudkowsky) First, a good illustration of the conjunction bias by Robyn Dawes: “In their summations lawyers avoid arguing from disjunctions in favor of conjunctions. (There are not many closing arguments that end, “Either the defendant was … Continue reading
AI Foom Debate: Post 41 – 45
41. Shared AI Wins (Hanson) Hanson thinks Yudkowsky’s theories about AI design are pipe dreams: The idea that you could create human-level intelligence by just feeding raw data into the right math-inspired architecture is pure fantasy. You couldn’t build an effective … Continue reading
AI Foom Debate: Post 35 – 40
35. Underconstrained Abstractions (Yudkowsky) Yudkowsky replies to Hanson’s post “Test Near, Apply Far”. When possible, I try to talk in concepts that can be verified with respect to existing history. …But in my book this is just one trick in … Continue reading
AI Foom Debate: Post 23 – 28
23. Total Nano Domination (Yudkowsky) What happens when nanotechnology or WBE become possible? …the real point of this essay is to illustrate a point more important than nanotechnology: as optimizers become more self-swallowing, races between them are more unstable. Meaning, … Continue reading
AI Foom Debate: Post 20 – 22
20. …Recursion, Magic (Yudkowsky) Recursion is probably the most difficult part of this topic. We have historical records aplenty of cascades, even if untangling the causality is difficult. Cycles of reinvestment are the heartbeat of the modern economy. An insight … Continue reading
AI Foom Debate: Post 14 – 19
14. Brain Emulation and Hard Takeoff (Carl Shulman) Argues for the possibility of an intelligence explosion initiated by a billion-dollar em project. 15. Billion Dollar Bots (James Miller) Another scenario involving billion-dollar WBE projects. The problem with all those great Manhattan-style em projects is, … Continue reading
AI Foom Debate: Post 7 – 10
7. The First World Takeover (Yudkowsky) A really beautiful post about the origin of life from an “optimization-process perspective”. Before Robin and I move on to talking about the Future, it seems to me wise to check if we have disagreements … Continue reading