AI Foom Debate: Posts 23 – 28

23. Total Nano Domination (Yudkowsky)

What happens when nanotechnology or whole brain emulation (WBE) becomes possible?

…the real point of this essay is to illustrate a point more important than nanotechnology: as optimizers become more self-swallowing, races between them are more unstable.

In other words: with full-fledged nanotechnology you wouldn’t need a supply chain anymore; you could produce every piece of hardware you need yourself. And with WBE the whole thing becomes even more unstable, because you would no longer have to wait for new ideas or for progress in software and science. If the uploads of one company or country run faster than everybody else’s, they will have enough time to make astonishing scientific breakthroughs and could pull ahead very quickly, since their advantages would accumulate exponentially without relevant information leakage.
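This compounding dynamic can be made concrete with a toy calculation (my own illustrative sketch with made-up numbers, not a model from the debate): two isolated groups of uploads make the same research progress per subjective year, but one group’s hardware runs ten times faster, so it experiences more subjective years per calendar year and its lead grows exponentially in calendar time.

```python
# Toy model (not from the debate): two isolated groups of uploads compound
# research progress at the same rate per subjective year, but group A's
# hardware runs `speedup` times faster.  With no information leakage, the
# capability ratio A/B grows exponentially in calendar time.

def capability(growth_per_subjective_year, speedup, calendar_years):
    """Capability after `calendar_years`, starting from 1.0."""
    subjective_years = speedup * calendar_years
    return (1.0 + growth_per_subjective_year) ** subjective_years

g = 0.05  # 5% progress per subjective year -- purely illustrative
a = capability(g, speedup=10, calendar_years=10)  # runs 10x faster
b = capability(g, speedup=1, calendar_years=10)
print(f"A: {a:.2f}  B: {b:.2f}  ratio: {a/b:.1f}x")
# -> A: 131.50  B: 1.63  ratio: 80.7x
```

With these arbitrary numbers, a single decade leaves the faster group roughly eighty times more capable, and the gap itself widens every further year — which is exactly the instability the post is pointing at.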

24. Dreams of Autarky (Hanson)

We overestimate our autarky and underestimate our dependence on others. Many folks, for example, want to restrict trade with other countries, not realizing that this would have largely negative consequences. This bias influences a lot of futuristic thinking about a host of other topics, like space colonies, AI, nanotech, etc.

[Here is] an important common bias on “our” side, i.e., among those who expect specific very large changes. … Futurists tend to expect an unrealistic degree of autarky, or independence, within future technological and social systems.  The cells in our bodies are largely-autonomous devices and manufacturing plants, producing most of what they need internally. … Small tribes themselves were quite autonomous. … Most people are not very aware of, and so have not fully come to terms with their new inter-dependence.  For example, people are surprisingly willing to restrict trade between nations, not realizing how much their wealth depends on such trade. … Futurists commonly neglect this interdependence … they picture their future political and economic unit to be the largely self-sufficient small tribe of our evolutionary heritage.

Yudkowsky’s vision of a highly self-sufficient superintelligent AI, and his scenario for creating it (“nine people and a brain in a box in a basement”), are plausibly influenced by exactly this bias.

25. Total Tech Wars (Hanson)

Hanson complains about the doom-mongering of Yudkowsky and Shulman. Admittedly, new technologies often bring social disruption. But framing a potential conflict as a total war is a self-fulfilling prophecy.

These two views can be combined in total tech wars.  The pursuit of some particular tech can be framed as a crucial battle in our war with them; we must not share any of this tech with them, nor tolerate much internal conflict about how to proceed. We must race to get the tech first and retain dominance.

…Yes, you can frame big techs as total tech wars, but surely it is far better that tech transitions not be framed as total wars. The vast majority of conflicts in our society take place within systems of peace and property, where local winners only rarely hurt others much by spending their gains.

…Yes, we must be open to evidence that other powerful groups will treat new techs as total wars.  But we must avoid creating a total war by sloppy discussion of it as a possibility.

Great comment by Yudkowsky:

“I generally refer to this scenario as “winner take all” and had planned a future post with that title.

I’d never have dreamed of calling it a “total tech war” because that sounds much too combative, a phrase that might spark violence even in the near term. It also doesn’t sound accurate, because a winner-take-all scenario doesn’t imply destructive combat or any sort of military conflict.

I moreover defy you to look over my writings, and find any case where I ever used a phrase as inflammatory as “total tech war”.

I think that in this conversation, and in the debate as you have just now framed it, “tu quoque!” is actually justified here.

Anyway – as best as I can tell, the natural landscape of these technologies, which introduces disruptions much larger than farming or the Internet, is without special effort winner-take-all. It’s not a question of ending up in that scenario by making special errors. We’re just there. Getting out of it would imply special difficulty, not getting into it, and I’m not sure that’s possible. — such would be the stance I would try to support.


If you try to look at it from my perspective, then you can see that I’ve gone to tremendous lengths to defuse both the reality and the appearance of conflict between altruistic humans over which AI should be built. “Coherent Extrapolated Volition” is extremely meta; if all competent and altruistic Friendly AI projects think this meta, they are far more likely to find themselves able to cooperate than if one project says “Libertarianism!” and another says “Social democracy!” ” (Emphasis mine)

See especially the last paragraph: Yudkowsky all but admits that CEV is, at least to some degree, politically motivated.

26. Singletons Rule OK (Yudkowsky)

So for me, any satisfactory outcome seems to necessarily involve, if not a singleton, the existence of certain stable global properties upon the future – sufficient to prevent burning the cosmic commons, prevent life’s degeneration into rapacious hardscrapple frontier replication, and prevent supersadists torturing septillions of helpless dolls in private, obscure star systems.

Robin has written about burning the cosmic commons and rapacious hardscrapple frontier existences.  This doesn’t imply that Robin approves of these outcomes.  But Robin’s strong rejection even of winner-take-all language and concepts, seems to suggest that our emotional commitments are something like 180 degrees opposed.  Robin seems to feel the same way about singletons as I feel about Non-singletons.

Yeah, Hanson seems like a fanatical libertarian whose most fundamental values are capitalism, competition and the destruction of all governments. I find it really bizarre that he is afraid of an FAI-singleton but perfectly fine with the “eviction” of millions of ems, the burning of the cosmic commons and rapacious hardscrapple frontier existences.

27. Stuck In Throat (Hanson)

Hanson summarizes Yudkowsky’s core argument:

Sometime in the next few decades a human-level AI will probably be made by having a stupid AI make itself smarter.  Such a process starts very slow and quiet, but eventually “fooms” very fast and then loud. It is likely to go from much stupider to much smarter than humans in less than a week.  While stupid, it can be rather invisible to the world.  Once smart, it can suddenly and without warning take over the world.

The reason an AI can foom so much faster than its society is that an AI can change its basic mental architecture, and humans can’t.  How long any one AI takes to do this depends crucially on its initial architecture.  Current architectures are so bad that an AI starting with them would take an eternity to foom.  Success will come from hard math-like (and Bayes-net-like) thinking that produces deep insights giving much better architectures.

A much smarter than human AI is basically impossible to contain or control; if it wants to it will take over the world, and then it will achieve whatever ends it has.  One should have little confidence that one knows what those ends are from its behavior as a much less than human AI (e.g., as part of some evolutionary competition).  Unless you have carefully proven that it wants what you think it wants, you have no idea what it wants.

In such a situation, if one cannot prevent AI attempts by all others, then the only reasonable strategy is to try to be the first with a “friendly” AI, i.e., one where you really do know what it wants, and where what it wants is something carefully chosen to be as reasonable as possible.
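The “starts very slow and quiet, but eventually ‘fooms’ very fast” trajectory in this summary can be caricatured with a toy model (my own sketch with made-up parameters, not anything Hanson or Yudkowsky wrote down): if the rate of self-improvement grows superlinearly with current capability, the curve is nearly flat for a long stretch and then diverges almost all at once.

```python
# Toy caricature of "foom": capability I improves at rate dI/dt = k * I^2,
# i.e. smarter systems get better at making themselves smarter.  The exact
# solution I(t) = i0 / (1 - k*i0*t) blows up at t* = 1/(k*i0); almost all
# of the growth is crammed into the very end of that interval.

def simulate_foom(i0=0.001, k=1.0, dt=0.01, cap=1e6):
    """Euler-integrate dI/dt = k*I*I, recording (t, I) until I exceeds cap."""
    i, t = i0, 0.0
    history = []
    while i < cap:
        history.append((t, i))
        i += k * i * i * dt  # self-improvement rate scales with I^2
        t += dt
    return history

hist = simulate_foom()
t_half, i_half = hist[len(hist) // 2]
print(f"halfway through the run: t={t_half:.0f}, capability={i_half:.4f}")
```

The growth here is hyperbolic rather than exponential: halfway through the run the system has barely doubled its starting capability, and the jump from “much stupider” to “much smarter” is compressed into the final few steps — the “less than a week” intuition in miniature.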

28. Disappointment in the Future (Yudkowsky)

A side post that lists Ray Kurzweil’s predictions. All in all, Kurzweil is way too optimistic, but sometimes he does quite well.

