AI Foom Debate: Posts 32 – 34

32. Hard Takeoff (Yudkowsky)

Natural selection produced roughly linear improvements in human brains. Unmodified human brains produced roughly exponential improvements in knowledge on the object level (bridges, planes, cars, etc.).

So it’s unlikely that the speed of progress will stay roughly the same once recursively self-improving superintelligence arrives.

…to try and compress it down to a slogan that fits on a T-shirt – not that I’m saying this is a good idea – “Moore’s Law is exponential now; it would be really odd if it stayed exponential with the improving computers doing the research.”

Furthermore, the vast difference in general intelligence between chimpanzees and humans was produced by a process as stupid and slow as evolution. The human brain is only about three times as large as the chimpanzee brain, which suggests that small increases in hardware can yield huge increases in intelligence. There weren’t any diminishing returns or speed bumps between chimpanzees and humans, so why should there be any between the human level and the “IQ-10000” level?

But there is yet another reason that makes hard-takeoff scenarios rather likely: resource overhang. The agricultural revolution accelerated the pace of progress not because it fundamentally changed the design of the human brain but merely because there were more humans around, which allowed for specialization and a larger pool of knowledge and ideas.

Think of a uranium pile.  It’s always running the same “algorithm” with respect to neutrons causing fissions that produce further neutrons, but just piling on more uranium can cause it to go from subcritical to supercritical, as any given neutron has more uranium to travel through and a higher chance of causing future fissions.
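The criticality analogy can be put in numbers with a toy calculation (the figures are illustrative, not real reactor physics): if each neutron causes k further fissions on average, the same per-neutron rule produces decay for k < 1 and runaway growth for k > 1.

```python
# Toy sketch of the uranium-pile analogy. The "algorithm" per neutron
# never changes; only the multiplication factor k does (more uranium
# means a given neutron is more likely to cause another fission).

def neutron_population(k, generations, start=1000.0):
    """Expected neutron count after the given number of generations."""
    return start * k ** generations

# Subcritical pile: the population dwindles toward zero.
print(neutron_population(0.9, 50))

# Supercritical pile: same rule, slightly higher k, runaway growth.
print(neutron_population(1.1, 50))
```

The point of the analogy is that the qualitative behavior flips at k = 1 even though nothing about the underlying rule changed, only the amount of resource it runs on.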

Hardware overhang part 1 is the internet:

…just the resource bonanza represented by “eating the Internet” or “discovering an application for which there is effectively unlimited demand, which lets you rent huge amounts of computing power while using only half of it to pay the bills” – even though this event isn’t particularly recursive of itself, just an object-level fruit-taking – could potentially drive the AI from subcritical to supercritical.

Hardware overhang part 2: Modern CPUs run at over 2 GHz serial speed, whereas neurons fire at only about 100 Hz (the brain compensates through massive parallelism). Thus, if the right AI theory can produce human-level intelligence at 100 Hz, running the same design on modern hardware could speed it up by a factor of ten million or more.
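The implied speedup is just the ratio of the two clock rates, using the figures from the text above (these are rough order-of-magnitude numbers, not a claim about any actual AI design):

```python
# Rough serial-speed comparison from the post's own numbers.
cpu_hz = 2e9      # ~2 GHz modern CPU, serial clock rate
neuron_hz = 100   # ~100 Hz typical neuron firing rate
speedup = cpu_hz / neuron_hz
print(speedup)    # 20000000.0 – a factor of tens of millions
```

So “a factor of ten million” is, if anything, on the conservative side of this back-of-the-envelope ratio.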

On to the topic of insight, another potential source of discontinuity:

The course of hominid evolution was driven by evolution’s neighborhood search; if the evolution of the brain accelerated to some degree, this was probably due to existing adaptations creating a greater number of possibilities for further adaptations.

But an AI that is genuinely intelligent could explore the search space much more efficiently and could therefore avoid unusually steep and difficult regions of the optimization slope and travel through easier ones, so to speak.

In other words, when the AI becomes smart enough to do AI theory, that’s when I expect it to fully swallow its own optimization chain and for the real FOOM to occur – though the AI might reach this point as part of a cascade that started at a more primitive level.

33. Test Near, Apply Far (Hanson)

Nice post by Hanson. Theorizing is easy, but you have to actually experiment and test your theories in order to see if they are any good.

It is easy, way too easy, to generate new mechanisms, accounts, theories, and abstractions.  To see if such things are useful, we need to vet them, and that is easiest “nearby”, where we know a lot.

…We should be wary of inventing new abstractions just to make sense of things far; we should insist they first show their value nearby.

Which sounds great, but I don’t know how you could possibly test your theories about WBE or superintelligence – except, of course, by actually creating them, and by then it’s already too late.

34. Permitted Possibilities, & Locality (Yudkowsky)

Yudkowsky lists some major scenarios and his estimated probabilities thereof.

An unfriendly AI undergoing Foom is the most likely scenario. A fooming FAI is the preferred, albeit least likely, one. An AI that reaches average human intelligence but can’t self-improve any further – perhaps due to bad design – is unlikely according to Yudkowsky, although he can’t rule it out.

These two scenarios shouldn’t happen:

– An AI becomes smarter and smarter up to high human intelligence (say IQ 170 or so) but is unable to progress any further. For this to happen, the optimization slope would have to become very steep at exactly that point.

– An AI achieves slightly transhuman intelligence (say IQ 250 or so) and continues to become smarter, but only slowly.

Yudkowsky then describes the “economy of mind”-scenario (the most likely AI-dominated scenario according to Hanson):

  • No one AI that does everything humans do, but rather a large, diverse population of AIs.  These AIs have various domain-specific competencies that are “human+ level” – not just in the sense of Deep Blue beating Kasparov, but in the sense that in these domains, the AIs seem to have good “common sense” and can e.g. recognize, comprehend and handle situations that weren’t in their original programming.  But only in the special domains for which that AI was crafted/trained.  Collectively, these AIs may be strictly more competent than any one human, but no individual AI is more competent than any one human.
  • Knowledge and even skills are widely traded in this economy of AI systems.
  • In concert, these AIs, and their human owners, and the economy that surrounds them, undergo a collective FOOM of self-improvement.  No local agent is capable of doing all this work, only the collective system.
  • The FOOM’s benefits are distributed through a whole global economy of trade partners and suppliers, including existing humans and corporations, though existing humans and corporations may form an increasingly small fraction of the New Economy.
  • This FOOM looks like an exponential curve of compound interest, like the modern world but with a substantially shorter doubling time.
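The “compound interest with a shorter doubling time” picture is ordinary exponential growth; the doubling time follows directly from the growth rate (the rates below are illustrative assumptions, not figures from the debate):

```python
import math

def doubling_time(growth_rate):
    """Years for a quantity growing at `growth_rate` per year to double."""
    return math.log(2) / math.log(1 + growth_rate)

print(doubling_time(0.04))  # ~4% annual growth: about 17.7 years
print(doubling_time(1.0))   # 100% annual growth: doubles every year
```

Hanson’s scenario is this same curve with the growth rate cranked up, whereas Yudkowsky’s hard takeoff is a discontinuity that breaks the curve entirely.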

He thinks this scenario is also rather unlikely, because different AI architectures probably can’t easily trade knowledge or skills, since their source code and mind architectures are so different. Trading with uploads would be even more difficult because they are error-prone and run on spaghetti code that was never designed to be optimized. And exchanging knowledge with biological humans will, for obvious reasons, be nigh impossible.

These, among other things, are the main reasons for localization: one fooming AI with one mind architecture and one utility function, not numerous different AIs that engage in trade and create a flourishing economy.

This entry was posted in AGI, AI-Foom debate, CEV, existential risks, FAI, Fundamentals, Lesswrong Zusammenfassungen, singularity.
