AI Foom Debate: Posts 7–10

7. The First World Takeover (Yudkowsky)

A really beautiful post about the origin of life from an “optimization-process perspective”.

Before Robin and I move on to talking about the Future, it seems to me wise to check if we have disagreements in our view of the Past.

During the first nine billion years after the Big Bang, nothing much happened: some stars formed, later some planets, and some stars died.

It was the Age of Boredom.

The hallmark of the Age of Boredom was not lack of natural resources – it wasn’t that the universe was low on hydrogen – but, rather, the lack of any cumulative search.

…But vastly more important, in the scheme of things, was this – that the first replicator made copies of itself, and some of those copies were errors.

That is, it explored the neighboring regions of the search space – some of which contained better replicators – and then those replicators ended up with more probability flowing into them, which explored their neighborhoods.

…Stars begot planets, planets begot tidal pools.  But that’s not the same as a replicator begetting a replicator – it doesn’t search a neighborhood, find something that better matches a criterion (in this case, the criterion of effective replication) and then search that neighborhood, over and over.

The Age of Boredom ended with the first replicator.

The first replicator changed the course of the whole universe. From our standpoint it isn’t very complex, but it was the first, if very simple, optimization process in a universe that until then had optimized nothing at all.

…The first replicator was the first great break in History – the first Black Swan that would have been unimaginable by any surface analogy.  No extrapolation of previous trends could have spotted it – you’d have had to dive down into causal modeling, in enough detail to visualize the unprecedented search.

…That first replicator took over the world – in what sense?  Earth’s crust, Earth’s magma, far outweighs its mass of Life.  But Robin and I both suspect, I think, that the fate of the universe, and all those distant stars that outweigh us, will end up shaped by Life.

…How?  How did the first replicating pattern take over the world? Why didn’t all those other molecules get an equal vote in the process?

Well, that initial replicating pattern was doing some kind of search – some kind of optimization – and nothing else in the Universe was even trying. Really it was evolution that took over the world, not the first replicating pattern per se – you don’t see many copies of it around any more.

…And that was the story of the First World Takeover, when a shift in the structure of optimization – namely, moving from no optimization whatsoever, to natural selection – produced a stark discontinuity with previous trends; and squeezed the flow of the whole universe’s destiny through the needle’s eye of a single place and time and pattern.
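The “cumulative search” idea is easy to make concrete. Here is a minimal sketch of my own (not from the original post, and with arbitrary parameters): compare a process that keeps drawing independent random patterns with one that, replicator-style, explores the neighborhood of its current pattern and keeps whatever replicates at least as well.

```python
import random

def fitness(pattern):
    """Toy 'replication criterion': the number of 1-bits in a bit string."""
    return sum(pattern)

def random_sampling(n_bits, steps, rng):
    """Non-cumulative process: every sample is drawn from scratch."""
    best = 0
    for _ in range(steps):
        candidate = [rng.randint(0, 1) for _ in range(n_bits)]
        best = max(best, fitness(candidate))
    return best

def cumulative_search(n_bits, steps, rng):
    """Replicator-style search: explore the neighborhood of the current
    pattern and keep any copy that replicates at least as well."""
    current = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(steps):
        mutant = current[:]
        i = rng.randrange(n_bits)
        mutant[i] = 1 - mutant[i]          # a single "copying error"
        if fitness(mutant) >= fitness(current):
            current = mutant               # the better replicator spreads
    return fitness(current)

rng = random.Random(0)
print("independent sampling:", random_sampling(100, 1000, rng))
print("cumulative search:   ", cumulative_search(100, 1000, rng))
```

With the same budget of samples, the cumulative process climbs far higher, because every improvement becomes the starting point for the next round of exploration – which is exactly the structural shift Yudkowsky is pointing at.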

8. Abstraction, Not Analogy (Hanson)

Hanson thinks the term “surface analogy” is misleading; he prefers to speak of “abstractions”. There are useful and less useful abstractions. Take a hammer: which features you abstract depends on your goals. If you want to buy one, you ask for its price; if you want to know whether it’s safe to throw, you try to find out its weight, density, shape, and so on.

When reasoning about the coming singularity, Hanson uses abstractions such as “changes in economic growth rates”, whereas Yudkowsky makes heavy use of his concept of “optimization processes”; according to Hanson, neither is more superficial than the other.

9. Whence Your Abstractions? (Yudkowsky)

Hanson and Yudkowsky just have very different modes of thinking:

Analogizing Doug Engelbart’s mouse to a self-improving AI is for me such a flabbergasting notion – indicating such completely different ways of thinking about the problem – that I am trying to step back and find the differing sources of our differing intuitions.

Yudkowsky then goes on to critique Hanson’s hammer example: in order to abstract the features that matter to you, you have to take the Inside View.

None of your examples talk about drawing new conclusions about the hammer by analogizing it to other things rather than directly assessing its characteristics in their own right, so it’s not all that good an example when it comes to making predictions about self-improving AI by putting it into a group of similar things that includes farming or industry.

10. AI Go Foom (Hanson)

Hanson tries to summarize Yudkowsky’s argument for a hard takeoff:

A machine intelligence can directly rewrite its entire source code, and redesign its entire physical hardware.  While human brains can in principle modify themselves arbitrarily, in practice our limited understanding of ourselves means we mainly only change ourselves by thinking new thoughts.   All else equal this means that machine brains have an advantage in improving themselves.

A mind without arbitrary capacity limits, that focuses on improving itself, can probably do so indefinitely.  The growth rate of its “intelligence” may be slow when it is dumb, but gets faster as it gets smarter.  This growth rate also depends on how many parts of itself it can usefully change.  So all else equal, the growth rate of a machine intelligence must be greater than the growth rate of a human brain.

No matter what its initial disadvantage, a system with a faster growth rate eventually wins.  So if the growth rate advantage is large enough then yes a single computer could well go in a few days from less than human intelligence to so smart it could take over the world.  QED.

To which Yudkowsky responds in the comment section:

“Well, the format of my thesis is something like,

“When you break down the history of optimization into things like optimization resource, optimization efficiency, and search neighborhood, and come up with any reasonable set of curves fit to the observed history of optimization so far including the very few points where object-level innovations have increased optimization efficiency, and then you try to fit the same curves to an AI that is putting a large part of its present idea-production flow into direct feedback to increase optimization efficiency (unlike human minds or any other process witnessed heretofore), then you get a curve which is either flat (below a certain threshold) or FOOM (above that threshold).

If that doesn’t make any sense, it’s cuz I was rushed.”

In a later post Yudkowsky outlines his main argument in a very clear, comprehensive and convincing manner, so I won’t summarize or comment on this rudimentary version.
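Still, purely to illustrate the flat-or-FOOM shape the comment describes, here is a toy recursion of my own (the functional form and parameters are assumptions, not Yudkowsky’s actual curves): optimization efficiency is fed back into its own growth, damped by a “recalcitrance” term that makes later improvements harder to find. Below a threshold the trajectory stays nearly flat over the horizon shown; above it, the returns compound.

```python
def trajectory(e0, feedback, recalcitrance, steps=60):
    """Toy recursion: optimization efficiency e is reinvested in itself.
    Each step yields feedback * e**2 of raw improvement, damped by a
    recalcitrance term that grows with e (later gains are harder to find)."""
    e, history = e0, [e0]
    for _ in range(steps):
        e = e + feedback * e * e / (1.0 + recalcitrance * e)
        history.append(e)
    return history

# Below the threshold the curve stays nearly flat; above it, it explodes.
flat = trajectory(e0=1.0, feedback=0.01, recalcitrance=1.0)
foom = trajectory(e0=1.0, feedback=0.5, recalcitrance=0.1)
print("flat:", [f"{x:.2f}" for x in flat[::10]])
print("foom:", [f"{x:.3g}" for x in foom[::10]])
```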
