AI Foom Debate: Posts 35–40

35. Underconstrained Abstractions (Yudkowsky)

Yudkowsky replies to Hanson’s post “Test Near, Apply Far”.

When possible, I try to talk in concepts that can be verified with respect to existing history.

…But in my book this is just one trick in a library of methodologies for dealing with the Future, which is, in general, a hard thing to predict.

Let’s say that instead of using my complicated-sounding disjunction (many different reasons why the growth trajectory might contain an upward cliff, which don’t all have to be true), I instead staked my whole story on the critical threshold of human intelligence.  Saying, “Look how sharp the slope is here!” – well, it would sound like a simpler story.

…by talking about just that one abstraction and no others, I could make it sound like I was dealing in verified historical facts – humanity’s evolutionary history is something that has already happened.

But speaking of an abstraction being “verified” by previous history is a tricky thing.  There is this little problem of underconstraint – of there being more than one possible abstraction that the data “verifies”.

There are no established theories about what happens when the mind-architecture of the dominant species of the planet goes up by whole orders of magnitude. Heck, it’s very likely that the industrial revolution was caused by an increase in average IQ of about 15 points, and that’s almost nothing!

…And then, when you apply the abstraction going forward, there’s the question of whether there’s more than one way to apply it – which is one reason why a lot of futurists tend to dwell in great gory detail on the past events that seem to support their abstractions, but just assume a single application forward.

36. Beware Hockey Stick Plans (Hanson)

Dotcom business plans used to have infamous “hockey stick” market projections, a slow start that soon “fooms” into the stratosphere.

First of all, comparing superintelligent AIs with internet businesses is kinda bizarre.

Sure, according to the Outside View, "explosions" of any kind are very rare. But they do happen: some firms, like Google or Microsoft, essentially "exploded".

Another important point:

The vast majority of AIs won’t hockey-stick. In fact, creating a good AI design appears to be even harder than creating Microsoft’s business plan.

37. Evolved Desires (Hanson)

There are two basic futuristic scenarios: a singleton, or competition between different beings with different values. Hanson argues that most or all evolved creatures will have roughly logarithmic, quickly satiable preferences, and thus won't try to take over the world: under log-preferences the utility of world-domination isn't that much greater than the utility of what they already have, so it can't outweigh the associated risks.
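
A minimal sketch of the shape of that argument (my own illustrative numbers, not Hanson's): with logarithmic utility, even a million-fold prize adds only about 14 utils, so a modest chance of losing nearly everything can dominate the expected-utility calculation.

```python
# Illustrative sketch (made-up numbers): expected log-utility of a risky
# "take over the world" gamble vs. staying put.
import math

resources = 1.0          # current resources (normalized)
world = 1_000_000.0      # resources gained by taking over the world
p_win = 0.10             # assumed probability the takeover succeeds
loss_frac = 0.99         # assumed fraction of resources lost if it fails

u = math.log             # logarithmic (quickly satiating) utility

u_stay = u(resources)
u_gamble = p_win * u(world) + (1 - p_win) * u(resources * (1 - loss_frac))

print(f"stay:   {u_stay:.2f}")    # 0.00
print(f"gamble: {u_gamble:.2f}")  # ≈ -2.76, so the safe option wins
```

Under a linear utility function the same gamble would look overwhelmingly attractive; the logarithm is doing all the work here.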

38. Sustained Strong Recursion (Yudkowsky)

Yudkowsky explains what he means by recursion. He also emphasizes that this recursion has to be strong and sustained.

If computing power doubled every 100 years, that wouldn't be strong enough. If computing power doubled every year but only for 3 years, that wouldn't be sustained enough. Here's a short example of strong, sustained recursion:

Let’s imagine a world where computing power doubles every year, and ems work at the labs that are responsible for this progress. After 12 months the em-researchers can research twice as fast and so the next doubling takes only 6 months (let’s suppose). After 18 months they are 4 times as fast, the next doubling takes only 3 months, and so on. Now imagine what would happen if this goes on for 30 years… And here we’re dealing with minds that stay the same, and become merely faster, not fundamentally smarter due to changes in mind-design.
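
To make the arithmetic of that toy example explicit, here is a minimal sketch (my own code; only the 12- and 6-month starting figures come from the example above):

```python
# Toy model of the example above: each doubling of computing power also
# doubles the ems' research speed, which halves the next doubling time.
doubling_time = 12.0   # months for the first doubling
elapsed = 0.0
power = 1.0

for step in range(1, 21):
    elapsed += doubling_time
    power *= 2
    doubling_time /= 2   # the researchers now run twice as fast
    print(f"doubling {step:2d}: t = {elapsed:7.4f} months, power = {power:.0f}x")
```

Note that in this frictionless model the doubling times 12 + 6 + 3 + … form a geometric series summing to 24 months, so the growth formally diverges in finite time; real-world frictions would slow this down, but the ever-shortening doubling time is what "strong, sustained recursion" refers to here.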

39. Friendly Projects vs. Products (Hanson)

According to Hanson there are two distinct issues concerning “Friendliness”:

First, Friendly Projects are concerned with the following question:

Will the race make winners and losers, and how will winners treat losers?

Well, if the winners, e.g. a superintelligent AI, are far more powerful than all other players, then there is no need for trade, because such beings couldn't profit from cooperation. (Forget the Law of Comparative Advantage. When was the last time you traded with a rat?)
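
A toy illustration of why the gains from trade can become negligible (my own made-up numbers, not from Hanson or Yudkowsky): whatever the weaker party can offer is bounded by its own tiny output, which vanishes next to the stronger party's production.

```python
# Toy sketch: how much could a vastly more productive agent gain from trade?
ai_output_per_hour = 1_000_000.0   # assumed: units the AI produces per hour
human_output_per_hour = 1.0        # assumed: units a human produces per hour
hours = 40.0

# Even if the human specializes completely and hands over everything,
# the AI's possible gain from the exchange is bounded by the human's output.
max_gain_for_ai = human_output_per_hour * hours   # 40 units
ai_own_production = ai_output_per_hour * hours    # 40,000,000 units

print(max_gain_for_ai / ai_own_production)  # 1e-06: a rounding error
```

Comparative advantage still says some gain exists in principle; the point is that it can easily be smaller than the cost of bothering to interact at all.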

Unless, that is, someone designed Friendly Products, which concern the second question:

Will the creatures cooperate with or rebel against their creators?

If the superintelligent AI is friendly and shares our values, everything is fine.

Needless to say, Yudkowsky endorses the Friendly Product strategy.

40. Is That Your True Rejection? (Yudkowsky)

Weird ideas like transhumanism, the singularity, or FAI are often immediately disregarded. The verbally offered reasons are often completely fictional. The true rejections are often…

… matters of pattern recognition, rather than verbal thought: the thesis matches against “strange weird idea” or “science fiction” or “end-of-the-world cult” or “overenthusiastic youth”. So immediately, at the speed of perception, the idea is rejected.

(Needless to say, this also happens a lot in politics.)

In Yudkowsky's case, his lack of credentials is often mentioned as a reason to disregard his claims. But Yudkowsky believes that even if he had a PhD, it wouldn't matter all that much:

But I also don’t have a PhD when I talk about human rationality, so why is the same objection not raised there?

And more to the point, if I had a PhD, people would not treat this as a decisive factor indicating that they ought to believe everything I say.  Rather, the same initial rejection would occur, for the same reasons; and the search for justification, afterward, would terminate at a different stopping point.

They would say, “Why should I believe you?  You’re just some guy with a PhD! There are lots of those.  Come back when you’re well-known in your field and tenured at a major university.”

Associating with academic folk also didn’t seem all that helpful:

It has similarly been a general rule with the Singularity Institute that, whatever it is we’re supposed to do to be more credible, when we actually do it, nothing much changes.  “Do you do any sort of code development?  I’m not interested in supporting an organization that doesn’t develop code”—> OpenCog—> nothing changes.  “Eliezer Yudkowsky lacks academic credentials”—> Professor Ben Goertzel installed as Director of Research—> nothing changes.  The one thing that actually has seemed to raise credibility, is famous people associating with the organization, like Peter Thiel funding us, or Ray Kurzweil on the Board.

So, what does this have to do with the AI Foom Debate? Well, Hanson is a pretty fanatical libertarian, and he often refers to FAI as the "God to rule us all" and to Yudkowsky's plans as "total tech war" or "taking over the world" and such. He also said he's very surprised and scared that so many OB readers endorse the notion of FAI, which suggests to me that he doesn't understand this whole issue. If you're scared of FAI, something has gone wrong.

But here is a very good reply by Hanson:

“There need not be just one “true objection”; there can be many factors that together lead to an estimate. Whether you have a Ph.D., and whether folks with Ph.D. have reviewed your claims, and what they say, can certainly be relevant. Also remember that you should care lots more about the opinions of experts that could build on and endorse your work, than about average Joe opinions. Very few things ever convince average folks of anything unusual; target a narrower audience.”

I agree. Yudkowsky’s marketing strategies are sometimes rather debatable. But I think he knows that; in one comment he said that he simply lacks the emotional strength to be humble and kind. And Yudkowsky would commit suicide in an academic environment, that’s for sure. I mean, he had to quit school at age 12 because he couldn’t bear it anymore.
