AI-Foom Debate: Posts 1 – 6

Since this is one of the most important Sequences for me, I’ll depart from the usual format.

Prologue

1. Fund UberTool? (Robin Hanson)

Hanson offers a nice economic analogy for a recursively self-improving AI, but he doesn’t really argue that something like this is unlikely or impossible.

Imagine you are a venture capitalist reviewing a proposed business plan. UberTool Corp has identified a candidate set of mutually aiding tools, and plans to spend millions pushing those tools through a mutual improvement storm.  While UberTool may sell some minor patents along the way, UberTool will keep its main improvements to itself and focus on developing tools that improve the productivity of its team of tool developers.  In fact, UberTool thinks that its tool set is so fantastically capable of mutual improvement, and that improved versions of its tools would be so fantastically valuable and broadly applicable, UberTool does not plan to stop their closed self-improvement process until they are in a position to suddenly burst out and basically “take over the world.”

Now given such enormous potential gains, even a very tiny probability that UberTool could do what they planned might entice you to invest in them.  But even so, just what exactly would it take to convince you UberTool had even such a tiny chance of achieving such incredible gains?

2. Engelbart as UberTool? (Robin Hanson)

Hanson uses the company of Douglas Engelbart (the inventor of the computer mouse) as a possible real-life example of the “UberTool corp” described in the previous post.

Douglas Engelbart is the person I know who came closest to enacting such a UberTool plan.  His seminal 1962 paper, “Augmenting Human Intellect: A Conceptual Framework” proposed using computers to create such a rapidly improving tool set.  He understood not just that computer tools were especially open to mutual improvement, but also a lot about what those tools would look like.

3. Friendly Teams (Robin Hanson)

It is not so much that Engelbart missed a few key insights about what computer productivity tools would look like.  I doubt it would have made much difference had he traveled in time to see a demo of modern tools.  The point is that most tools require lots more than a few key insights to be effective – they also require thousands of small insights that usually accumulate from a large community of tool builders and users.

Small teams have at times suddenly acquired disproportionate power, and I’m sure their associates who anticipated this possibility used the usual human ways to consider that team’s “friendliness.”  But I can’t recall a time when such sudden small team power came from an UberTool scenario of rapidly mutually improving tools.

Some say we should worry that a small team of AI minds, or even a single mind, will find a way to rapidly improve themselves and take over the world.  But what makes that scenario reasonable if the UberTool scenario is not?

4. Friendliness Factors (Robin Hanson)

Imagine several firms competing to make the next generation of some product, like a lawn mower or cell phone.  What factors influence variance in their product quality (relative to cost)?  That is, how much better will the best firm be relative to the average, second best, or worst?   Larger variance factors should make competitors worry more that this round of competition will be their last.  Here are a few factors:

  1. Resource Variance – the more competitors vary in resources, the more performance varies.
  2. Cumulative Advantage – the more prior wins help one win again, the more resources vary.
  3. Grab It First – If the cost to grab and defend a resource is much less than its value, the first to grab can gain a further advantage.
  4. Competitor Count – with more competitors, the best exceeds the second best less, but exceeds the average more.
  5. Competitor Effort – the longer competitors work before their performance is scored, or the more resources they spend, the more scores vary.
  6. Lumpy Design – the more quality depends on a few crucial choices, relative to many small choices, the more quality varies.
  7. Interdependence – When firms need inputs from each other, winner gains are also supplier gains, reducing variance.
  8. Info Leaks – the more info competitors can gain about others’ efforts, the more the best will be copied, reducing variance.
  9. Shared Standards – competitors sharing more standards and design features, in info, process, or product, can better understand and use info leaks.
  10. Legal Barriers – may prevent competitors from sharing standards, info, inputs.
  11. Anti-Trust –  Social coordination may prevent too much winning by a few.
  12. Sharing Deals – If firms own big shares in each other, or form a coop, or just share values, they may mind less if others win.  This lets them tolerate more variance, but also share more info.
  13. Niche Density – When each competitor can adapt to a different niche, they may all survive.
  14. Quality Sensitivity – demand/success may be very sensitive, or not very sensitive, to quality.
  15. Network Effects – Users may prefer to use the same product regardless of its quality.
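
These factors are qualitative, but the variance intuition behind them is easy to make concrete. Below is a minimal Monte Carlo sketch, my own illustration rather than anything Hanson wrote; the firm count, round count, and all parameter values are assumptions. It shows how two of the factors, cumulative advantage (2) and info leaks (8), push the gap between the best and second-best competitor in opposite directions.

```python
# Toy Monte Carlo sketch of competition variance. My own illustration, not
# Hanson's model: firm count, rounds, and all parameter values are assumptions.
import random


def simulate(n_firms=10, n_rounds=50, cumulative=0.0, leak=0.0, seed=0):
    """Return the final quality gap between the best and second-best firm.

    cumulative: how strongly being ahead of the average boosts a firm's chance
                of improving this round (factor 2, cumulative advantage)
    leak:       fraction of the leader's lead that every other firm copies each
                round (factor 8, info leaks)
    """
    rng = random.Random(seed)
    quality = [1.0] * n_firms
    for _ in range(n_rounds):
        mean_q = sum(quality) / n_firms
        for i in range(n_firms):
            # Cumulative advantage: firms above the mean improve more often.
            p = min(max(0.5 + cumulative * (quality[i] - mean_q), 0.0), 1.0)
            if rng.random() < p:
                quality[i] += rng.random()
        # Info leaks: laggards copy part of the leader's lead, compressing variance.
        leader = max(quality)
        quality = [q + leak * (leader - q) for q in quality]
    best, second = sorted(quality, reverse=True)[:2]
    return best - second


if __name__ == "__main__":
    for cum, lk in [(0.0, 0.0), (0.05, 0.0), (0.0, 0.3), (0.05, 0.3)]:
        gaps = [simulate(cumulative=cum, leak=lk, seed=s) for s in range(200)]
        print(f"cumulative={cum:4.2f}  leak={lk:3.1f}  "
              f"mean best-vs-second gap = {sum(gaps) / len(gaps):5.2f}")
```

The sketch is built to exhibit the direction the list suggests: turning on cumulative advantage widens the best-vs-second gap, while turning on info leaks narrows it again.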

Hm. I think trying to analyze superintelligent AIs or the singularity from an economic perspective is somewhat misleading. The biggest difference between superintelligent AIs and humans (or between humans and chimpanzees) is the hugely different mind design, “intelligence” for short, and only points 6 and 14 address this issue somewhat. Economics in general ignores the factor of intelligence entirely. (Actually, almost everybody ignores IQ/intelligence, not least because it’s a political minefield and rather depressing.)

Or maybe Hanson doesn’t share Yudkowsky’s intuition (with which I pretty much agree) that intelligence is hugely important. I don’t know.

5. The Weak Inside View (Yudkowsky)

According to Yudkowsky, using the Outside View on speculative topics like the singularity isn’t very helpful. You shouldn’t compare the arrival of a superintelligence to “phase transitions” like the invention of farming or the industrial revolution; that kind of futurism tries to look formal and precise, but deep down it is nothing more than surface analogies. When you deal with entirely novel things, you have to use the “Weak Inside View”: you don’t understand the topic well enough to make quantitative predictions, but you can make some qualitative ones.

6. Setting The Stage (Hanson)

Hanson summarizes the points he and Yudkowsky seem to agree on:

  1. Machine intelligence would be a development of almost unprecedented impact and risk, well worth considering now.
  2. Feasible approaches include direct hand-coding, based on a few big and lots of little insights, and emulations of real human brains.
  3. Machine intelligence will more likely than not appear within a century, even if the progress rate to date does not strongly suggest the next few decades.
  4. Many people say silly things here, and we do better to ignore them than to try to believe the opposite.
  5. Math and deep insights (especially probability) can be powerful relative to trend-fitting and crude analogies.
  6. Long term historical trends are suggestive of future events, but not strongly so.
  7. Some should be thinking about how to create “friendly” machine intelligences.

Fair enough. Where do they disagree?

We seem to disagree modestly about the relative chances of the emulation and direct-coding approaches; I think the first and he thinks the second is more likely to succeed first.  Our largest disagreement seems to be on the chances that a single hand-coded version will suddenly and without warning change from nearly powerless to overwhelmingly powerful; I’d put it as less than 1% and he seems to put it as over 10%.                       

In one comment, Yudkowsky even offers a probability of more than 70%, which is rather high, if you ask me!

There are also some crucial methodological differences: Hanson relies heavily on the outside view and on established, time-proven methods of reasoning, whereas Yudkowsky uses his own, more idiosyncratic concepts.
