Fun Theory: Posts 14 – 19

14. Amputation of Destiny

Yudkowsky describes a book by Iain Banks about a society, called the Culture, that consists of happy, intelligent, long-lived humans, low-grade transhumans, so to speak. But everything is controlled by Minds, superintelligent AIs.

Yudkowsky calls this an amputation of destiny. We humans want to be the main players; we need to be needed. If superintelligent AIs ran the show, our lives would feel meaningless and insignificant. This is yet another reason why we shouldn’t create sentient FAIs.

So Yudkowsky acknowledges that we have a desire to feel needed, and concludes that we should create artificial constraints in order to fulfill that desire: we would prevent, or at least hinder, the accumulation of knowledge, scientific progress, etc., so that we still feel needed. But to me that looks like a slippery slope. If we go down this route, we might as well go straight to wireheading.

I really don’t understand this. Many commenters also point out that Yudkowsky’s moral intuitions probably aren’t shared by the majority of humankind.

Here, for example, is a comment by Unknown:

“My guess is that Eliezer will be horrified at the results of CEV – despite the fact that most people will be happy with it.

This is obvious given the degree to which Eliezer’s personal morality diverges from the morality of the human race.”

To which Yudkowsky responds:

“Unknown, the question is how much of this divergence is due to (a) having moved further toward reflective equilibrium, (b) unusual mistakes in answering a common question, (c) being an evil mutant, (d) falling into an uncommon but self-consistent attractor.”

But Carl Shulman thinks the above distinctions are rather arbitrary (and I agree):

“I’m just confused by your distinction between mutation and other reasons to fall into different self-consistent attractors. I could wind up in one reflective equilibrium [rather] than another because I happened to consider one rational argument before another, because of early exposure to values, genetic mutations, infectious diseases, nutrition, etc, etc. It seems peculiar to single out the distinction between genetic mutation and everything else. I thought ‘mutation’ might be a shorthand for things that change your starting values or reflective processes before extensive moral philosophy and reflection, and so would include early formation of terminal values by experience/imitation, but apparently not.”

It’s quite depressing. The more I read about Yudkowsky’s metaethics and Fun Theory, the more existential ennui I feel.

15. Dunbar’s Function

There is something called Dunbar’s Number, a proposed cognitive limit on the number of stable social relationships a person can maintain, roughly the natural community size. In the case of humans it’s about 150.

This number is probably an upper bound, since most hunter-gatherer clans consisted of 30–60 people; 150 is more like the size of a cultural lineage of related clans.
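As an aside, the 150 figure isn’t pulled out of thin air: Dunbar derived it by regressing typical group size N against neocortex ratio CR across primates and extrapolating to humans. The coefficients below are the commonly cited approximate values (quoted from memory, so treat the arithmetic as illustrative rather than exact):

\[
\log_{10} N \;\approx\; 0.093 + 3.389\,\log_{10}(\mathrm{CR}),
\qquad
\mathrm{CR}_{\text{human}} \approx 4.1
\;\Rightarrow\;
N \approx 10^{2.17} \approx 148,
\]

which is roughly where the 150 comes from; the 30–60-person clans mentioned above sit well below that ceiling.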

But we now live in a world of 7 billion people, which creates all kinds of problems.

If you work in a large company, you probably don’t know your tribal chief on any personal level, and may not even be able to get access to him.  For every rule within your company, you may not know the person who decided on that rule, and have no realistic way to talk to them about the effects of that rule on you.  Large amounts of the organizational structure of your life are beyond your ability to control, or even talk about with the controllers; directives that have major effects on you, may be handed down from a level you can’t reach.

Essentially, you feel like a character out of Kafka’s “The Castle”.

But there is an even bigger problem: the illusion of increased competition (I would say this isn’t an illusion; there just is increased competition):

There’s that famous survey which showed that Harvard students would rather make $50,000 if their peers were making $25,000 than make $100,000 if their peers were receiving $200,000—and worse, they weren’t necessarily wrong about what would make them happy.  With a fixed income, you’re unhappier at the low end of a high-class neighborhood than the high end of a middle-class neighborhood.

But in a “neighborhood” the size of Earth—well, you’re actually quite unlikely to run into either Bill Gates or Angelina Jolie on any given day.  But the media relentlessly bombards you with stories about the interesting people who are much richer than you or much more attractive, as if they actually constituted a large fraction of the world.  (This is a combination of biased availability, and a difficulty in discounting tiny fractions.)

There is always someone who is smarter, faster, prettier than you are.

Now you could say that our hedonic relativism is one of the least pleasant aspects of human nature.  And I might agree with you about that.  But I tend to think that deep changes of brain design and emotional architecture should be taken slowly, and so it makes sense to look at the environment too.

I have another solution: Wireheading. Easy as pie.

If you lived in a world the size of a hunter-gatherer band, then it would be easier to find something important at which to be the best—or do something that genuinely struck you as important, without becoming lost in a vast crowd of others with similar ideas.

Yeah, I often have the feeling that it really doesn’t matter what I do. Even if I studied hard and became a scientist or something, there are already millions of scientists out there.

He ends on a more optimistic note:

But if people keep getting smarter and learning more—expanding the number of relationships they can track, maintaining them more efficiently—and naturally specializing further as more knowledge is discovered and we become able to conceptualize more complex areas of study—and if the population growth rate stays under the rate of increase of Dunbar’s Function—then eventually there could be a single community of sentients, and it really would be a single community.

16. Free to Optimize

If a FAI constantly helped us, we would have less fun than we otherwise could. We want to make our own choices and don’t want to live under a completely paternalistic regime. Yudkowsky envisions a FAI that implements “fair” rules and otherwise just waits in the background. By fair rules he presumably means no unwanted death, no sickness, etc. But we could, e.g., still make mistakes in our relationships.

17. The Uses of Fun (Theory)

There are three reasons I’m talking about Fun Theory, some more important than others:

  1. If every picture ever drawn of the Future looks like a terrible place to actually live, it might tend to drain off the motivation to create the future.  It takes hope to sign up for cryonics.
  2. People who leave their religions, but don’t familiarize themselves with the deep, foundational, fully general arguments against theism, are at risk of backsliding.  Fun Theory lets you look at our present world, and see that it is not optimized even for considerations like personal responsibility or self-reliance.  It is the fully general reply to theodicy.
  3. Going into the details of Fun Theory helps you see that eudaimonia is actually complicated—that there are a lot of properties necessary for a mind to lead a worthwhile existence.  Which helps you appreciate just how worthless a galaxy would end up looking (with extremely high probability) if it was optimized by something with a utility function rolled up at random.

18. Growing Up is Hard

Start modifying the pieces in ways that seem like “good ideas”—making the frontal cortex larger, for example—and you start operating outside the ancestral box of parameter ranges.  And then everything goes to hell.  Why shouldn’t it?  Why would the brain be designed for easy upgradability?

Modifying the brain only slightly can have huge side effects. Think about it: there are almost no drugs that are uniformly a good thing and have no side effects. Furthermore, the positive effects of those drugs are pretty small in absolute terms. Taking modafinil, considered one of the strongest cognitive enhancers, raises your IQ by only a few points, if at all; mostly it just makes you more alert and awake. It’s not much more effective than a good night’s sleep.

If taking a few chemicals can have such negative side effects, just imagine what could happen if somebody enlarged your whole prefrontal cortex (whatever that would mean exactly).

19. Changing Emotions

We underestimate the difficulty of changing our emotions. Yudkowsky uses the example of a male-to-female transformation to illustrate the hard problems associated with alterations of our emotional make-up.

This topic is of course fairly confusing, since I don’t even understand what it means to be “me”, let alone what would happen to this “me” if it became female. That’s probably just crazy talk.

And we’re talking about a relatively minor transformation from one sex to another of the same species. Adding 10,000 IQ points while preserving your identity is likely to be much harder. Fortunately, I don’t consider preserving my identity that important.

He concludes:

It seems to me that there is just an irreducible residue of very hard problems associated with an adult version of humankind ever coming into being.

And emotions would be among the most dangerous targets of meddling.  Make the wrong shift, and you won’t want to change back.

We can’t keep these exact human emotions forever.  Anyone want to still want to eat chocolate-chip cookies when the last sun grows cold?  I didn’t think so.

But if we replace our emotions with random die-rolls, then we’ll end up wanting to do what is prime, instead of what’s right.

Some emotional changes can be desirable, but random replacement seems likely to be undesirable on average.  So there must be criteria that distinguish good emotional changes from bad emotional changes.  What are they?

 
