433. Interpersonal Morality – 435. Detached Lever Fallacy

433. Interpersonal Morality

Transpersonal moral intuitions are not necessarily false-to-fact, so long as you don’t expect your arguments cast in “universal” terms to sway a rock.  There really is such a thing as the psychological unity of humankind.  Read a morality tale from an entirely different culture; I bet you can figure out what it’s trying to argue for, even if you don’t agree with it.

The problem arises when you try to apply the universalizability instinct to say, “If this argument could not persuade an UnFriendly AI that tries to maximize the number of paperclips in the universe, then it must not be a good argument.”

There are No Universally Compelling Arguments, so if you try to apply the universalizability instinct universally, you end up with no morality.  Not even universalizability; the paperclip maximizer has no intuition of universalizability.  It just chooses that action which leads to a future containing the maximum number of paperclips.

There are some things you just can’t have a moral conversation with.  There is not that within them that could respond to your arguments.  You should think twice and maybe three times before ever saying this about one of your fellow humans—but a paperclip maximizer is another matter.  You’ll just have to override your moral instinct to regard anything labeled a “mind” as a little floating ghost-in-the-machine, with a hidden core of perfect emptiness, which could surely be persuaded to reject its mistaken source code if you just came up with the right argument.  If you’re going to preserve universalizability as an intuition, you can try extending it to all humans; but you can’t extend it to rocks or chatbots, nor even powerful optimization processes like evolutions or paperclip maximizers.

Right, not every possible mind has to be persuaded by my moral arguments. There are some wicked minds out there that just don’t give a fuck about anything but, say, paperclips. But I get suspicious if almost all possible minds disagree with my morality and only the species of which I’m a member agrees with me, if that.

But I’m really not that hard to please. It would be enough if sufficiently intelligent and sane minds held moral beliefs similar to mine. Not everyone has to agree with me, only the good guys.

The question of how much in-principle agreement would exist among human beings about the transpersonal portion of their values, given perfect knowledge of the facts and perhaps a much wider search of the argument space, is not a matter on which we can get much evidence by observing the prevalence of moral agreement and disagreement in today’s world.  Any disagreement might be something that the truth could destroy – dependent on a different view of how the world is, or maybe just dependent on having not yet heard the right argument.  It is also possible that knowing more could dispel illusions of moral agreement, not just produce new accords.

But does that question really make much difference in day-to-day moral reasoning, if you’re not trying to build a Friendly AI?

Well, YES, it does. If I knew for sure that CEV is possible, I would have much more confidence in the desirability and possibility of FAI and thus would support SIAI (or similar institutions) much more eagerly. If I knew for sure that CEV is impossible, I would lead a more “hedonic” life. (Because the impossibility of CEV implies the impossibility of any reasonable morality whatsoever. If there existed some kind of objective morality, CEV would find it. Correct me if I’m wrong.)

434. Humans in Funny Suits

Anthropomorphism is rampant in science fiction. Most folks don’t realize that alien species could be very, very different from our own, so they write sci-fi series like Star Trek in which most aliens look like “humans in funny suits”.

435. Detached Lever Fallacy

Anthropomorphism was also responsible for philosophies like behaviorism and the whole “blank slate” paradigm. We don’t see our own cognitive mechanisms, which operate invisibly in the background, and so we think that our very specific neural structures either don’t determine our behavior or are universally found in all minds. (On a related note: most highly intelligent folks also fail to realize just how intelligent they really are. Their intelligence is just there, invisible in the background, so they think that other people could be as knowledgeable as they are if they only studied more. It’s also a nice self-serving belief to have. And you get liberal, anti-racist, and egalitarian creds for denying biological IQ differences and don’t have to worry about the PC police…)

Okay, and if you really believe this whole blank-slate myth, you should just…

…try raising a fish as a Mormon or sending a lizard to college, and you’ll soon acquire an appreciation of how much inbuilt genetic complexity is required to “absorb culture from the environment”.

It is a general principle,

…that the world is deeper by far than it appears.  As with the many levels of physics, so too with cognitive science.  Every word you see in print, and everything you teach your children, are only surface levers controlling the vast hidden machinery of the mind.  These levers are the whole world of ordinary discourse: they are all that varies, so they seem to be all that exists: perception is the perception of differences.

This fallacy of anthropomorphism is of course pretty dangerous if you try to build a Friendly AI by raising it within a loving family:

It’s easier to program in unconditional niceness, than a response of niceness conditional on the AI being raised by kindly but strict parents.  If you don’t know how to do that, you certainly don’t know how to create an AI that will conditionally respond to an environment of loving parents by growing up into a kindly superintelligence.  If you have something that just maximizes the number of paperclips in its future light cone, and you raise it with loving parents, it’s still going to come out as a paperclip maximizer.  There is not that within it that would call forth the conditional response of a human child.  Kindness is not sneezed into an AI by miraculous contagion from its programmers.  Even if you wanted a conditional response, that conditionality is a fact you would have to deliberately choose about the design.
