449. The Bedrock of Morality: Arbitrary?

This post is illuminating, to put it charitably. Yudkowsky writes:

“From the perspective of a Pebblesorter, saying that one p-should scatter a heap of 38 pebbles into two heaps of 19 pebbles is not p-arbitrary at all—it’s the most p-important thing in the world, and fully p-justified by the intuitively obvious fact that a heap of 19 pebbles is p-correct and a heap of 38 pebbles is not.

So which perspective should we adopt?  I answer that I see no reason at all why I should start sorting pebble-heaps.  It strikes me as a completely pointless activity.  Better to engage in art, or music, or science, or heck, better to connive political plots of terrifying dark elegance, than to sort pebbles into prime-numbered heaps.  A galaxy transformed into pebbles and sorted into prime-numbered heaps would be just plain boring.

The Pebblesorters, of course, would only reason that music is p-pointless because it doesn’t help you sort pebbles into heaps; the human activity of humor is not only p-pointless but just plain p-bizarre and p-incomprehensible; and most of all, the human vision of a galaxy in which agents are running around experiencing positive reinforcement but not sorting any pebbles, is a vision of an utterly p-arbitrary galaxy devoid of p-purpose.  The Pebblesorters would gladly sacrifice their lives to create a P-Friendly AI that sorted the galaxy on their behalf; it would be the most p-profound statement they could make about the p-meaning of their lives.

So which of these two perspectives do I choose?  The human one, of course; not because it is the human one, but because it is right.  I do not know perfectly what is right, but neither can I plead entire ignorance.”
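(For what it’s worth, the Pebblesorters’ standard is at least crisply specifiable. Here is a minimal sketch of the p-correctness rule, assuming, as the post’s examples imply, that it is bare primality of heap sizes; the function name is mine, not Yudkowsky’s:)

    def p_correct(heap_size: int) -> bool:
        """A heap is p-correct iff its size is a prime number (assumed rule)."""
        if heap_size < 2:
            return False
        # trial division up to the square root suffices for a primality check
        return all(heap_size % d != 0 for d in range(2, int(heap_size ** 0.5) + 1))

    assert p_correct(19)       # a heap of 19 pebbles is p-correct
    assert not p_correct(38)   # 38 = 2 * 19, so one p-should scatter it into two heaps of 19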

I’m sorry, but this doesn’t make any fucking sense whatsoever.

“I fully expect that even if there is other life in the universe only a few trillions of lightyears away (I don’t think it’s local, or we would have seen it by now), that we humans are the only creatures for a long long way indeed who are built to do what is right.  That may be a moral miracle, but it is not a causal miracle.

There may be some other evolved races, a sizable fraction perhaps, maybe even a majority, who do some right things.  Our executing adaptation of compassion is not so far removed from the game theory that gave it birth; it might be a common adaptation.  But laughter, I suspect, may be rarer by far than mercy.  What would a galactic civilization be like, if it had sympathy, but never a moment of humor?  A little more boring, perhaps, by our standards.

This humanity that we find ourselves in, is a great gift.”

Shut the fuck up. Last time I looked, humans were fucking contemptible (and p-contemptible for that matter).

(Btw, have you ever thought about transforming yourself into a psychopath because your inbuilt empathy instinct hinders your hatred for humanity? Obviously, I personally haven’t, but I wonder how one could go about doing that. You would need some kind of anti-MDMA; maybe sniffing lots of coke? What about meth? Or steroids? Just kidding, of course!)

“So I really must deny the charges of moral relativism:  I don’t think that human morality is arbitrary at all, and I would expect any logically omniscient reasoner to agree with me on that.  We are better than the Pebblesorters, because we care about sentient lives, and the Pebblesorters don’t.  Just as the Pebblesorters are p-better than us, because they care about pebble heaps, and we don’t.  Human morality is p-arbitrary, but who cares?  P-arbitrariness is arbitrary.”

Ok, so I tried to come up with a clever response to this ridiculous bullshit, but it’s hopeless. This is just the most fucking pathetic crap I’ve ever read. (By Yudkowsky, obviously. Most human utterances are completely devoid of useful information anyway.)

Good comment by Roko:

“I think that your use of the word arbitrary differs from mine. My mind labels statements such as “we should preserve human laughter for ever and ever” with the “roko-arbitrary” label. Not that I don’t enjoy laughter, but there are plenty of things that I presently enjoy that, if I had the choice, I would modify myself to enjoy less. Activities such as enjoying making fun of other people, eating sweet foods, etc. It strikes me that the dividing line between “things I like but wish I didn’t like” and “things I like and want to keep liking” should be made in some non-roko-arbitrary way. One might incorporate my position with Eliezer’s by saying that my concept of “rightness” relies heavily on my concept of arbitrariness, and that my concept of arbitrariness is clearly different to Eliezer’s.”

Another good one by Roko:

“It also worries me quite a lot that Eliezer’s post is entirely symmetric under the action of replacing his chosen notions with the pebble-sorter’s notions. This property qualifies as “moral relativism” in my book, though there is no point in arguing about the meanings of words.

My posts on universal instrumental values are not symmetric under replacing UIVs with some other set of goals that an agent might have. UIVs are the unique set of values X such that in order to achieve any other value Y, you first have to do X.”

(Remember, this is the guy who was driven insane by thinking about problems with FAI. Maybe the future is less promising than you imagine…)

I have to applaud Komponisto for this comment:

“I’m really having trouble understanding how this isn’t tantamount to moral relativism — or indeed moral nihilism. The whole point of “morality” is that it’s supposed to provide a way of arbitrating between beings, or groups, with different interests — such as ourselves and Pebblesorters. Once you give up on that idea, you’re reduced, as in this post, to the tribalist position of arguing that we humans should pursue our own interests, and the Pebblesorters be damned. When a conflict arises (as it inevitably will), the winner will then be whoever has the bigger guns, or builds AI first.

Mind you, I don’t disagree that this is the situation in which we in fact find ourselves. But we should be honest about the implications. The concept of “morality” is entirely population-specific: when groups of individuals with common interests come into contact, “morality” is the label they give to their common interests. So for us humans, “morality” is art, music, science, compassion, etc.; in short, all the things that we humans (as opposed to Pebblesorters) like. This is what I understand Eliezer to be arguing. But if this is your position, you may as well come out and admit that you’re a moral relativist, because this is the position that the people who are scared of moral relativism are in fact scared of. What they dread is a world in which Dennis could go on saying that Dennis-morality is what really matters, the rest of us disagree, war breaks out, Dennis kills us all, eats the whole pie, and is not spanked by any cosmic force. But this is indeed the world we live in.”

Yvain:

“Why “ought” vs. “p-ought” instead of “h-ought” vs. “p-ought”?

Sure, it might just be terminology. But change

“So which of these two perspectives do I choose? The human one, of course; not because it is the human one, but because it is right.”

to

“So which of these two perspectives do I choose? The human one, of course; not because it is the human one, but because it is h-right.”

and the difference between “because it is the human one” and “because it is h-right” sounds a lot less convincing.”

And another one by Yvain:

“But by Eliezer’s standards, it’s impossible for anyone to be a relativist about anything.

Consider what Einstein means when he says time and space are relative. He doesn’t mean you can just say whatever you want about them, he means that they’re relative to a certain reference frame. An observer on Earth may think it’s five years since a spaceship launched, and an observer on the spaceship may think it’s only been one, and each of them is correct relative to their reference frame.

We could define “time” to mean “time as it passes on Earth, where the majority of humans live.” Then an observer on Earth is objectively correct to believe that five years have passed since the launch. An observer on the spaceship who said “One year has passed” would be wrong; he’d really mean “One s-year has passed.” Then we could say time and space weren’t really relative at all, and people on the ground and on the spaceship were just comparing time to s-time. The real answer to “How much time has passed” would be “Five years.”

Does that mean time isn’t really relative? Or does it just mean there’s a way to describe it that doesn’t use the word “relative”?

Or to give a more clearly wrong-headed example: English is objectively the easiest language in the world, if we accept that because the word “easy” is an English word it should refer to ease as English-speakers see it. When Kyousuke says Japanese is easier for him, he really means it’s mo wakariyasui translated as “j-easy”, which is completely different. By this way of talking, the standard belief that different languages are easier, relative to which one you grew up speaking, is false. English is just plain the easiest language.

Again, it’s just avoiding the word “relative” by talking in a confusing and unnatural way. And I don’t see the difference between talking about “easy” vs. “j-easy” and talking about “right” vs. “p-right”.”
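(Incidentally, the numbers in Yvain’s spaceship example are physically consistent. Using the standard special-relativistic time-dilation formula, with the usual symbols rather than anything from the quote, one ship-year against five Earth-years just means the ship travels at about 98% of light speed:)

    \tau = t \sqrt{1 - v^2/c^2}
    \;\Longrightarrow\;
    1\,\mathrm{yr} = 5\,\mathrm{yr} \cdot \sqrt{1 - v^2/c^2}
    \;\Longrightarrow\;
    v = \sqrt{1 - \tfrac{1}{25}}\, c \approx 0.98\, c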

And Rain (the guy who donates something like $25,000 per year to SIAI, suffers from depression, has a meaningless, “dilbertarian” government job, plays WoW the whole time, and is otherwise uber-cool) disagrees with Yudkowsky, too. Here is his answer to another comment by a newcomer; the newcomer’s words are the lines marked with “>”:

“> My first problem (which may well be a missed inferential step) is with the assumed universality, within humanity, of a system of goals.

From what I’ve seen, others have the same objection; I do as well, and I have not seen an adequate response.

> how is it that humans have discovered “right” while the Pebble-people have discovered only “p-right”? Even if I grant the assertion that all humans are using the same fundamental morality, and Alice and Bob would necessarily agree if they had access to the same information, how is it that humans have discovered “right” and not “h-right”?

From what I understand, everyone except Eliezer is more likely to hold the view that he found “h-right”, but he seems unwilling to call it that even when pressed on the matter. It’s another point on which I agree with your confusion.

> as I understand it, Eliezer’s morality simply says “do whatever the computation tells you to do” without offering any help on what that computation actually looks like

We don’t have quite the skill to articulate it just yet, but possibly AI and neuroscience will help. If not, we might be in trouble.

> As I said, I really feel like I’m missing some small, key detail or inferential step. Please, take pity on this neophyte and help me find The Way.

I assign a high probability that Eliezer is wrong, or at the least, providing a very incomplete model for metaethics. This sequence is the one I disagree with most. Personally, I think you have a good grasp of what he’s said, and its weaknesses.”