455. No License to Be Human

Sigh, another post on meta-ethics.

I don’t want to summarize this one; a summary wouldn’t make much sense, so I’ll just quote…

And as for that value framework being valuable because it’s human—why, it’s just the other way around: humans have received a moral gift, which Pebblesorters lack, in that we started out interested in things like happiness instead of just prime pebble heaps.

Is he actually boasting about the ludicrousness of his own view, or is this some kind of joke, or what?

…But from a moral perspective, the wonder is that there are these human brains around that happen to want to help each other—a great wonder indeed, since human brains don’t define rightness, any more than natural selection defines rightness.

…And that’s why I object to the term “h-right”. I am not trying to do what’s human. I am not even trying to do what is reflectively coherent for me. I am trying to do what’s right.

Good comment by Jadagul:

“Eliezer: Good post, as always, I’ll repeat that I think you’re closer to me in moral philosophy than anyone else I’ve talked to, with the probable exception of Richard Rorty, from whom I got many of my current views. (You might want to read Contingency, Irony, Solidarity; it’s short, and it talks about a lot of the stuff you deal with here). That said, I disagree with you in two places. Reading your stuff and the other comments has helped me refine what I think; I’ll try to state it here as clearly as possible.

1) I think that, as most people use the words, you’re a moral relativist. I understand why you think you’re not. But the way most people use the word ‘morality,’ it would only apply to an argument that would persuade the ideal philosopher of perfect emptiness. You don’t believe any such arguments exist; neither do I. Thus neither of us think that morality as it’s commonly understood is a real phenomenon. Think of the priest in War of the Worlds who tried to talk to the aliens, explaining that since we’re both rational beings/children of God, we can persuade them not to kill us because it’s wrong. You say (as I understand you) that they would agree that it’s wrong, and just not care, because wrong isn’t necessarily something they care about. I have no problem with any claim you’ve made (well, that I’ve made on your behalf) here; but at this point the way you’re using the word ‘moral’ isn’t a way most people would use it. So you should use some other term altogether.

2) I like to maintain a clearer focus on the fact that, if you care about what’s right, I care about what’s right_1, which is very similar to but not the same as what’s right. Mainly because it helps me to remember there are some things I’m just not going to convince other people of (e.g. I don’t think I could convince the Pope that God doesn’t exist. There’s no fact pattern that’s wholly inconsistent with the property god_exists, and the Pope has that buried deep enough in its priors that I don’t think it’s possible to root it out). But (as of reading your comment on yesterday’s post) I don’t think we disagree on the substance, just on the emphasis.

Thanks for an engaging series of posts; as I said, I think you’re the closest or second-closest I’ve ever come across to someone sharing my meta-ethics.”

Great comment by Yvain:

“I was one of the people who suggested the term h-right before. I’m not great with mathematical logic, and I followed the proof only with difficulty, but I think I understand it and I think my objections remain. I think Eliezer has a brilliant theory of morality and that it accords with all my personal beliefs, but I still don’t understand where it stops being relativist.

I agree that some human assumptions like induction and Occam’s Razor have to be used partly as their own justification. But an ultimate justification of a belief has to include a reason for choosing it out of a belief-space.

For example, after recursive justification hits bottom, I keep Occam and induction because I suspect they reflect the way the universe really works. I can’t prove it without using them. But we already know there are some things that are true but can’t be proven. I think one of those things is that reality really does work on inductive and Occamian principles. So I can choose these two beliefs out of belief-space by saying they correspond to reality.

Some other starting assumptions ground out differently. Clarence Darrow once said something like “I hate spinach, and I’m glad I hate it, because if I liked it I’d eat it, and I don’t want to eat it because I hate it.” He was making a mistake somewhere! If his belief is “spinach is bad”, it probably grounds out in some evolutionary reason like insufficient energy for the EEA. But that doesn’t justify his current statement “spinach is bad”. His real reason for saying “spinach is bad” is that he dislikes it. You can only choose “spinach is bad” out of belief-space based on Clarence Darrow’s opinions.

One possible definition of “absolute” vs. “relative”: a belief is absolutely true if people pick it out of belief-space based on correspondence to reality; if people pick it out of belief-space based on other considerations, it is true relative to those considerations.

“2+2=4” is absolutely true, because it’s true in the system PA, and I pick PA out of belief-space because it does better than, say, self-PA would in corresponding to arithmetic in the real world. “Carrots taste bad” is relatively true, because it’s true in the system “Yvain’s Opinions” and I pick “Yvain’s Opinions” out of belief-space only because I’m Yvain.
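As an aside on the “true in the system PA” point: the sense in which “2+2=4” is a theorem rather than an opinion can be illustrated with a one-line formal proof. The sketch below uses Lean (whose built-in natural-number arithmetic stands in here for PA, an assumption on my part, not something from Yvain’s comment); the proof term `rfl` just asks the system to check that both sides compute to the same value.

```lean
-- In Lean, 2 + 2 and 4 reduce to the same natural number,
-- so the equation holds by definitional equality (`rfl`).
example : 2 + 2 = 4 := rfl
```

The point of the illustration is Yvain’s: the proof is mechanical once the system is fixed, but nothing inside the system tells you why to pick that system out of belief-space in the first place.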

When Eliezer says X is “right”, he means X satisfies a certain complex calculation. That complex calculation is chosen out of all the possible complex calculations in complex-calculation space because it’s the one that matches what humans believe.

This does, technically, create a theory of morality that doesn’t explicitly reference humans. Just like intelligent design theory doesn’t explicitly reference God or Christianity. But most people believe that intelligent design should be judged as a Christian theory, because being a Christian is the only reason anyone would ever select it out of belief-space. Likewise, Eliezer’s system of morality should be judged as a human morality, because being a human is the only reason anyone would ever select it out of belief-space.

That’s why I think Eliezer’s system is relative. I admit it’s not directly relative, in that Eliezer isn’t directly picking “Don’t murder” out of belief-space every time he wonders about murder, based only on human opinion. But if I understand correctly, he’s referring the question to another layer, and then basing that layer on human opinion.

An umpire whose procedure for making tough calls is “Do whatever benefits the Yankees” isn’t very fair. A second umpire whose procedure is “Always follow the rules in Rulebook X” and writes in Rulebook X “Do whatever benefits the Yankees” may be following a rulebook, but he is still just as far from objectivity as the last guy was.

I think the second umpire’s call is “correct” relative to Rulebook X, but I don’t think the call is absolutely correct.”

This entry was posted in CEV, Lesswrong Zusammenfassungen, meta-ethics. Bookmark the permalink.
