408. Is Morality Given?

Again, it’s difficult to summarize, but here is one of the best parts:

Subhan:  “Suppose there’s an alien species somewhere in the vastness of the multiverse, who evolved from carnivores.  In fact, through most of their evolutionary history, they were cannibals.  They’ve evolved different emotions from us, and they have no concept that murder is wrong—”

Obert:  “Why doesn’t their society fall apart in an orgy of mutual killing?”

Subhan:  “That doesn’t matter for our purposes of theoretical metaethical investigation.  But since you ask, we’ll suppose that the Space Cannibals have a strong sense of honor—they won’t kill someone they promise not to kill; they have a very strong idea that violating an oath is wrong.  Their society holds together on that basis, and on the basis of vengeance contracts with private assassination companies.  But so far as the actual killing is concerned, the aliens just think it’s fun.  When someone gets executed for, say, driving through a traffic light, there’s a bidding war for the rights to personally tear out the offender’s throat.”

Obert:  “Okay… where is this going?”

Subhan:  “I’m proposing that the Space Cannibals not only have no sense that murder is wrong—indeed, they have a positive sense that killing is an important part of life—but moreover, there’s no path of arguments you could use to persuade a Space Cannibal of your view that murder is wrong.  There’s no fact the aliens can learn, and no chain of reasoning they can discover, which will ever cause them to conclude that murder is a moral wrong.  Nor is there any way to persuade them that they should modify themselves to perceive things differently.”

Obert:  “I’m not sure I believe that’s possible—”

Subhan:  “Then you believe in universally compelling arguments processed by a ghost in the machine.  For every possible mind whose utility function assigns terminal value +1, mind design space contains an equal and opposite mind whose utility function assigns terminal value −1.  A mind is a physical device and you can’t have a little blue woman pop out of nowhere and make it say 1 when the physics calls for it to say 0.”

Obert:  “Suppose I were to concede this.  Then?”

Subhan:  “Then it’s possible to have an alien species that believes murder is not wrong, and moreover, will continue to believe this given knowledge of every possible fact and every possible argument.  Can you say these aliens are mistaken?”

Obert:  “Maybe it’s the right thing to do in their very different, alien world—”

Subhan:  “And then they land on Earth and start slitting human throats, laughing all the while, because they don’t believe it’s wrong.  Are they mistaken?”

Obert:  “Yes.”

Subhan:  “Where exactly is the mistake?  In which step of reasoning?”

Obert:  “I don’t know exactly.  My guess is that they’ve got a bad axiom.”

Subhan:  “Dammit!  Okay, look.  Is it possible that—by analogy with the Space Cannibals—there are true moral facts of which the human species is not only presently unaware, but incapable of perceiving in principle?  Could we have been born defective—incapable even of being compelled by the arguments that would lead us to the light?  Moreover, born without any desire to modify ourselves to be capable of understanding such arguments?  Could we be irrevocably mistaken about morality—just like you say the Space Cannibals are?”

Obert:  “I… guess so…”

Subhan:  “You guess so?  Surely this is an inevitable consequence of believing that morality is a given, independent of anyone’s preferences!  Now, is it possible that we, not the Space Cannibals, are the ones who are irrevocably mistaken in believing that murder is wrong?”

Obert:  “That doesn’t seem likely.”

Subhan:  “I’m not asking you if it’s likely, I’m asking you if it’s logically possible!  If it’s not possible, then you have just confessed that human morality is ultimately determined by our human constitutions.  And if it is possible, then what distinguishes this scenario of ‘humanity is irrevocably mistaken about morality’, from finding a stone tablet on which is written the phrase ‘Thou Shalt Murder’ without any known justification attached?  How is a given morality any different from an unjustified stone tablet?”

That problem really freaks me out. Anyway, here’s another great one:

Subhan:  “John McCarthy said:  ‘You say you couldn’t live if you thought the world had no purpose. You’re saying that you can’t form purposes of your own—that you need someone to tell you what to do. The average child has more gumption than that.’  For every kind of stone tablet that you might imagine anywhere, in the trends of the universe or in the structure of logic, you are still left with the question:  ‘And why obey this morality?’  It would be your decision to follow this trend of the universe, or obey this structure of logic.  Your decision—and your preference.”

Obert:  “That doesn’t follow!  Just because it is my decision to be moral—and even because there are drives in me that lead me to make that decision—it doesn’t follow that the morality I follow consists merely of my preferences.  If someone gives me a pill that makes me prefer to not be moral, to commit murder, then this just alters my preference—but not the morality; murder is still wrong.  That’s common moral sense—”

Subhan:  “I beat my head against my keyboard!  What about scientific common sense?  If morality is this mysterious given thing, from beyond space and time—and I don’t even see why we should follow it, in that case—but in any case, if morality exists independently of human nature, then isn’t it a remarkable coincidence that, say, love is good?”

Obert:  “Coincidence?  How so?”

Subhan:  “Just where on Earth do you think the emotion of love comes from?  If the ancient Greeks had ever thought of the theory of natural selection, they could have looked at the human institution of sexual romance, or parental love for that matter, and deduced in one flash that human beings had evolved—or at least derived tremendous Bayesian evidence for human evolution.  Parental bonds and sexual romance clearly display the signature of evolutionary psychology—they’re archetypal cases, in fact, so obvious we usually don’t even see it.”

Obert:  “But love isn’t just about reproduction—”

Subhan:  “Of course not; individual organisms are adaptation-executers, not fitness-maximizers.  But for something independent of humans, morality looks remarkably like godshatter of natural selection.  Indeed, it is far too much coincidence for me to credit.  Is happiness morally preferable to pain?  What a coincidence!  And if you claim that there is any emotion, any instinctive preference, any complex brain circuitry in humanity which was created by some external morality thingy and not natural selection, then you are infringing upon science and you will surely be torn to shreds—science has never needed to postulate anything but evolution to explain any feature of human psychology—”

Obert:  “I’m not saying that humans got here by anything except evolution.”

Subhan:  “Then why does morality look so amazingly like a product of an evolved psychology?”

Obert:  “I don’t claim perfect access to moral truth; maybe, being human, I’ve made certain mistakes about morality—”

Subhan:  “Say that—forsake love and life and happiness, and follow some useless damn trend of the universe or whatever—and you will lose every scrap of the moral normality that you once touted as your strong point.  And I will be right here, asking, ‘Why even bother?’  It would be a pitiful mind indeed that demanded authoritative answers so strongly, that it would forsake all good things to have some authority beyond itself to follow.”

Obert:  “All right… then maybe the reason morality seems to bear certain similarities to our human constitutions, is that we could only perceive morality at all, if we happened, by luck, to evolve in consonance with it.”

Subhan:  “Horsemanure.”

Obert:  “Fine… you’re right, that wasn’t very plausible.  Look, I admit you’ve driven me into quite a corner here…”

Indeed, my confusion level is rising.

And here is the end of the essay:

Obert:  “You know, when I reflect on this whole argument, it seems to me that your position has the definite advantage when it comes to arguments about ontology and reality and all that stuff—”

Subhan:  “‘All that stuff’?  What else is there, besides reality?”

Obert:  “Okay, the morality-as-preference viewpoint is a lot easier to shoehorn into a universe of quarks.  But I still think the morality-as-given viewpoint has the advantage when it comes to, you know, the actual morality part of it—giving answers that are good in the sense of being morally good, not in the sense of being a good reductionist.  Because, you know, there are such things as moral errors, there is moral progress, and you really shouldn’t go around thinking that murder would be right if you wanted it to be right.”

Subhan:  “That sounds to me like the logical fallacy of appealing to consequences.”

Obert:  “Oh?  Well, it sounds to me like an incomplete reduction—one that doesn’t quite add up to normality.”

I have to say, it doesn’t look good for Obert (which is rather scary).

Oh, and fuck Egan’s law.


This entry was posted in Lesswrong Zusammenfassungen, meta-ethics. Bookmark the permalink.
