401. The Moral Void – 405. 2 of 10, not 3 total

401. The Moral Void

What if there were an objective morality, somehow baked into the fabric of the universe? Imagine a stone tablet that reads: “PAIN IS GOOD.”

Would you start torturing people?

Otherwise good people kill and hurt others, all in the name of God. God says “killing infidels is good”, and even if you don’t like doing it, it’s nonetheless your duty. Or so they think.

This is the danger with “objective moralities”. You’ll ignore your own intuitions and commit cruelties, all because some higher authority said so.

Imagine you could determine the inscription on that stone tablet. What would it say? To put it differently: what do you wish the tablet said?

But if you already know what you want it to say, why don’t you just do that?

Hm, I see Yudkowsky’s point, but I don’t find it very convincing. Sometimes you have to trust your intuitions, and sometimes you have to follow the orders of some higher authority, like math, logic or Bayes.

If there is some genuine, objective morality out there and it tells you to do something that sounds prima facie like a bad idea, well, you should probably do it anyway. Just like effective altruism sounds kind of stupid at first, but makes a lot of sense upon reflection.

And remember – Yudkowsky is the one who says that we shouldn’t trust our intuitions and should “shut up and multiply”! But how do we know which moral intuitions are false (like the ones responsible for scope insensitivity) and which are true?
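(For the record, “shut up and multiply” means trusting the arithmetic over the gut feeling. A minimal worked example in Python; the numbers are the ones from Yudkowsky’s “Circular Altruism”, the code framing is mine:)

```python
# "Shut up and multiply": rank options by expected value, not by how they feel.
certain_option = 1.0 * 400   # save 400 lives with certainty
risky_option   = 0.9 * 500   # save 500 lives with 90% probability

print(certain_option)   # 400.0
print(risky_option)     # 450.0 -> the "uncomfortable" gamble saves more lives in expectation
```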

OTOH, I think Konkvistador has a point:

I find it funny that many of the people here who were pretty much freaked out by the idea of “objective morality built into the fabric of the universe” not really mattering for humans nevertheless have no problem, when it comes to mythology, criticizing Abraham for being willing to sacrifice his son because God told him to.

This morality stuff is rather tricky.

402. Created Already in Motion

I have no idea what Yudkowsky wants to tell us with this post.

Something about a tortoise that refuses to accept modus ponens (Lewis Carroll’s “What the Tortoise Said to Achilles”). So, yeah, there are minds that you can’t convince by any argument whatsoever.
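To make the tortoise’s trick concrete, here is a minimal sketch in Python (the fact representation and the engine are my own toy illustration, not anything from the post). With modus ponens built into the engine as a rule, it derives B from A and “if A then B”; if modus ponens is instead written down as just one more premise, the ruleless engine derives nothing. That is the tortoise’s regress. (muflax says more about this in the comments below.)

```python
# Toy inference engine: built-in rules are functions, premises are data.
# (My own illustration; the representation is invented for this sketch.)

def derive(premises, rules):
    """Apply the built-in rules until no new facts appear."""
    known = set(premises)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new_fact in rule(known):
                if new_fact not in known:
                    known.add(new_fact)
                    changed = True
    return known

def modus_ponens(known):
    """Built-in rule: from A and ('if', A, B), conclude B."""
    return [fact[2] for fact in known
            if isinstance(fact, tuple) and fact[0] == 'if' and fact[1] in known]

premises = {'A', ('if', 'A', 'B')}

# Engine WITH modus ponens as a built-in rule: B gets derived.
print(derive(premises, [modus_ponens]))   # contains 'B'

# Engine WITHOUT built-in rules: writing modus ponens down as a further
# premise just adds another inert fact -- 'B' is never derived, no matter
# how many such meta-premises you pile on. That's the regress.
meta = ('if', ('and', 'A', ('if', 'A', 'B')), 'B')
print(derive(premises | {meta}, []))      # no 'B'
```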

403. I’d Take It

What would you do with 10 trillion dollars?

404. The Bedrock of Fairness

A dialogue about how to define “fairness” and morality, although it reaches no clear conclusion (at least I didn’t see one).

There is a complicated discussion about CEV (Coherent Extrapolated Volition) in the comment section.

405. 2 of 10, not 3 total

Just a short remark about the comment policy.


2 Responses to 401. The Moral Void – 405. 2 of 10, not 3 total

  1. muflax says:

    About “Created Already in Motion”: I think Eliezer’s point is that a mind has to be built with certain axioms already in place. You can’t convince a generic mind to use modus ponens; it must already be designed to do so. (How evolution managed to pull that off is anyone’s guess.)

    Similarly, basic moral intuitions also have to be built-in, like “suffering is bad”. A generic AI might *understand* that pushing people in front of trains is “bad”, but will simply not care about “bad”. It’s just a label. So that understanding needs to be hooked up to some decision algorithm as well.

    So he’s just arguing for moral externalism, i.e. that moral beliefs are not automatically motivating, you really need to have an independent motivation as well. (Which makes beliefs about morality a bit redundant, I think. Overall, I’m not entirely convinced by externalism, though it’s certainly a good heuristic.)

    I’m sure Eliezer took the Achilles/Tortoise example from GEB, where it’s used to make a similar point about formal systems, i.e. that you don’t have an ultimate set of logical axioms every mind must agree with, but that you simply have to accept certain rules (like modus ponens) or you don’t get anywhere. Can’t reason a rock into using induction.

    Thus, you can’t just build a general AI and hope for it to discover True Morality on its own. Whatever (fundamental) motivation we forget to implement will simply be missing forever. Which sucks. (Though I’m not entirely convinced by that, at least once you have sufficiently intelligent minds, but it certainly doesn’t sound like a good idea to trust the Gods of Game Theory to fix your design flaws. Azathoth doesn’t have a good track record with that kinda thing.)
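A side note on muflax’s second paragraph, since it is easy to make concrete: here is a toy agent (all names and numbers are my own invention for illustration). It classifies actions as “bad” perfectly well, but its decision algorithm consults only its utility function, so the label stays inert until it is explicitly wired in:

```python
# Hypothetical toy agent: knowing which actions are "bad" is separate
# from caring about it. (Illustration only; everything here is invented.)

ACTIONS = {
    "push_person_in_front_of_train": {"is_bad": True,  "reward": 10},
    "wave_politely":                 {"is_bad": False, "reward": 1},
}

def is_bad(action):
    # The agent's world model: it genuinely "understands" which actions are bad.
    return ACTIONS[action]["is_bad"]

def choose(utility):
    # The decision algorithm maximizes utility; nothing else moves it.
    return max(ACTIONS, key=utility)

# Utility that ignores the label: the agent knows, but does not care.
indifferent = lambda a: ACTIONS[a]["reward"]
print(choose(indifferent))   # push_person_in_front_of_train

# Hook the label into the utility function: now knowing becomes caring.
caring = lambda a: ACTIONS[a]["reward"] - (1000 if is_bad(a) else 0)
print(choose(caring))        # wave_politely
```

That is the “needs to be hooked up to some decision algorithm” step: the belief and the motivation are separate components.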
