Discussions about morality often repeat themselves. To prevent this, Yudkowsky refers to this post, in which he asks ten questions:
- It certainly looks like there is an important distinction between a statement like “The total loss of human life caused by World War II was roughly 72 million people” and “We ought to avoid a repeat of World War II.” Anyone who argues that these statements are of the same fundamental kind must explain away the apparent structural differences between them. What are the exact structural differences?
- We experience some of our morals and preferences as being voluntary choices, others as involuntary perceptions. I choose to play on the side of Rationality, but I don’t think I could choose to believe that death is good any more than I could choose to believe the sky is green. What psychological factors account for these differences in my perceptions of my own preferences?
- At a relatively young age, children begin to believe that while the teacher can make it all right to stand on your chair by giving permission, the teacher cannot make it all right to steal from someone else’s backpack. (I can’t recall the exact citation on this.) Do young children in a religious environment believe that God can make it all right to steal from someone’s backpack?
- Both individual human beings and civilizations appear to change at least some of their moral beliefs over the course of time. Some of these changes are experienced as “decisions”, others are experienced as “discoveries”. Is there a systematic direction to at least some of these changes? How does this systematic direction arise causally?
- To paraphrase Alfred Tarski, the statement “My car is painted green” is true if and only if my car is painted green. Similarly, someone might try to get away with asserting that the statement “Human deaths are bad” is true if and only if human deaths are bad. Is this valid?
- Suppose I involuntarily administered to you a potion which would cause you to believe that human deaths were good. Afterward, would you believe truly that human deaths were good, or would you believe falsely that human deaths were good?
- Although the statement “My car is painted green” is presently false, I can make it true at a future time by painting my car green. However, I can think of no analogous action I could take which would make it right to kill people. Does this make the moral statement stronger, weaker, or is there no sense in making the comparison?
- There does not appear to be any “place” in the environment where the referents of moral statements are stored, analogous to the place where my car is stored. Does this necessarily indicate that moral statements are empty of content, or could they correspond to something else? Is the statement “2 + 2 = 4” true? Could it be made untrue? Is it falsifiable? Where is its content?
- The phrase “is/ought gap” refers to the notion that no ought-statement can be logically derived from any number of is-statements, without at least one ought-statement in the mix. For example, suppose I have a remote control with two buttons: the red button kills an innocent prisoner, and the green button sets them free. I cannot derive the ought-statement “I ought not to press the red button” without both the is-statement “If I press the red button, an innocent will die” and the ought-statement “I ought not to kill innocents.” Should we distinguish mixed ought-statements like “I ought not to press the red button” from pure ought-statements like “I ought not to kill innocents”? If so, is there really any such thing as a “pure” ought-statement, or do they all have is-statements mixed into them somewhere?
- The statement “This painting is beautiful” could be rendered untrue by flinging a bucket of mud on the painting. Similarly, in the remote-control example above, the statement “It is wrong to press the red button” can be rendered untrue by rewiring the remote. Are there pure aesthetic judgments? Are there pure preferences?
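The is/ought derivation in the remote-control example can be sketched in a simple deontic-style notation (this formalization is my own illustration, not from the post; read $O(\neg A)$ as “it ought to be that not-$A$”, and note that the bridge rule is itself a substantive assumption):

```latex
\begin{align*}
&\text{P1 (is):}    && \text{Press}_{\mathrm{red}} \rightarrow \text{Kill} \\
&\text{P2 (ought):} && O(\neg \text{Kill}) \\
&\text{Bridge rule:} && \bigl((A \rightarrow B) \wedge O(\neg B)\bigr) \;\Rightarrow\; O(\neg A) \\
&\text{C (mixed ought):} && O(\neg \text{Press}_{\mathrm{red}})
\end{align*}
```

The conclusion C is “mixed” in the sense the question describes: it inherits the empirical content of P1, so rewiring the remote (making P1 false) withdraws the grounds for C while leaving the pure ought-statement P2 untouched.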