581. BHTV: de Grey and Yudkowsky – 583. Visualizing Eutopia

581. BHTV: de Grey and Yudkowsky

Just a Bloggingheads interview.

582. For the People Who Are Still Alive

Ever since I realized that physics seems to tell us straight out that we live in a Big World, I’ve become much less focused on creating lots of people, and much more focused on ensuring the welfare of people who are already alive.

If your decision to not create a person means that person will never exist at all, then you might, indeed, be moved to create them, for their sakes.  But if you’re just deciding whether or not to create a new person here, in your own Hubble volume and Everett branch, then it may make sense to have relatively lower populations within each causal volume, living higher qualities of life.  It’s not like anyone will actually fail to be born on account of that decision – they’ll just be born predominantly into regions with higher standards of living.

Hm, I’m not so sure about that. And Yudkowsky admits that this reasoning is not very convincing:

Am I sure that this statement, that I have just emitted, actually makes sense?

Not really.  It dabbles in the dark arts of anthropics, and the Dark Arts don’t get much murkier than that.  Or to say it without the chaotic inversion:  I am stupid with respect to anthropics.

Glad to hear that. No wonder, then, that anthropics confuses me to no end.

What I do want for myself, is for the largest possible proportion of my future selves to lead eudaimonic existences, that is, to be happy.  This is the “probability” of a good outcome in my expected utility maximization.  I’m not concerned with having more of me – really, there are plenty of me already – but I do want most of me to be having fun.
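Restated in my own notation (just my reading, not anything from the post), the quantity being maximized is roughly

EU(a) = \sum_b \mu(b \mid a)\, u(b)

where \mu(b \mid a) is the fraction (the measure) of my future selves that end up in branch b given action a, playing the role of probability, and u(b) says how eudaimonic existence in that branch is. The upshot: you maximize the proportion of future selves having fun, not the sheer number of copies.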

This position implies pushing for FAI, and if that’s impossible we should instantly annihilate everything. I love it!

The conclusion:

It seems to me that in a Big World, the people who already exist in your region have a much stronger claim on your charity than babies who have not yet been born into your region in particular.

And that’s why, when there is research to be done, I do it not just for all the future babies who will be born – but, yes, for the people who already exist in our local region, who are already our responsibility.

For the good of all of us, except the ones who are dead.

Good comment by Steven (Kaas, I assume):

“I’m completely not getting this. If all possible mind-histories are instantiated at least once, and their being instantiated at least once is all that matters, then how does anything we do matter?

If you became convinced that people had not just little checkmarks but little continuous dials representing their degree of existence (as measured by algorithmic complexity), how would that change your goals?”

Another one by Michael Vassar:

“I’m just incredibly skeptical of attempts to do moral reasoning by invoking exotic metaphysical considerations such as anthropics, even if one is confident that ultimately one will have to do so. Human rationality has enough trouble dealing with science. It’s nice that we seem to be able to do better than that, but THIS MUCH better? REALLY? I think that there are terribly strong biases towards deciding that “it all adds up to normality” involved here, even when it’s not clear what ‘normality’ means. When one doesn’t decide that, it seems that the tendency is to decide that it all adds up to some cliche, which seems VERY unlikely. I’m also not at all sure how certain we should be of a big universe, but personally I don’t feel very confident of it. I’d say it’s the way to bet, but not at what odds it remains the way to bet. I rarely find myself in practical situations where my actions would be different if I had some particular metaphysical belief rather than another, though it does come up and have some influence on e.g. my thoughts on vegetarianism.”

I have to say, most (consequentialist) ethical theories completely break down if you’re living in a Big World.  Which is very, very bad.

I don’t try to think about it too much, but I guess I have to solve this problem sooner or later.

583. Visualizing Eutopia

Yesterday I asked my esteemed co-blogger Robin what he would do with “unlimited power”, in order to reveal something of his character… overall he ran away from the question like a startled squirrel.

…For a long time, I too ran away from the question like a startled squirrel.  First I claimed that superintelligences would inevitably do what was right, relinquishing moral responsibility in toto.  After that, I propounded various schemes to shape a nice superintelligence, and let it decide what should be done with the world.

Not that there’s anything wrong with that.  Indeed, this is still the plan.  But it still meant that I, personally, was ducking the question.

Why?  Because I expected to fail at answering.

Human psychology is really fucked up. We don’t even know what we want.

…it seems to me that Robin asks too little of the future.  It’s all very well to plead that you are only forecasting, but if you display greater revulsion to the idea of a Friendly AI than to the idea of rapacious hardscrapple frontier folk…

I thought that Robin might be asking too little, due to not visualizing any future in enough detail.  Not the future but any future.  I’d hoped that if Robin had allowed himself to visualize his “perfect future” in more detail, rather than focusing on all the compromises he thinks he has to make, he might see that there were futures more desirable than the rapacious hardscrapple frontier folk.

It’s hard to see on an emotional level why a genie might be a good thing to have, if you haven’t acknowledged any wishes that need granting.

And for this reason Yudkowsky wrote the Fun Theory Sequence, which I’ll begin to summarize in the next post or so.
