481. My Best and Worst Mistake; 482. Raised in Technophilia

The next posts deal only with Yudkowsky’s “coming of age” and are not very important.

481. My Best and Worst Mistake

My youthful disbelief in a mathematics of general intelligence was simultaneously one of my all-time worst mistakes, and one of my all-time best mistakes.

Because I disbelieved that there could be any simple answers to intelligence, I went and I read up on cognitive psychology, functional neuroanatomy, computational neuroanatomy, evolutionary psychology, evolutionary biology, and more than one branch of Artificial Intelligence.  When I had what seemed like simple bright ideas, I didn’t stop there, or rush off to try and implement them, because I knew that even if they were true, even if they were necessary, they wouldn’t be sufficient: intelligence wasn’t supposed to be simple, it wasn’t supposed to have an answer that fit on a T-Shirt.  It was supposed to be a big puzzle with lots of pieces; and when you found one piece, you didn’t run off holding it high in triumph, you kept on looking.  Try to build a mind with a single missing piece, and it might be that nothing interesting would happen.

Here’s an excellent observation:

When I look back upon my past, I am struck by the number of semi-accidental successes, the number of times I did something right for the wrong reason.  From your perspective, you should chalk this up to the anthropic principle: if I’d fallen into a true dead end, you probably wouldn’t be hearing from me on this blog.  From my perspective it remains something of an embarrassment.

482. Raised in Technophilia

Ah, drug regulation, a bastion of human insanity:

Today, Robin Hanson raised the topic of slow FDA approval of drugs approved in other countries.  Someone in the comments pointed out that Thalidomide was sold in 50 countries under 40 names, but that only a small amount was given away in the US, so that there were 10,000 malformed children born globally, but only 17 children in the US.

But how many people have died because of the slow approval in the US, of drugs more quickly approved in other countries—all the drugs that didn’t go wrong?  And I ask that question because it’s what you can try to collect statistics about—this says nothing about all the drugs that were never developed because the approval process is too long and costly.  According to this source, the FDA’s longer approval process prevents 5,000 casualties per year by screening off medications found to be harmful, and causes at least 20,000-120,000 casualties per year just by delaying approval of those beneficial medications that are still developed and eventually approved.

So there really is a reason to be allergic to people who go around saying, “Ah, but technology has risks as well as benefits”.  There’s a historical record showing over-conservativeness, the many silent deaths of regulation being outweighed by a few visible deaths of nonregulation.  If you’re really playing the middle, why not say, “Ah, but technology has benefits as well as risks”?
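To make the asymmetry concrete, here is a back-of-envelope calculation as a small Python sketch, using only the figures quoted above; the numbers are the post’s estimates, not independent data:

```python
# Back-of-envelope comparison of the visible vs. invisible costs of slow
# FDA approval, using the figures quoted in the post above. These are the
# post's estimates, not independent data.

casualties_prevented_per_year = 5_000          # harmful drugs screened out (visible benefit)
delay_casualties_per_year = (20_000, 120_000)  # deaths from delayed approval (invisible cost, range)

low, high = delay_casualties_per_year
net_low = low - casualties_prevented_per_year
net_high = high - casualties_prevented_per_year

print(f"Net excess casualties per year from delay alone: "
      f"{net_low:,} to {net_high:,}")
# -> Net excess casualties per year from delay alone: 15,000 to 115,000

# Note: this still omits drugs never developed because approval is too
# long and costly, so the true net figure would be even more lopsided.
```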

Sadly, I’m pretty sure that this state of affairs won’t change until we reach a positive (or negative) singularity, because most medical students, i.e. the people who will later write our drug regulations, are probably among the most irrational, paternalistic, signaling-obsessed folks out there. And the views of the general public are no better.

Anyway, Yudkowsky slowly realizes that, at least sometimes, the Luddites are right, especially regarding new technologies like nanotechnology or AI that pose existential threats:

What was the lens through which I filtered these teachings?  Hope. Optimism.  Looking forward to a brighter future.  That was the fundamental meaning of A Step Farther Out unto me, the lesson I took in contrast to the Sierra Club’s doom-and-gloom.  On one side was rationality and hope; on the other, ignorance and despair.

Some teenagers think they’re immortal and ride motorcycles.  I was under no such illusion and quite reluctant to learn to drive, considering how unsafe those hurtling hunks of metal looked.  But there was something more important to me than my own life:  The Future.  And I acted as if that was immortal.  Lives could be lost, but not the Future.

And when I noticed that nanotechnology really was going to be a potentially extinction-level challenge?

The young Eliezer thought, explicitly, “Good heavens, how did I fail to notice this thing that should have been obvious?  I must have been too emotionally attached to the benefits I expected from the technology; I must have flinched away from the thought of human extinction.”

And then…

I didn’t declare a Halt, Melt, and Catch Fire.  I didn’t rethink all the conclusions that I’d developed with my prior attitude.  I just managed to integrate it into my worldview, somehow, with a minimum of propagated changes.  Old ideas and plans were challenged, but my mind found reasons to keep them.  There was no systemic breakdown, unfortunately.

 
