Robin Hanson and Eliezer Yudkowsky’s debate on the future of superintelligence is now a free e-book. Cool cover:

[Cover image: foom.jpg]

The transcript of their in-person debate starts on p. 431.  I conditionally agree with Robin: If superintelligence ever arrives, it will arrive gradually and in a decentralized way, not in a sudden "foom" that hands a dramatic first-mover advantage to whoever gets there first.  Still, I'm surprised that Robin is so willing to grant the plausibility of superintelligence in the first place.

Yes, we can imagine someone so smart that he can make himself smarter, which in turn allows him to make himself smarter still, until he becomes so smart we lesser intelligences can’t even understand him anymore.  But there are two obvious reasons to yawn.

1. Observation is a better way to learn about the world than imagination.  And if you take a thorough look at actually existing creatures, it’s not clear that smarter creatures have any tendency to increase their intelligence.  This is obvious if you focus on standard IQ: High-IQ adults, like low-IQ adults, typically don’t get any smarter as time goes on.  Even high-IQ people who specifically devote their lives to the study of intelligence don’t seem to get smarter over time.  If they can’t do it, who can?

2. In the real world, self-reinforcing processes eventually asymptote.  So even if smarter creatures were able to repeatedly increase their own intelligence, we should expect the incremental gains to get smaller and smaller over time, not skyrocket to infinity.
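To make the point concrete, here is a toy numerical sketch of my own, not a model from the book: if each round of self-improvement yields a smaller proportional gain than the last, the trajectory converges to a finite ceiling rather than exploding.  The starting level, initial gain, and decay rate below are arbitrary assumptions chosen purely for illustration.

```python
import math

# A toy illustration (not anything from the Hanson-Yudkowsky debate) of why
# self-reinforcing improvement with diminishing returns levels off instead of
# exploding. Each round multiplies "intelligence" by (1 + gain), and the gain
# itself shrinks geometrically -- both parameters are assumed, not measured.

def recursive_improvement(start=100.0, first_gain=0.20, decay=0.7, rounds=30):
    """Return intelligence levels after successive self-improvement rounds."""
    level, gain, history = start, first_gain, [start]
    for _ in range(rounds):
        level *= 1.0 + gain   # apply this round's improvement
        gain *= decay         # every later improvement is smaller than the last
        history.append(level)
    return history

levels = recursive_improvement()

# The gains form a convergent geometric series, so the whole trajectory is
# bounded above by start * exp(first_gain / (1 - decay)) -- a finite ceiling.
print(f"after  5 rounds: {levels[5]:.1f}")
print(f"after 30 rounds: {levels[30]:.1f}")
print(f"upper bound    : {100.0 * math.exp(0.20 / (1 - 0.7)):.1f}")
```

With these (made-up) numbers, the level climbs from 100 to roughly 168 after five rounds, then crawls toward the high 180s and never passes about 195.  The self-reinforcement is real, but the "foom" never comes.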

In the end, the superintelligence debate comes down to fallbacks.  Eliezer’s fallback seems to be, “This time it’s different.”  My fallback is, “I’ll believe it when I see it.”  When a prediction goes against everything I observe, I see no alternative.