Robin Hanson and Eliezer Yudkowsky’s debate on the future of superintelligence is now a free e-book. Cool cover:
The transcript of their in-person debate starts on p.431. I conditionally agree with Robin: If a superintelligence came along, it would do so in a gradual and decentralized manner, not a sudden “foom” leading to a dramatic first-mover advantage. Still, I’m surprised that Robin is so willing to grant the plausibility of superintelligence in the first place.
Yes, we can imagine someone so smart that he can make himself smarter, which in turn allows him to make himself smarter still, until he becomes so smart we lesser intelligences can’t even understand him anymore. But there are two obvious reasons to yawn.
1. Observation is a better way to learn about the world than imagination. And if you take a thorough look at actually existing creatures, it’s not clear that smarter creatures have any tendency to increase their intelligence. This is obvious if you focus on standard IQ: High-IQ adults, like low-IQ adults, typically don’t get any smarter as time goes on. Even high-IQ people who specifically devote their lives to the study of intelligence don’t seem to get smarter over time. If they can’t do it, who can?
2. In the real world, self-reinforcing processes eventually asymptote. So even if smarter creatures were able to repeatedly increase their own intelligence, we should expect the incremental increases to get smaller and smaller over time, not skyrocket to infinity.
In the end, the superintelligence debate comes down to fallbacks. Eliezer’s fallback seems to be, “This time it’s different.” My fallback is, “I’ll believe it when I see it.” When a prediction goes against everything I observe, I see no alternative.
READER COMMENTS
Thomas DeMeo
Sep 5 2013 at 11:54am
I would imagine that some form of super-capacity would occur first, marrying our current intellect with a rapidly increasing ability to delegate to technology and accomplish tasks.
Carl
Sep 5 2013 at 11:56am
They also don’t have the technology to genetically engineer themselves to increase intelligence, whereas we do have the capability to rewrite or add to software and improve it today. But they are more likely to select mates or sperm and egg donors for increased intelligence, to protect their children from lead poisoning or iodine deficiency, and so on. And they do enhance their learned skills and capabilities over time.
If we develop human-level artificial intelligence, then we will have the capabilities to produce increased intelligence through R&D (that’s how we get there), as software advances today. Reaching superintelligence is then just a matter of continuing a bit further (bolstered by cheaper faster AI labor).
Yes, but often far beyond the capabilities of the closest competitor in the biological world, e.g. nuclear weapons, interplanetary spacecraft, encryption, fiber optic cable, pastoralism, agriculture, radar…
Just optimizing parameters that matter within the human range puts the limits in the superhuman range. Human intelligence varies with brain size and with the number of neurons devoted to particular tasks: multiplying neuron counts 10 times would be way past the range of observed variation. There is extensive genetic variation among humans in intelligence, and no one has all of the intelligence-enhancing alleles; combining them would put someone well past the smartest observed humans.
Then add in the advantages of being digital with fast computers: thinking orders of magnitude faster than humans, making copies (average ability near maximum ability), and so on. We know that such speedups are possible without architectural improvements over humans.
Cryptomys
Sep 5 2013 at 12:17pm
We have already reached superintelligence in a limited sense. Readers should be aware that fairly inexpensive chess software running on a PC has reached a superhuman performance rating of above 3000. For reference, the greatest human grandmasters, such as Bobby Fischer or Garry Kasparov, were rated around 2800.
What would a chess rating of 10,000 or an IQ of 500 even mean? I may never run into a person with an IQ of 500, but I wouldn't be surprised to find a 10,000-level chess program in a few years.
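A rough way to see what such rating gaps mean is the standard Elo expected-score formula. The sketch below (with illustrative ratings, not measured ones) shows why a 3000-rated engine dominates a 2800-rated grandmaster, and why a hypothetical 10,000 rating would stop conveying any information about head-to-head play against humans.

```python
# Standard Elo expected-score formula: the expected score of player A
# against player B depends only on the rating difference.
def expected_score(rating_a: float, rating_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Illustrative ratings (assumptions for the example, not measurements):
engine, grandmaster = 3000, 2800
print(f"Engine expected score vs. 2800 GM: {expected_score(engine, grandmaster):.2f}")
# ~0.76, i.e. roughly three points out of every four over a long match.

# A hypothetical 10,000-rated player would score essentially 1.0 against any human,
# so past a certain point the number no longer tells us anything about such matchups.
print(f"Hypothetical 10,000 vs. 2800: {expected_score(10000, 2800):.6f}")
```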
Romeo Stevens
Sep 5 2013 at 12:36pm
Pencil and paper, computers, organizational competence, etc., count as humans enhancing their own intelligence. And we do see smart individuals getting smarter over time, not in terms of IQ but in terms of their optimization power over futures they desire.
J Storrs Hall
Sep 5 2013 at 1:23pm
Should you care to read a published, peer-reviewed paper on the subject, consider this one.
jsalvatier
Sep 5 2013 at 1:33pm
These are two uncharacteristically silly points.
1. The reason is that high-IQ people do not have a high-leverage way to modify the way their brains work, while a machine intelligence does if it has access to its own source code.
2. Just because the process of recursive self-improvement will stop eventually doesn't mean it will stop at any value you or I would think of as reasonable. Human intelligence appears to be massively inefficient, so it's quite easy to come up with relatively simple ways we could improve on it: for example, running at 100x the speed, or having 100 times the working memory. Thus, we should expect large improvements once we can actually modify these things.
Luke
Sep 5 2013 at 4:48pm
Maybe this is a Science vs. Bayes issue? E.g. would you have said “I’ll believe it when I see it” about the world’s first atom bomb, or would you have fled Hiroshima?
Daublin
Sep 5 2013 at 5:37pm
People do improve their intelligence!
For IQ in particular, I believe that people who study IQ tests get better at taking them, and thus increase their nominal IQ. I’ll yield to the data if someone can prove otherwise.
That’s just narrowly IQ, though. For intelligence more broadly, people do learn, for example, how to study better. They learn to rehearse music more efficiently. They learn how to listen to cues from a conversation partner. These are all things that make a person more intelligent, in different spheres of intelligence.
It so happens that the things in the previous paragraph do not involve increasing the basic computation speed that people are capable of. People do that, too, however. They do so every time they change their diet to be less foggy-headed, or change their pattern of drinking alcohol.
MingoV
Sep 5 2013 at 8:46pm
Neurologically, it is possible for a child between roughly two and six years old to get more intelligent. The brain grows more neurons and makes more inter-neuron connections during that period. Maximizing learning and the application of knowledge during that period will raise IQ. After that period, the brain can still make connections, but at a greatly reduced pace.
Super-intelligence (without computer implants) will require genetic changes either through selective breeding or gene engineering. The latter is a long way off because few genes related to intelligence have been discovered, and no one knows how to change them to improve intelligence instead of reducing it.
@Daublin: Practice helps very little in IQ tests of intelligent people. If you cannot understand a principle, practicing by reading questions about the principle will not help.
Rob
Sep 5 2013 at 10:07pm
Maybe he should go to bed and sleep on it. From the inside, overconfidence can look a lot like obviousness.
Ken P
Sep 5 2013 at 10:37pm
Looks like I just got 700 pages added to my reading list.
1) Observation is a better way to learn about the world than imagination.
Looking backwards has its problems. Comparing intelligence to IQ tests has problems, too. An infant may have a high IQ, but an adult is far beyond it in the capability to reach goals and solve problems. That growth comes from the intake of percepts and from self-awareness. I find that the older I get, the more problems I can solve. The amount of data available to feed computers is growing exponentially.
Ben Goertzel has some good intuitions that could lead to computers understanding.
2. In the real world, self-reinforcing processes eventually asymptote.
I think everything asymptotes in the minds of most economists. There seems to be a natural tendency to assume stagnation and to discount the potential for innovation.
That said, I would agree with the statement. I just don’t think it applies here. I would expect a workable approach to be constantly seeking out new intellectual territory.
I like that Robin keeps pushing the emulated vs. hand-coded argument. It’s sort of an economist vs. strictly analytical programmer argument. I’m hoping they expand on it as the book goes on, so I can see their visions for the role of structure vs. entropy.
Facial recognition advances suggest we are making progress.
Eelco Hoogendoorn
Sep 6 2013 at 4:35am
To whom, exactly? If you are not blown away by the things our brain does with the watts it’s given, you haven’t really thought about it.
It also suggests you are the kind of person who buys the notion that a neuron is basically a large and slow transistor.
Come back to me with a transistor that manipulates more switches at the single-molecule level than there are transistors in your computer; then maybe you are on to something.
I bet that superhuman intelligence will sooner come from a genetically engineered supersized brain in a vat than from any kind of transistor.
Either way, one will run into all kinds of physical limits pretty rapidly in trying to surpass the human mind, since it is already doing its thing in a pretty clever way at the single-molecule level, at an energy efficiency that silicon can only dream of. And for that reason I agree with Bryan; while superhuman intelligence will be a historic moment, there won’t be much of a singularity.
NZ
Sep 6 2013 at 11:27am
Evolutionarily, average intelligence is evidently optimally suited for our environment. Our approximately 5-second locus of attention, for example, is ideal for the tasks we typically perform. A longer working memory would be just as much of a hindrance as a shorter one.
Designers, by the way, are tasked with designing for people as they are, not for people as they could be. So don’t expect human evolution to be driven by the emergence of more powerful technology.
Add to this the fact that higher-IQ people don’t have as many kids; this isn’t a new pattern linked to modern forms of birth control, either. At least as far back as Jonathan Swift it’s been commonly observed that higher-IQ people have a lower total fertility rate (TFR).
Higher IQ people don’t report being happier either.
How much of the Flynn Effect, meanwhile, is just a result of improved nutrition and a decrease in our average proximity to toxic chemicals like lead? Can’t we expect the Flynn effect to taper off as it hits diminishing returns?
By the way, of the ten or so people with IQs above 150 I’ve personally met, I’d prefer never to be in the same room again with eight or nine of them. This is true regardless of whether you consider their intelligence in the traditional way or in the Howard Gardner “multiple intelligences” way. Most people I know would agree. What does that say about the advantages of superintelligence?
Mark Bahner
Sep 7 2013 at 12:32am
If the trend in computing energy efficiency that held steady from 1945 to 2010 continues for about another decade, silicon computers will have approximately the same energy efficiency as a human brain (i.e. roughly 100 watts for roughly 1 quadrillion operations per second).
“The computing trend that will change everything.”
Mark Bahner
Sep 7 2013 at 12:41am
Hi,
Oops. My bad. That graph was instructions per kWh. Per the graph and the text accompanying it, it will take more like 2-3 decades of continuing the trend before silicon reaches the human brain in energy efficiency.
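As a rough sanity check on that 2-3 decade figure, here is a back-of-the-envelope sketch. The doubling period, the circa-2010 efficiency of silicon, and the brain figures (roughly 100 watts for roughly a quadrillion operations per second, from the earlier comment) are all loose assumptions, not measurements.

```python
import math

# Assumed inputs (rough, illustrative values only):
doubling_years = 1.6        # approximate doubling time of computations per kWh (Koomey-style trend)
silicon_2010 = 1e15         # assumed computations per kWh for efficient silicon circa 2010
brain_ops_per_sec = 1e15    # ~1 quadrillion operations per second (figure from the comment)
brain_watts = 100           # ~100 W (figure from the comment)

# Convert the brain figures to the same units: operations per kWh.
seconds_per_kwh_at_brain_power = 3.6e6 / brain_watts            # 1 kWh = 3.6e6 joules
brain_ops_per_kwh = brain_ops_per_sec * seconds_per_kwh_at_brain_power  # ~3.6e19

# How many doublings does silicon need, and how long would that take at the trend?
doublings = math.log2(brain_ops_per_kwh / silicon_2010)
print(f"Doublings needed: {doublings:.1f}")                      # ~15
print(f"Years at the historical trend: {doublings * doubling_years:.0f}")  # ~24, i.e. 2-3 decades
```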
Mark
Herbert Caron
Sep 7 2013 at 4:15am
We humans are remarkably intelligent creatures, but our thinking is inadequate in addressing the question of producing a still more intelligent creature. It is easy to identify features such as greater speed of processing, greater memory, or (on a physiological basis) more or better neurons. Even H. Gardner’s analysis of IQ into its “components” does not address the thinking process itself. Based on studies of “thinking”, here are a few suggested improvements.
To think effectively, one must retrieve the most relevant, pertinent items from among millions of items stored in the brain’s memory banks, and failing to retrieve even one essential, pertinent item can result in a weak (or wrong) conclusion. No one knows how we effect these amazing retrievals, but we all know people who seem to excel at pertinent retrieval, and others who are poor retrievers. Excellence in retrieval is just ONE feature of intelligence that can be improved.
I can name dozens, but here is one more function that could be improved greatly. Presume you are trying to “think thru” the problem of what action would best serve American interests in today’s Syria, and let’s say you have efficiently retrieved several pertinent items. One now employs a variety of judgment skills in comparing the items and drawing maximum information from them, while also comparing one’s results, step by step, with multiple features of the background situation (such as how Iran and Russia will react, how the action will affect the decision by Israel to attack Iran’s nuclear facilities, etc.). Exquisite judgments are made even by people of average intelligence, but amazing combinations of logical skills are employed by the most intelligent persons, who quickly see and apply syllogisms, moving stepwise up a ladder of inference. Such excellence is daunting and hardly ever examined, but the ability to compare various options and to judge correctly is intrinsic to INTELLIGENCE. It is along such lines that we can begin to address the question of the improvement of human intelligence. That is, we must examine what we do when we think intelligently. (AI is too easily addressed via dimensions of simple SPEED and STORAGE CAPACITY.)
Eelco Hoogendoorn
Sep 7 2013 at 1:08pm
@Mark
Those three decades will require some paradigm shifts which aren’t on the horizon yet, as far as I know. Most of the gains in energy efficiency have been realized by downsizing, an area in which we are currently running into physical limits.
Fundamentally, toggling a switch by phosphorylating a small molecule is simply quite hard to beat in terms of energy efficiency. The whole operation involves only a single electron changing configuration; quite the leap from the torrent of electrons rushing through your transistor.
aretae
Sep 9 2013 at 5:17pm
The question of what high IQ is good for is an interesting one. And the answer is not thoroughly obvious to me. I think that the question of “what do you expect that an entity should be able to do with a better IQ, and why?” needs more attention.
I think that most of the high-IQ people I know substantially over-estimate the value of a high IQ.
Mark Bahner
Sep 9 2013 at 10:25pm
If machines become equal to human intelligence, it’s much easier to imagine a planet with a trillion machines than with a trillion human beings. For example, it is easy to imagine machines packed 100 times more densely than humans are in the average U.S. home.
Mark Bahner
Sep 9 2013 at 10:45pm
Those three decades will require some paradigm shifts which aren’t on the horizon yet, as far as I know. Most of the gains in energy efficiency have been realized by downsizing, an area in which we are currently running into physical limits.
One potential paradigm shift is reversible computing.
Reversible computing
Stephen R. Diamond
Sep 11 2013 at 6:31pm
It’s interesting that the OP’s critics are divided in making one or the other claim:
Camp 1:
OR
Camp 2:
I suggest that Camp 2’s claim that Bryan Caplan’s argument is trivial and even silly indicates, in light of the flourishing of Camp 1, that they haven’t taken the argument sufficiently seriously. Caplan’s argument refutes Camp 1, and Camp 1 (it seems to me) better states the reasons (or some important reasons) superintelligence is widely believed to be inevitable.