Monday I debated Robin Hanson for the Soho Forum on the following resolution:
“Robots will eventually dominate the world and eliminate human abilities to earn wages.”
Video will be available eventually, but you can enjoy my slides (in pdf format) now.
The main surprise for me: To my eyes, Robin initially (and uncharacteristically) ran away from his thesis by embracing a very weak sense of the word “dominate.” Here are Merriam-Webster’s definitions:
1. to have control of or power over (someone or something)
2. to be the most important part of (something)
3. to be much more powerful or successful than others in a game, competition, etc.
Robin appealed to something like definition #2. When challenged, he bit two bullets. First, he said that tractors already “dominate” in agriculture. Second, he denied that Mark Zuckerberg “dominates” Facebook. This is especially odd because, at least in his Age of Em, robots dominate by all three definitions. Indeed, as he eventually admitted in the debate, Robin thinks there’s a 30% chance the ems exterminate mankind within a year of their creation, in line with my argument here. Now that’s domination in its most horrifying form.
Despite my stark disagreement with Robin, it was a delightful debate. One of my main debate maxims is, “Talk to your opponent like he’s your best friend.” This is especially easy when my opponent is my best friend! Right or wrong, Robin’s a genius and a joy.
P.S. Don’t miss the Chronicle of Higher Education‘s cover story on Robin!
READER COMMENTS
BZ
Oct 19 2016 at 5:00pm
I liked the point about breeding docility in animals, machines, and AI. Mostly because I’m an Asimov fan, and well, the 3 Laws were a good idea.
Robin Hanson
Oct 19 2016 at 5:05pm
“Robin thinks there’s a 30% chance the ems exterminate mankind within a year of their creation.” I intended to say at most, not at least. Not sure what I actually said.
Anonymous
Oct 19 2016 at 5:43pm
Much of this sounds more plausible if we consider the scenario where AI is developed by gradually accumulating more and better software, rather than the em scenario. I'm interested to see whether Robin agrees any more when that's the scenario in question – whether, for example, he thinks that hand-written AI is more likely to be docile, or to allow flesh humans to remain in control.
Mark Bahner
Oct 19 2016 at 6:14pm
Hi Bryan,
Thanks so much for posting your slides. I really look forward to seeing the video.
Here are the arguments I would have made against your slides (given time to think):
1) Your analogy is flawed, in that domestic animals are very different from computers. Domestic animals have physical limits that computers/robots don’t have.
For example, with computers, we can shrink transistors, we can build neuromorphic chips (rather than, or in addition to, chips that have von Neumann architecture), or we can even develop quantum computers…but we can’t use breeding to shrink brain cells, or change the architecture of the brain in such a way as to include von Neumann architecture, or somehow produce quantum effects that aren’t in animal brains.
We can also build robots with arms and hands that have opposable thumbs (or they can build themselves with arms and hands that have opposable thumbs!) but we can’t breed cows or dogs with arms and hands that have opposable thumbs (or at least not in a thousand years).
2) Therefore, when you disparage the resolution as being as unlikely to come to pass as your analogy, you’re not recognizing why your analogy isn’t applicable to computers/robots.
3) You say there are limits to how much computers/robots can improve, but where is your evidence for that? At the rate microprocessors are improving, by the mid-2020s, $1,000 worth of microprocessing power will be capable of approximately 20 quadrillion instructions per second…roughly the same number of instructions per second that a human brain can perform. And if the trend continues to the mid-2030s, $1,000 worth of computing power will be capable of as many instructions per second as 1,000 human brains. Or to put it another way, $1 of computer power will be able to perform as many calculations per second as a human brain by the mid-2030s.
[Chart: Instructions per second per $1000]
Do you think that trend is going to suddenly stop, such that $1000 of microprocessor will not be able to perform as many calculations as the human brain for many decades (or forever)?
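Here's a rough back-of-the-envelope sketch of that arithmetic in Python. The ~20 quadrillion instructions-per-second brain estimate and the "1 brain per $1,000 in the mid-2020s, 1,000 brains per $1,000 in the mid-2030s" figures are the ones above; treating "mid-2020s" and "mid-2030s" as exactly 2025 and 2035 is a simplification on my part.

```python
import math

# Assumed figures (from the comment above; exact years are my simplification)
BRAIN_IPS = 2e16                          # ~20 quadrillion instructions/sec per human brain
start_year, start_ips = 2025, BRAIN_IPS   # mid-2020s: ~1 brain equivalent per $1,000
end_year, end_ips = 2035, 1000 * BRAIN_IPS  # mid-2030s: ~1,000 brain equivalents per $1,000

# What doubling time for $1,000 of compute would that decade imply?
growth_factor = end_ips / start_ips                   # 1000x over the decade
doublings = math.log2(growth_factor)                  # ~10 doublings
doubling_time = (end_year - start_year) / doublings   # ~1 year per doubling

print(f"Growth factor over the decade: {growth_factor:.0f}x")
print(f"Implied doubling time: {doubling_time:.2f} years")
print(f"Brain equivalents per $1,000 in {end_year}: {end_ips / BRAIN_IPS:.0f}")
```

In other words, those figures amount to $1,000 of compute doubling roughly once a year over that decade.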
And with regard to robots (separate from computers)…could a cow ever be bred to do this?
Cows catching bottles?
Where do you see evidence that the rate of progress in computers or robots is decreasing?
Søren E
Oct 20 2016 at 7:02am
Note the word "eventually" in the resolution. It takes strong arguments to show that there exists a large class of tasks at which humans will never be substantially surpassed by computers/robots.
Currently, computer hardware is improving exponentially on most (all?) dimensions. Computer software is also improving rapidly. I think it is a reasonable assumption that this correlates with success in the workplace.
I agree with you that "No one has written software to do X" doesn't prove "Software to do X will never be written." By Clarke's first law, I think the "never" claim is very probably wrong.
Todd Kreider
Oct 20 2016 at 12:40pm
I remember Robin Hanson said in a Singularity 1 on 1 interview that he thought there was a 30% probability that the em world would appear within this century. I note this because every so often someone asks me how likely my 1989 version of a Singularity is to occur (that is, computers so powerful around 2040 to 2060 that a person from 1989 wouldn't recognize the world, with strong A.I. not necessarily a precondition). I'd say that I don't think I can put a probability on it, "but OK, 50%, and a much higher percent that those decades would look pretty strange to Mr. 1989 even if there is no Singularity."
I agree with Mark Bahner above and will wait for the video to see what Bryan Caplan meant by his slide that robots will never dominate anything. Robots/computers already dominate chess, Go, and Jeopardy and are improving every year. They are not allowed to compete in most tournaments because they are so good.
I'd say computers will dominate translation within a few years. My 1996 prediction was that for French, Spanish, and German into English, computers would be stunningly good using the internet and statistical machine translation, and that by 2009 translators would start losing their jobs. They did start to, but I'd say not by enough yet to make that prediction correct. Wages have dropped 20% since 2009, though. (2009 was the first year I read in the Translation Journal that a translator said he lost a $5,000 job after being told the company decided to go with MT.)
I added that Japanese and Korean into English would be the same but 5 years later, and in 2002 settled on "By 2015, translators will become editors, apart from literature translation." That is off, but in 2010 I said, "No translator (non-literature) gets past the year 2020, when a titanium wall of computer power/MT goes up."
Four years left…
It is common in J/E translation for translators to say "never," "not in hundreds of years," or, conservatively, "not before 2050." They just don't get exponentially increasing computer power.
Maurizio
Oct 28 2016 at 2:57am
Todd, if you don’t have true understanding of the meaning of the sentence, a good translation cannot be done. For example, take the sentence “if the baby does not drink the milk, boil it”. To translate it, the computer needs to understand, among other things, that “it” refers to the milk, not to the baby. For this you need to have knowledge of the world. And to translate a generic sentence, you need to have complete knowledge of the world, and the ability to use this knowledge to produce inferences (i.e. reasoning), so as to understand the true meaning of the sentence. Nothing like that is done by current systems, which use statistical translation. So you will see the quality of computer translations suddenly stop improving before they can be considered acceptable. So translator won’t lose their jobs, until computer translation totally changes technique. But this will not happen soon, it will take many decades, possibly centuries.