I had an enlightening conversation with Robin Hanson today where we came to near-complete agreement. Unfortunately, he decided to summarize our shared conclusion in his own idiosyncratic language:
The tiny fraction of future humans who are not robots might well manage to keep a high average living standard. But most creatures recognizably descended from us will have near subsistence consumption.
Here’s my Hanson-English translation:
In the future, humans will continue to have consumption well above subsistence levels. But intelligent machines will vastly outnumber humans, and these machines will only earn their subsistence.
Robin’s more specific admission: Wages for humans and robots alike will fall to subsistence levels because intelligent machines will be low-cost near-perfect substitutes for humans. However, human consumption will be much higher, because biological humans will own much more than their own labor – including land, capital, and vast numbers of robots. Robots, in contrast, will have to get by on their wages.
In slogan form: Simon for people, Malthus for robots.
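The accounting behind the slogan can be sketched numerically. This is a minimal illustration, not Robin's model or mine; every figure in it (the subsistence cost, rents, returns, robot counts) is a hypothetical placeholder:

```python
# Illustrative sketch (all numbers hypothetical): if robots are low-cost
# near-perfect substitutes for human labor, competition drives wages for
# humans and robots alike down to the robots' subsistence cost. Humans
# stay well-off anyway because they also collect non-labor income.

ROBOT_SUBSISTENCE = 1.0  # hypothetical cost to keep one robot running (the wage floor)

def human_income(wage, land_rent, capital_return, robots_owned, robot_output):
    """Total human income = own wage + land rent + capital return + robot profits.

    Each owned robot produces `robot_output` but must be paid/maintained at
    the subsistence wage, so its owner keeps the difference.
    """
    robot_profit = robots_owned * (robot_output - ROBOT_SUBSISTENCE)
    return wage + land_rent + capital_return + robot_profit

# A human's wage is competed down to the robot subsistence level...
wage = ROBOT_SUBSISTENCE
# ...but ownership income keeps total human consumption far above subsistence.
income = human_income(wage, land_rent=5.0, capital_return=10.0,
                      robots_owned=100, robot_output=1.2)
# A robot, owning nothing but its labor, consumes only its subsistence wage.
robot_income = ROBOT_SUBSISTENCE
```

With these placeholder numbers the human's income is 36 units against the robot's 1: Simon for people, Malthus for robots.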
Of course, if you don’t think that robots will have any mental states, they won’t “consume wages” any more than my toaster can “eat bread.” In which case, mankind can look forward to an extremely bright future provided by ultra-productive robots who – like my toaster – feel no envy for us because they don’t feel anything.
READER COMMENTS
Evan
May 19 2011 at 12:17am
Even if robots do have mental states, the cost of virtual reality would likely be so low for them that they could easily live a very happy life at near-subsistence level. They work half the day, spend the other half being Emperor of the Galaxy.
JoeMac
May 19 2011 at 1:36am
How do you know the robots won’t become so intelligent that they rebel and attempt to exterminate the human race?
You’re making a lot of assumptions here.
rapscallion
May 19 2011 at 1:40am
This assumes that artificial womb technology won’t allow exponential increases in fertility rates. If it does, Malthusian forces will drag down the living standard of flesh-and-blood humans, too. Why might people wish to massively increase their fertility? It doesn’t really matter; as long as it’s possible, it’s only a matter of time until someone does it and, either through genetics or memetics, passes along the inclination to their progeny. In the long run, both our human and machine descendants will be poor.
Chris
May 19 2011 at 1:42am
I like the part where he arrives at the statement “Caplan is right that fertility has a net positive externality” – I think that those who have trouble finding agreement on that point would do well to hire some very bright and intelligent machine-run time travel agency to transport them to before they were born and convince their parents to opt for abortions.
Gian
May 19 2011 at 1:47am
Is there any reason to think that robots will be our descendants?
I thought one could not predict beyond the Singularity. That was the whole meaning of it.
Doug
May 19 2011 at 2:54am
“Of course, if you don’t think that robots will have any mental states, they won’t “consume wages” any more than my toaster can “eat bread.” In which case, mankind can look forward to an extremely bright future provided by ultra-productive robots who – like my toaster – feel no envy for us because they don’t feel anything.”
What you describe as a paradise Robin would describe as a hell. If a billion quadrillion sentient minds can exist, even as slaves, isn’t that better than a billion quadrillion mindless robots?
CapricaSixxxx
May 19 2011 at 3:07am
Before anyone else comments…Take a step back and think about this conversation…Think about what is actually being discussed here…
Please tell me there are other readers out there who wonder whether Robin and our eminent host watched a little too much Battlestar Galactica?
(Sure it was a good series, but most of us were just there to see Helfer… not refine our economic prognostications.)
I sometimes recommend this blog to friends and associates. I don’t want to be laughed at for doing so. Please stick to economics, policy, liberty etc. Discuss Star Wars convention stuff somewhere else.
Mike
May 19 2011 at 3:40am
What you’re ignoring is that in Hanson’s future the machines are probably simulations of human minds which would be internally indistinguishable from us.
Hafiz
May 19 2011 at 8:04am
“Simon for people, Malthus for robots.”
Remember Battlestar Galactica!
Evan
May 19 2011 at 8:24am
As Mike pointed out, in Robin’s vision, robots will essentially be human: they will be simulations of the human mind. If we do decide to build an AI from scratch, the rational thing to do would be to program it to have no ambition. The desire to rebel is not an intrinsic property of all minds; it is an evolved human trait.
Good idea, how about we also not discuss other sci-fi stuff, like the Internet (which science fiction first wrote about in 1946). Or atomic power and weapons. Analysis of the economic impact of foreseeable future technologies is both valid and useful. Now, if Bryan and Robin started talking about hyperspace and artificial gravity, I’d get a bit worried, since neither of those have much scientific basis. But some sort of AI in the future is pretty much a sure thing.
blink
May 19 2011 at 8:29am
Your points here rely far too heavily on distinguishing people from machines. I read Robin as arguing that we should include ems in our definition of persons — otherwise, why worry about their well-being, whether they will be slaves, etc. Your logic surely would have appealed to Egyptian pharaohs and medieval kings to assuage their concern for those slaves and serfs (non-people?) building their pyramids and growing their food.
ed
May 19 2011 at 9:10am
a little typo I guess:
idioCyncratic…I believe you mean idioSyncratic.
Khoth
May 19 2011 at 9:12am
There’s something important being glossed over in this post. Wages will fall to robot subsistence levels, which will, in all probability, be less than human subsistence levels. Although that’s great for the humans who have a lot of “land, capital, and vast numbers of robots”, for the vast majority of humans who don’t have and most likely can’t get these things, the future looks somewhat less rosy.
The transition from a large number of humans to a small number of very rich humans and a large number of robots may be, umm, politically difficult.
Daublin
May 19 2011 at 9:57am
You discarded Robin’s main assumption, so it’s no wonder you are talking past him. The scenario he is discussing is specifically brain emulation, as opposed to other sorts of artificial intelligence. By assumption, a brain emulation is going to have human-like mental states.
B.B.
May 19 2011 at 10:37am
I, for one, will welcome our new robot overlords.
I don’t know what a rising standard of living is for a robot. What is the difference between a rich and poor robot? Do the rich robots get their metal shined every day? Do robots thirst for larger hard-drives and higher voltage? Do robots want to own other robots?
I know you guys like sci-fi. But don’t we have enough problems here and now without needing to speculate about the economies of the 25th century?
Ricardo worried about machinery, not robots. His view was that the unskilled working class could be replaced with machines. (Samuelson revisited his model; don’t dismiss it.) With a Malthusian equilibrium, worker wages stay at subsistence but the number of unskilled workers must shrink from lack of demand. In the short run, wages would be below subsistence as machines replace men.
Roger
May 19 2011 at 11:36am
There are so many ways I can think to attack Hanson’s argument, I don’t know where to start. Just read the discussion under his own blog, as his readers have already thrown out a half dozen challenges.
The most important critique in my opinion deals with the inherent non-rivalrous nature of virtual reality. Writing a good joke or great book or creating a virtual reality simulation is something every sentient being ever born from that point on could enjoy. The cost goes to zero. What is GDP increase in a world where we can gain more benefit annually at less cost than the year before? Economics as we know it gets turned on its head.
Even the atoms used to represent this virtual reality can be non-rivalrous. The reason is that the same atoms can be used or configured in a virtually endless pattern of combinations. Hanson’s EMs don’t have to use rivalrous atoms. They can all share the same atoms in novel patterns.
My slogan: Virtual heaven costs the same as virtual hell.
And there is no limit to the numbers of ways we can experience heaven.
Nick Bradley
May 19 2011 at 11:53am
If we have intelligent machines but we don’t have “STRONG AI”, robots will consume at subsistence levels and people will consume all of the surplus.
However, who owns the robots? Won’t we see insane income inequality?
Tracy W
May 19 2011 at 12:00pm
CapricaSixxxx – didn’t you get the memo? Geeks are cool now. This post is basically Bryan and Robin trying to up their street cred.
Finch
May 19 2011 at 12:13pm
> Even the atoms used to represent this virtual
> reality can be non-rivalrous. The reason is that
> the same atoms can be used or configured in a
> virtually endless pattern of combinations.
> Hanson’s EMs don’t have to use rivalrous atoms.
> They can all share the same atoms in novel
> patterns.
Hanson’s key assumptions are sort of called out here.
He assumes (1) finite density and (2) economic output measured per unit of time. Without those assumptions, his argument breaks down badly. Assumption (1) represents pure guesswork about future physics, and seems likely to be wrong; assumption (2) is just wacky in a world where almost every intelligent being is a computer program.
I agree with the many previous posters stating that Bryan’s rather callous attitude towards the welfare of beings just a little bit different from him is somewhat surprising. I also think the idea that capital endowment will save the meat-people from the tyranny of the bit-people is optimistic at best.
My bet, and it’s not uncommon in AI circles, is that disembodied intelligence is unworkable. It won’t be ems versus people, or classic AIs versus people. People and AIs will merge.
Gepap
May 19 2011 at 2:48pm
The question that matters is who owns the robots and thus their production (this assumes a legal regime in which productive machines are property and have no legal status otherwise).
After all, mass robotization would lead to a huge drop in the need for human labor. If every person has ownership of some productive capabilities, this isn’t an issue, but if only a small group does, what market power do the non-owners have? They would have minimal productive ability to trade for what they would need.
Vladimir
May 19 2011 at 3:12pm
Bryan Caplan:
Here you are silently ignoring the basic economics of land rent, which will rise so high in this scenario that “exorbitant” would be a euphemism. Yes, if you own enough land to eke enough food and living space out of it, you’re set. If not, good luck paying the rent for it. (Even if food can be synthesized with negligible land use, humans still need an awful lot of it for lodging.)
Nick Rowe wrote a very good article on this topic recently, which I would strongly recommend you to read:
http://worthwhile.typepad.com/worthwhile_canadian_initi/2011/01/robots-slaves-horses-and-malthus.html
Lori
May 19 2011 at 8:41pm
What Gepap said. And what Marshall Brain said:
So far, the return on investment in automation accrues a lot more to capital than to labor, so most of us get nothing from this “revolution” other than an invincible competitor.
CIP
May 19 2011 at 8:54pm
If, as Hanson suggests, the robots are created in our mental image, our fate is clear. With their brains that operate at gigahertz rates compared to our puny kilohertz brains, and their in-our-image bloodthirstiness, we will be annihilated in a flash – say, 10^6 times as fast as the tens of thousands of years we took to sweep the planet of Neanderthals.
Mark Bahner
May 23 2011 at 11:34pm
The Terminator movies had some good quotes showing how robots who don’t feel anything might not necessarily be good for humanity, because their reasoning is so alien.
In Terminator 2, Ahnold promises kid Connor not to kill anyone, and then promptly shoots a security guard in the leg. His reasoning: “He’ll live.”
In Terminator 3, as 20-something Connor and his doomsday date share a laugh, Ahnold observes, “Your levity is good, it relieves tension and the fear of death.”
So while machines might not feel envy, they also might not feel guilt or remorse about consuming mass quantities of human life.
Mark Bahner
May 23 2011 at 11:46pm
Oh, artificial intelligence will quite likely have more to do with economics than any technology in history:
http://markbahner.typepad.com/random_thoughts/2004/10/3rd_thoughts_on.html
P.S. Of course, many people have laughed at my predictions above. To them, I say what Greg House would say to them: “Idiots.”
P.P.S. The predictions might be wrong, but they’re not laughable.