After discussing a ten-year-old paper on empirical philosophy of mind, Robin proposes a remarkable bet:
I’m also pretty sure that while the “robot” in the study was rated low
on experience, that was because it was rated low on capacities like for
pain, pleasure, rage, desire, and personality. Ems, being more
articulate and expressive than most humans, could quickly convince most
biological humans that they act very much like creatures with such
capacities. You might claim that humans will all insist on rating
anything not made of biochemicals as all very low on all such
capacities, but that is not what we see in the above survey, nor what we
see in how people react to fictional robot characters, such as from
Westworld or Battlestar Galactica. When such characters act very much
like creatures with these key capacities, they are seen as creatures
that we should avoid hurting. I offer to bet $10,000 at even odds that
this is what we will see in an extended survey like the one above that
includes such characters. (emphasis mine)
Since Robin repeatedly mentioned my criticism of his work in this post, I sense this bet is aimed at me. While I commend him on his willingness to bet such a large sum, I decline. Why?
1. First and foremost, I don’t put much stock in any one academic paper, especially on a weird topic. Indeed, if the topic is weird enough, I expect the self-selection of the researchers will be severe, so I’d put little stock in the totality of their results.
2. Robin's interpretation of the paper he discusses is unconvincing to me, so I don't see much connection between the bet he proposes and his views on how humans would treat ems. How so? Unfortunately, we have so little common ground here that I'd have to go through the post line by line just to get started.
3. Even if you could get people to say that “Ems are as human as you or me” on a survey, that’s probably a “far” answer that wouldn’t predict much about concrete behavior. Most people who verbally endorse vegetarianism don’t actually practice it. The same would hold for ems to an even stronger degree.
What would I bet on? I bet that no country on Earth with a current population over 10M will grant any AI the right to unilaterally quit its job. I also bet that the United States will not extend the 13th Amendment to AIs. (I'd make similar bets for other countries with analogous legal rules.) Over what time frame? In principle, I'd be happy betting over a century, but as a practical matter, there's no convenient way to implement that. So I suggest a bet where Robin pays me now, and I owe him if any of this comes to pass while we're both still alive. I'm happy to raise the odds to compensate for the relatively short time frame.
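To spell out the arithmetic of that structure (the figures below are illustrative only, not terms Robin has agreed to): if Robin pays me a stake $P$ today and I owe him a payout $Q$ if any of the above comes to pass while we're both alive, then, setting aside interest, the bet is fair exactly when his probability $p$ for the event satisfies

$$pQ - P = 0 \quad\Longleftrightarrow\quad p = \frac{P}{Q}.$$

Raising the odds $Q/P$ lowers that break-even probability, which is how I'd compensate him for the truncated time frame: at 10:1, for instance, a $1,000 stake from Robin against a $10,000 payout from me.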
READER COMMENTS
Joshua Fox
Jul 17 2017 at 11:38am
A related bet that Robin and I made.
Mark Bahner
Jul 17 2017 at 12:45pm
Excellent bet propositions. Another might be that no AI gets the right to vote. And, though the 13th Amendment proposition probably covers this, that no AI will be given a minimum wage. Or be permitted to run for office. And many, many more.
Matt C
Jul 17 2017 at 1:30pm
Ems are virtual, yes? I believe there is a significant difference between AI constructs that look human and those that merely act human. In both of Robin's examples, people's perceptions change when the AI is somehow exposed as non-human.
And, of course, his examples are both fictional. While the behavior of people in those shows towards AI is plausible, Bryan’s point #3 applies.
Tiago
Jul 17 2017 at 6:30pm
Though reasons 2 and 3 are legitimate motives for you not to want to bet, reason 1 is actually a reason for you to bet Robin Hanson. If you think this particular study is biased, then you seem to believe that it is actually wrong, which would be made clear after a careful extended survey.
entirelyuseless
Jul 17 2017 at 10:36pm
“Even if you could get people to say that “Ems are as human as you or me” on a survey, that’s probably a “far” answer that wouldn’t predict much about concrete behavior.”
I’m pretty sure you’re wrong about this. The opposite is more likely. Plenty of people will say, “Ems are just computer programs and should be treated as computer programs,” but a small amount of interaction with a program that truly acted like a human would cause a human to treat the program as a human.
Mark Bahner
Jul 18 2017 at 12:17pm
To the point of passing a law that said AIs could not be subjected to “involuntary servitude”?
P.S. Here is where I think Robin is not seeing the obvious: computer programs that act truly human (i.e., get distracted, need to sleep, get jealous, greedy, etc.) will never be more than an extremely rare phenomenon, because they simply wouldn't be desired by carbon-based humans. Instead, the computer programs that are desired will never get distracted, need to sleep, get jealous, greedy, etc. So super-intelligent AI will come to fruition (within a few decades), but it won't take the form of human brain emulations (ems).