Robin responds to my cryonic doubts as I expected: by changing the subject to hard science, where I grant that his knowledge vastly exceeds my own.

Alas, as in past arguments, he doesn’t answer my fundamental complaint: There’s nothing in the physics textbook, or any other hard science source Robin can name, that even tries to bridge the gap between a bunch of neurons firing and the indubitable facts that I feel pain, think that Tolstoy was a great novelist, or love my children. 

His only response to these truisms – all of which are more certain than I could ever be about mere physics – is to insist that if I understood the physics textbook better, I would realize that it covertly discusses everything I can’t find in the index:

We have taken apart people like you Bryan, and seen what they are made of. We don’t understand the detailed significance of all signals your brain cells send each other, but we are pretty sure that is all that is going on in your head. There is no mysterious other stuff there.

In short, “We’ve looked at your physical parts, failed to find pain, therefore pain is physical.” This simply begs the question. Robin can’t take seriously the logical truism that you can’t see pain through a microscope. Unless you personally experience it, it’s inference, not observation – hence the “problem of other minds.”

In any case, suppose we took neurology as seriously as Robin claims to. The right lesson to draw would be that thoughts and feelings require a biological brain. There’s no way you can dissect a brain, then infer that a computer program simulating that brain would really be conscious. That makes about as much sense as dissecting a human brain, then inferring that a computer simulation of it could, like the real organ, be poured over a fire to extinguish it.

Robin’s conclusion suggests a bet that I’ll take if the terms are right:

Consider: if your “common sense” had been better trained via a hard science education, you’d be less likely to find this all “obviously” wrong.

If Robin’s right, then teaching me more hard science will reduce my confidence in common sense and dualist philosophy of mind.  I dispute this.  While I don’t know the details that Robin thinks I ought to know, I don’t think that learning more details would predictably change my mind.  So here’s roughly the bet I would propose:

1. Robin tells me what to read.
2. I am honor-bound to report the effect on my confidence in my own position.
3. If my confidence goes down, I owe Robin the dollar value of the time he spent assembling my reading list.
4. If my confidence goes up, Robin owes me the dollar value of the time I spent reading the works on his list.

Since I’m a good Bayesian, Robin has a 50/50 chance of winning – though I’d be happy to make the stakes proportional to the magnitude of my probability revision.
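For readers who want the logic behind the “good Bayesian” remark, here is a minimal sketch of the standard iterated-expectations property of coherent beliefs, with H standing for my position and D for whatever I learn from Robin’s reading list (my notation, not anything from his post):

\mathbb{E}\big[P(H \mid D)\big] \;=\; \sum_{d} P(D = d)\, P(H \mid D = d) \;=\; P(H)

In words: before I read anything, my expected post-reading confidence equals my current confidence, so I cannot predict in advance which way the evidence will push me; that is roughly what makes stakes proportional to the size of my revision fair in expectation.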

With most people, admittedly, term #2 would require an unreasonably high level of trust.  But I don’t think Robin can make that objection.  We’re really good friends – so good, in fact, that he has seriously considered appointing me to enforce his cryonics contract!  If he’s willing to trust me with his immortality, he should trust me to honestly report the effect of his readings on my beliefs.

I don’t think Robin will take my bet.  Why not?  Because ultimately he knows that our disagreement is about priors, not scientific literacy.  Once he admits this, though, his own research implies that he should take seriously the fact that his position sounds ridiculous to lots of people – and drastically reduce his confidence in his own priors.