Robin responds to my cryonic doubts as I expected: By changing the subject to hard science, where I grant that his knowledge vastly exceeds my own.
Alas, as in past arguments, he doesn’t answer my fundamental complaint: There’s nothing in the physics textbook, or any other hard science source Robin can name, that even tries to bridge the gap between a bunch of neurons firing and the indubitable facts that I feel pain, think that Tolstoy was a great novelist, or love my children.
His only response to these truisms – all of which are more certain than I could ever be about mere physics – is to insist that if I understood the physics textbook better, I would realize that it covertly discusses everything I can’t find in the index:
We have taken apart people like you, Bryan, and seen what they are made of. We don’t understand the detailed significance of all signals your brain cells send each other, but we are pretty sure that is all that is going on in your head. There is no mysterious other stuff there.
In short, “We’ve looked at your physical parts, failed to find pain, therefore pain is physical.” This simply begs the question. Robin can’t take seriously the logical truism that you can’t see pain through a microscope. Unless you personally experience it, it’s inference, not observation – hence the “problem of other minds.”
In any case, suppose we took neurology as seriously as Robin claims to. The right lesson to draw would be that thoughts and feelings require a biological brain. There’s no way you can dissect a brain, then infer that a computer program that simulated that brain would really be conscious. That makes about as much sense as dissecting water, then inferring that a computer program that simulated water could, like real water, be poured on a fire to extinguish it.
Robin’s conclusion suggests a bet that I’ll take if the terms are right:
Consider: if your “common sense” had been better trained via a hard science education, you’d be less likely to find this all “obviously” wrong.
If Robin’s right, then teaching me more hard science will reduce my confidence in common sense and dualist philosophy of mind. I dispute this. While I don’t know the details that Robin thinks I ought to know, I don’t think that learning more details would predictably change my mind. So here’s roughly the bet I would propose:
1. Robin tells me what to read.
2. I am honor-bound to report the effect on my confidence in my own position.
3. If my confidence goes down, I owe Robin the dollar value of the time he spent assembling my reading list.
4. If my confidence goes up, Robin owes me the dollar value of the time I spent reading the works on his list.
Since I’m a good Bayesian, Robin has a 50/50 chance of winning – though I’d be happy to make the stakes proportional to the magnitude of my probability revision.
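To make the proportional-stakes version concrete, here is a toy sketch of the payoff rule (the hours, rates, and confidence numbers are invented purely for illustration, not part of the actual offer):

```python
def settle_bet(confidence_before, confidence_after,
               robin_list_hours, bryan_reading_hours, hourly_rate):
    """Toy payoff rule for the proposed bet (terms 3 and 4),
    with stakes scaled by the size of my probability revision."""
    revision = confidence_after - confidence_before
    if revision < 0:   # the readings lowered my confidence: Robin wins
        return ("Bryan pays Robin",
                abs(revision) * robin_list_hours * hourly_rate)
    if revision > 0:   # the readings raised my confidence: I win
        return ("Robin pays Bryan",
                revision * bryan_reading_hours * hourly_rate)
    return ("No payment", 0.0)

# Example: my confidence in dualism slips from 0.95 to 0.90.
print(settle_bet(0.95, 0.90,
                 robin_list_hours=5, bryan_reading_hours=40,
                 hourly_rate=100))
# -> ('Bryan pays Robin', 25.0)
```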
With most people, admittedly, term #2 would require an unreasonably high level of trust. But I don’t think Robin can make that objection. We’re really good friends – so good, in fact, that he has seriously considered appointing me to enforce his cryonics contract! If he’s willing to trust me with his immortality, he should trust me to honestly report the effect of his readings on my beliefs.
I don’t think Robin will take my bet. Why not? Because ultimately he knows that our disagreement is about priors, not scientific literacy. Once he admits this, though, his own research implies that he should take seriously the fact that his position sounds ridiculous to lots of people – and drastically reduce his confidence in his own priors.
READER COMMENTS
Andy McKenzie
Nov 30 2009 at 1:07am
Bryan: It’s not physics. It’s computational neuroscience.
Fenn
Nov 30 2009 at 1:29am
“these truisms – all of which are more certain than I could ever be about mere physics”
maybe start off reading: The Man Who Mistook His Wife For A Hat: And Other Clinical Tales
li’l too cocksure
Grant
Nov 30 2009 at 3:24am
Isn’t this all just re-stating a very old argument in philosophy? While interesting, I don’t think either party is bringing anything new to the table. We still don’t have a physical explanation of consciousness.
rkos
Nov 30 2009 at 6:41am
I would suggest you delve into Daniel C. Dennett’s works:
http://www.amazon.com/Consciousness-Explained-Daniel-C-Dennett/dp/0316180653
http://www.amazon.com/Sweet-Dreams-Philosophical-Obstacles-Consciousness/dp/0262042258
Peter
Nov 30 2009 at 6:47am
If he accepts the bet, then either you automatically lose (according to Steven Landsburg) or you aren’t really a Bayesian.
Jason Malloy
Nov 30 2009 at 8:00am
Dr. Caplan, you aren’t thinking like a Bayesian or a scientist.
All things are conceivable. It’s conceivable, for example, that the Earth and the universe are 10,000 years old, and that an invisible red man with a pitchfork planted a bunch of fake dinosaur bones and toyed around with physics to trick us into believing that the universe is much older.
Another thing that is conceivable is that Dr. Caplan is the only “real” conscious person in the universe, and the rest of us are just clever wind-up toys with no private subjective states whatever.
But “conceivable” is a far cry from likely or logical.
All science is nested in the material framework, because it is the only useful and accurate framework for making predictions and inferences. There is a dense and proven network of scientific theory and practice that indicates your brain is a machine that produces your mind. And no serious alternative framework at all to suggest differently.
Dismissing the conscious experience of others who possess the same anatomy and behavior is completely analogous to claiming dinosaur bones are supernatural forgeries – an intentionally convoluted leap of faith, based on a much more complicated, unproven, and unspoken set of inferences.
I’m sure you could read every paleontology paper ever published and still cling to the Satanic Conspiracy position (since there are always gaps in understanding you can fill in with your own creative epicycles), but there would be no reasonable justification for your ideology other than intransigence.
RL
Nov 30 2009 at 8:38am
As one philosopher of mind, John Searle, put it, thinking a computer simulating the function of the brain is actually conscious is like thinking a computer simulating the function of a stomach is actually digesting food.
Joshua
Nov 30 2009 at 8:59am
I would suggest “I Am A Strange Loop” by Douglas Hofstadter. Not that it’s such a great book, but he adequately depicts the category error of thought that results in stubborn dualism.
Take a step down the ladder of complexity, and scale. If you look at the atoms in an apple, you’ll see mostly water, plus sugars and fibers. Nothing that means apple-ness. At a larger scale it’s an apple. That doesn’t mean we failed to find a dual nature of appleness at a smaller scale. There’s nothing dual about the fact that the object is both an apple and a collection of ever-changing atoms. It’s just a poor, though persistent, way of thinking about the principles of organization.
As for the challenge, can Bryan think of no examples where stubborn common sense persists yet is definitely wrong? How about Yudhishthira’s riddle: what is the greatest wonder of life? That every man must one day die, yet every man lives as if he were immortal.
Robin Hanson
Nov 30 2009 at 9:06am
Well, I see problems with your proposed bet; I don’t think this version of it will work.
Daublin
Nov 30 2009 at 9:31am
Bryan, I don’t see where Robin is particularly appealing to hard science, and I’m not sure why you say he is.
Robin’s challenge to you is a philosophical one. Consider building a simulated being by replacing one part at a time from a human. First the arms and legs, then the major organs, then the neurons one by one. You claim that the simulated being obviously has no feelings. At what point in this process does that happen? Is it when the first artificial neuron goes in? The last? Or is feeling fractional, with the intermediate beings feeling in a halfway sort of way? So far you are just claiming that the final result has no feelings. Robin asks you to consider the intermediate states as well.
Personally, I believe we are leaping too quickly to the philosophical. For all *practical* purposes, there are going to be artificial beings that are just as human as anyone who wears a prosthetic. This practical issue deserves practical responses. It’s hard to make predictions about a wildly different society, but one thing seems clear from history. If we designate beings as non-human even though for all practical purposes they are, then there will be great misery and bloodshed. Perhaps we would do better if we made our philosophy serve us rather than the other way around.
guthrie
Nov 30 2009 at 11:53am
I think Grant has it right. We don’t know exactly what ‘consciousness’ is, and until (or unless) hard science and cryonics can come up with a falsifiable definition, the rest of us hoi polloi will continue to be skeptical of cryonics’ claims. I think Bryan’s just calling Robin (et al.) on the seeming nonchalance toward this concern. It may not be based in ‘hard science’ but that doesn’t mean it’s a concern that can be ignored.
Isn’t it reasonable to assume that the push to study cryonics in the first place is to mitigate the basic fear of death we all possess? If this is the goal, then isn’t it reasonable for those of us who wish to benefit from this research (i.e., potential customers) to demand that such studies clearly address our concerns over ‘mind’ and ‘personhood,’ regardless of how ‘scientific’ they are? Otherwise you’re selling a product that very few will want.
Silas Barta
Nov 30 2009 at 11:54am
Robin Hanson is overestimating what it would take to change Bryan Caplan’s mind, I think.
This piece alone should demonstrate why hard science can overturn a “solid” philosophical proof, and the link Hanson gave to Eliezer Yudkowsky’s identity series gives an *excellent* summary of the argument (as to why identity can’t be attached to specific particles), so Caplan will be able to go directly to the part he finds most implausible.
Curt
Nov 30 2009 at 12:28pm
It seems to me that the key lines of Bryan’s post are here: “There’s nothing in the physics textbook, or any other hard science source Robin can name, that even tries to bridge the gap between a bunch of neurons firing and the indubitable facts that I feel pain, think that Tolstoy was a great novelist, or love my children.”
One can grant that consciousness (feeling) is a purely physical process hosted (mainly) in the brain, but we don’t feel neurons firing – we feel pain or love. Brains operate in a body with a nervous system, with sense organs feeding in, with a complicated chemical system influenced by what we ingest, etc. Brains don’t operate in isolation.
Given that we can’t seem to prove that anyone other than ourselves is actually a feeling being (though it does seem preposterous to deny it), it’s hard yet to see how we will prove that a computerized simulation is ‘feeling’. And if it is feeling, then it is likely feeling something different than what we feel, as its inputs are surely different.
Ryan Vann
Nov 30 2009 at 12:34pm
In college, I had a friend who, when seemingly on the losing side of an argument, would invariably make these pot-induced, “my blue isn’t your blue” type of arguments.
They were never very convincing, but always good for a laugh.
Rolf Andreassen
Nov 30 2009 at 2:20pm
You do realise that a dualist theory of mind requires you to assert that energy is not conserved? Either nothing goes on in a brain that cannot be explained by plain physics, or else at some point something nonphysical makes an electron swerve, thereby violating observable conservation of momentum and energy. The reading list should therefore be very short: It consists of Newton’s laws of motion. Plus perhaps the experiments that show their validity on the subatomic scale.
Matthew C.
Nov 30 2009 at 6:54pm
“There is a dense and proven network of scientific theory and practice that indicates your brain is a machine that produces your mind.”
Actually the physicalist explanation of consciousness is sadly lacking, hence all the discussion about “the hard problem”. Not to mention all the experimental and field data that contradicts physicalism.
“And no serious alternative framework at all to suggest differently.”
Shades of the “consensus on global warming”, eh?
Actually the transmissive theory of consciousness explains the commonly accepted facts of neuroscience just as well as the productive theory of consciousness. Basically, materialism asserts that the brain is an iPod for the mind, while the transmissive theory holds that the brain is more like a radio that receives and interacts with mind (or perhaps Mind).
Christopher Rasch
Nov 30 2009 at 7:34pm
Bryan, do you think an artificial neuron can work as well as a biological neuron?
Also, what would it take for you to sign up to be cryopreserved? After all, your alternatives are the ashes of the pyre or the worms of the grave, neither of which offer any hope of revival, whatever your theory of identity.
Michael Keenan
Nov 30 2009 at 8:34pm
Daublin wrote:
“Consider [replacing biological] neurons one by one [with artificial neurons that give the same outputs to the same inputs as their predecessors]. You claim that the simulated being obviously has no feelings. At what point in this process does that happen? Is it when the first artificial neuron goes in? The last? Or is feeling fractional, with the intermediate beings feeling in a halfway sort of way? So far you are just claiming that the final result has no feelings. Robin asks you to consider the intermediate states as well.”
Please answer this.
Craig
Dec 1 2009 at 1:38pm
Daublin, Michael: consider replacing biological neurons with plastic. It doesn’t replace the neuron functionally; it just fills the space.
The first replacement is not going to have any measurable effect on brain function, right? In fact you can replace _lots_ of neurons without creating any significant impairment, and it’s vanishingly unlikely that replacing one more randomly selected neuron will have a noticeable effect at any point in the process.
And yet, if you take the process to completion, the brain will be gone, and the person will be dead. (And of course you’ll get brain damage before then.) At some point these tiny unnoticeable changes added up to a qualitative shift. So is it really unimaginable that there is a qualitative shift in the other case, even if you’re having trouble seeing it?
jb
Dec 1 2009 at 4:25pm
If I were going to write a program that managed a robot, and one of the directives was that the robot should be self-preserving, I would do the following things:
1) I would cover the robot’s body in sensors for motion, touch, heat, etc
2) I would set up the sensors so that if they experienced various “levels” of sensory input, they would report back to the central processor(s) so the robot could, for example, avoid stepping on a hot area, or a sharp piece of glass, etc. Now, it’s possible that there would be more input than the CPUs could handle, and I would simply throw away whatever data was least important.
3) I would make it so these reports would escalate in priority if the sensory input was at sufficiently high levels – because I would want to be sure that the central processor got this very important data.
And so if the robot stepped on, say, a lava flow, immediately sensors would start shooting out tons of messages from the foot region, indicating that the temperature readings were far too high for tolerance. The priority level on these messages would be so high that the CPU would basically receive nothing but “pain alerts”, making it impossible for the robot to do much of anything except attempt to identify ways to reduce the number of pain alerts.
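In rough code, the escalation scheme described above might look something like this (a toy sketch only – every name, threshold, and number here is invented for illustration):

```python
import heapq
from dataclasses import dataclass, field

# Invented thresholds and capacity -- purely illustrative numbers.
WARN_LEVEL = 0.6    # sensor reports to the central processor (step 2)
PAIN_LEVEL = 0.9    # sensor escalates to top priority (step 3)
CPU_CAPACITY = 4    # messages the CPU can handle per tick; the rest are dropped

@dataclass(order=True)
class Message:
    priority: int                         # lower number = more urgent
    sensor: str = field(compare=False)
    reading: float = field(compare=False)

def collect_messages(readings):
    """Turn raw sensor readings into a priority queue of messages."""
    queue = []
    for sensor, level in readings.items():
        if level >= PAIN_LEVEL:
            heapq.heappush(queue, Message(0, sensor, level))  # "pain alert"
        elif level >= WARN_LEVEL:
            heapq.heappush(queue, Message(1, sensor, level))  # routine report
    return queue

def process_tick(readings):
    """The CPU services the most urgent messages first; excess data is discarded."""
    queue = collect_messages(readings)
    handled = []
    for _ in range(min(CPU_CAPACITY, len(queue))):
        handled.append(heapq.heappop(queue))
    return handled  # whatever is still in `queue` was thrown away

# The lava scenario: foot sensors flood the CPU with top-priority pain alerts,
# crowding out almost everything else.
foot_on_lava = {"foot_heat": 0.99, "foot_touch": 0.95, "ankle_heat": 0.93,
                "knee_heat": 0.7, "camera_motion": 0.62}
for msg in process_tick(foot_on_lava):
    print(msg.priority, msg.sensor, msg.reading)
```

The point of the sketch is just that once the foot sensors cross the “pain” threshold, their messages crowd out everything else the CPU might attend to.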
When humans are conceived, they start from strands of DNA, which are not thinking. The DNA is essentially a self-modifying computer program that has an extremely sophisticated, very precise set of manufacturing rules. DNA has to replicate following a certain sequence, or it doesn’t work. And if you introduce foreign chemicals into the mix, sometimes the parts fail to run – which is why Thalidomide babies had no arms.
If you accept that the replication of DNA to create humans is a mechanical process, then the result of that process – a born human – is an organic ‘robot’ with a powerful central processor, a body-wide web of sensors, and a strong desire for self-preservation.
Jess
Dec 1 2009 at 9:58pm
There is very little science going on here. Not that I’m against philosophy, but modern audiences will remain unconvinced without experiments and working prototypes. (At least those who’ve thought a bit about this whole AGW fiasco will remain so!) When I can chat with a technologically realized duplication of a human mind, THEN I’ll start worrying about whether said duplication REALLY “is” the flesh-and-bone human subject of the process. The cart is miles ahead of the horse here.
I’m sympathetic to the cryonic true believers, since like them my background is in math and CS, I accept the Church-Turing Thesis, and I suspect that all vital processes of consciousness are computations in some sense. Unlike many of them, I’m not willing to assume away numerous questions about mind, experience, memory, emotion, identity, etc. No hocus-pocus please. If Robin’s “please devote 8+ years of life as an impressionable youngster to studying exactly the books I assign” isn’t a risible appeal to authority, what is?
There are other objections to the whole scheme. Say that 200 years from now it will be possible to instantiate a mind in hardware, and by some miracle no one has tripped over the freezer cord that whole time. Who will go through the trouble of reviving Ted Williams, Robin Hanson, and the rest? It’s not like these “people” will have any marketable skills.
I’d prefer to face eternity with a smidgen of dignity.
Patri Friedman
Dec 4 2009 at 12:59am
Robin is right. I would state it more strongly: any dualistic philosophy is equivalent to postulating a metaphysical soul, which is equivalent to believing in magic, fairies, witches, God, and other supernatural beings. If you believe in God, then sure, feel free to believe that biological brains are special.
But for an atheist to believe that there is any “secret sauce” which couldn’t be replicated by scanning and simulation is just not logically coherent. It’s about as likely as increasing the minimum wage causing an increase in employment. Maybe even less likely – I have an easier time making up contorted scenarios for minimum-wage increases to increase employment than for a brain simulation to not be every bit as real as a biological brain.