In The Age of Em, Robin Hanson tries to sidestep philosophical questions, especially “Would your Artificial Intelligences (AIs) actually be conscious?”  But if you seek to evaluate the world of the future, everything revolves around this infamous philosophical challenge.  Suppose AIs perfectly simulate thoughts and feelings even though they experience neither.  If so, claims about their “welfare” are nonsense.  Giving up human welfare to “make AIs better off” is nonsense on stilts.  And valuing a world with trillions of AIs and zero humans over the status quo could well be the greatest normative error ever made.

Faced with this hypothetical, most AI optimists reply, “Sure, if AIs were dead inside, you’d be right.  But that’s a big if.  Why should we even take the idea seriously, much less believe it?”  Rather than rehash the p-zombie literature, let’s start from first principles.

1. No one literally observes anyone’s thoughts and feelings but his own.  I can see my own happiness via introspection – even with my eyes closed.  When I look at other people, I can see smiling faces, but never actually see their happiness.

2. This gives rise to the classic Problem of Other Minds.  How, if at all, can you justify your belief that anything besides yourself has thoughts and feelings?  Since you never observe others’ thoughts and feelings directly, you can only know them by inference.

3. This is no trivial inference, because the world is full of things that suggest thoughts and feelings even though virtually everyone is virtually sure they don’t have thoughts and feelings.  Take a diary.  It claims to have thoughts.  It claims to have feelings.  But that’s not enough to convince us it has thoughts and feelings.  In fact, there’s nothing a book could say that would convince us it’s conscious.

4. The same goes for a long list of things: movies, TV shows, tape recordings, audio files, runes carved in stone.  Whatever they show, whatever they say, you’ll interpret it as mere appearance of consciousness, not the real deal.

5. The same applies to things that provide contingent output: Choose Your Own Adventure novels, single-player videogames, Ouija boards, thermostats.  When you increase the difficulty level of a chess program, you don’t wonder whether it will start to think thoughts and feel emotions.

6. Many of the things we’re virtually sure aren’t conscious express more complex thoughts and feelings than the typical human.  Like War and Peace.  Many of the things we’re virtually sure are conscious express far simpler thoughts and feelings than the typical human.  Like a mouse.

7. The vast majority of things that never express thoughts and feelings don’t seem conscious to us.  Even things that are visibly human, like people in long-term comas.  If we learned, however, that a monk had once taken a vow of silence, his subsequent failure to speak would not make us doubt his continued consciousness.  Similarly, if he publicly took a vow of expressionlessness, his subsequent lack of facial expressions would barely change our minds about his inner states.

8. At this point, there are two ways to swiftly and permanently end contemplation of the Problem of Other Minds.  The first is solipsism – to say either “I have no idea if anything other than myself is conscious” or even “I alone am conscious.”  The second is eliminative materialism – to say either “I have no idea if anything, including myself, is conscious” or even “Nothing is conscious.”  Both views are so absurd there’s little point arguing with convinced adherents.

9. Any sensible position on the Problem of Other Minds, then, must combine (a) a clear affirmation that you are definitely conscious with (b) some way of inferring that other things are conscious, even though (c) many unconscious things closely mimic conscious things.

Most of the AI enthusiasts I’ve encountered think the Problem of Other Minds is simple: If it quacks like a duck, it’s a duck; if a machine acts like it has thoughts and feelings, it has thoughts and feelings.  But how can this simple solution be right when the world is already full of duck calls?

In the near future, I’ll offer my solution to the Problem of Other Minds – a solution that strongly suggests AIs are no more conscious than Choose Your Own Adventure novels.  Maybe I’m wrong.  But I’m not wrong to claim that AI fans’ impatient responses to the Problem of Other Minds need a lot more curiosity and a lot more work.