Zombies and Other Minds

1 July, 2007

This week we enjoyed a splendid Big Ideas session on consciousness, and particularly on the question of whether consciousness can be evoked by software. I have no background at all in the philosophy of mind, so this really got me thinking.

By instinct I’m a sort of physical reductionist about lots of things, including the mind. As far as I know, nothing important has ever been found that doesn’t have a plausible physical explanation. Neuroscience appears, at least to an outsider, to provide strong evidence of a physical origin for the mind, and the mental effects of brain damage are a striking example of that. In any case, dualism offends against parsimony: it’s a lot of ontological machinery to swallow for just one problem. In other words, I think mind-body dualism uses a sledgehammer to crack a nut.

That said, I’m not what I’d call a “discursive” reductionist — someone who thinks that ordinary talk of the mind as something separate from the body is harmful. In fact I suspect that talking about ideas or emotions in purely neurological language would be extremely unwieldy even if we were able to do it. So I don’t object to dualistic talk on the understanding that, if we needed to do so, all such talk could in theory be reworded in physical terms.

All that might suggest a “hard AI” position: since the physical processes that produce consciousness can be modeled in software, they can produce consciousness there, too. But as it stands this is a category mistake. We can very easily model the motion of projectiles using a computer, for example, but that doesn’t mean I can use it to fire a cannonball. All I have in software is a simulation of a projectile, not the real thing. Perhaps consciousness is like that, or perhaps not. Thinking sometimes does feel a bit like running a computer program, and software can do things that look intelligent, like learning to play chess well. We don’t know enough about consciousness to be sure.
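
To make the analogy concrete, here is the sort of thing I mean by modelling a projectile in software. It’s only a toy sketch in Python (the speed, angle and step size are invented for illustration), and all it ever produces is a list of coordinates; no cannonball comes out the other end.

```python
import math

def simulate_projectile(speed, angle_deg, dt=0.01, g=9.81):
    """Step a cannonball's flight under gravity; return (x, y) points until it lands."""
    angle = math.radians(angle_deg)
    vx, vy = speed * math.cos(angle), speed * math.sin(angle)
    x, y = 0.0, 0.0
    points = [(x, y)]
    while y >= 0.0:
        x += vx * dt
        vy -= g * dt      # gravity only affects the vertical velocity
        y += vy * dt
        points.append((x, y))
    return points

# The "cannonball" is just a list of numbers -- nothing physical is fired.
trajectory = simulate_projectile(speed=50.0, angle_deg=45.0)
print(f"landed after roughly {trajectory[-1][0]:.0f} metres in {len(trajectory)} steps")
```

However accurate that trajectory is, what I get back is a description of a projectile, which is exactly the gap the “hard AI” position has to close.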

One thing that makes this hard is that we’d need some way of knowing whether a machine we’d programmed was conscious or not. But I don’t even have a way of telling whether the person sitting opposite me is conscious; all I can do is infer their consciousness from the fact that they seem very like me in many other ways. In other words, I think we believe that other humans are conscious in part because they’re the same species as us, and we know we share most of our important macro-level features.

I think my cat might be conscious because it shares many — although by no means all — of my important features. I don’t think the animated Mickey Mouse is conscious, though. He behaves a lot more like I do than like my cat, but physically Mickey Mouse is nothing but many thousands (millions?) of drawings on pieces of paper. As such he’s not much like me physically; much less so than my cat. For that reason I disregard his seemingly conscious behaviour as an illusion. In this case, of course, we know it’s a deliberate illusion, which makes it easier to decide.

So where does that leave us with a conscious computer? As a very intricately shaped stone, a computer is physically extremely unlike me. It isn’t alive, it isn’t made of the same matter, and it didn’t come about in a similar way. I’d be inclined not to believe it was conscious. It’s at the Mickey Mouse end of the scale, not the end my cat and I occupy. I suppose this is probably a multi-dimensional scale — with dimensions like “species-similarity”, “behaviour-similarity”, “likelihood of deliberate deception” and so on — but I trust you get the point.
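
Just to illustrate what I mean by that scale (and this is a made-up toy, not a serious proposal), you could imagine scoring each dimension between 0 and 1 and taking a weighted average; every weight and score below is invented purely for the sake of the picture.

```python
# A made-up toy, not a serious proposal: score each dimension of "similarity to
# me" between 0 and 1, weight it, and take the weighted average.

WEIGHTS = {"species-similarity": 0.5,
           "behaviour-similarity": 0.3,
           "no deliberate deception": 0.2}

def similarity(scores):
    """Weighted average of the per-dimension scores (all weights and scores invented)."""
    return sum(scores[d] * w for d, w in WEIGHTS.items()) / sum(WEIGHTS.values())

candidates = {
    "my cat":       {"species-similarity": 0.8, "behaviour-similarity": 0.4, "no deliberate deception": 1.0},
    "Mickey Mouse": {"species-similarity": 0.0, "behaviour-similarity": 0.7, "no deliberate deception": 0.0},
    "a computer":   {"species-similarity": 0.0, "behaviour-similarity": 0.9, "no deliberate deception": 0.5},
}

for name, scores in candidates.items():
    print(f"{name}: {similarity(scores):.2f}")
```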

Given that, what would convince me it was conscious? Or, to ask a similar question, what would convince me that a zombie wasn’t conscious? If it looks like a duck and quacks, roughly, like a duck, I’m inclined to believe it is one. A zombie would have to behave egregiously non-consciously before I’d be inclined to believe it wasn’t conscious, and I think a machine would have to exhibit some kind of extra-conscious behaviour to convince me it was really conscious.

That’s a problem because, while I can imagine things being less conscious than I am, I can’t imagine anything more so. What would it do that would be different from the things I do? How would I know it was extra-conscious? So although I have no idea whether a machine could be conscious, I’m quite sure it couldn’t be convincingly conscious. I certainly don’t see what test we could possibly subject it to that would be anywhere near decisive, since the best it could do would be to exhibit as many behaviours indicative of consciousness as I do, and that’s not enough to bridge the plausibility gap that results from its physical differences from me.