Archive for the 'Science' Category

Optical Illusions and Perception

15 July, 2007

Dale Purves has some amazing optical illusions that appear to illustrate the old epistemological gap between our perception of reality and reality itself:

Information in visual stimuli cannot be mapped unambiguously back onto real-world sources, a quandary referred to as the “inverse optics problem.” The same problem exists in all other sensory modalities

(from this page). This ties in neatly with my (as yet incomplete) discussion of Davidson’s rejection of “conceptual schemes”.
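
To see why that inverse mapping is ambiguous, here is a toy sketch of my own (nothing to do with Purves’s actual experiments): the light reaching the eye from a matte surface is roughly the illumination falling on it multiplied by the surface’s reflectance, so one measured luminance is compatible with indefinitely many illumination/reflectance pairs.

    # Toy illustration of the "inverse optics problem" (my own sketch, not Purves's model).
    # Luminance at the eye is (roughly) illumination * reflectance, so a single
    # measurement cannot be mapped unambiguously back onto its real-world causes.

    def luminance(illumination, reflectance):
        """Simplified forward model: light reaching the eye from a matte surface."""
        return illumination * reflectance

    observed = 0.18  # one measured luminance

    # Very different scenes produce exactly the same observation:
    candidate_scenes = [
        (0.2, 0.9),  # dim light on a bright surface
        (0.6, 0.3),  # moderate light on a mid-grey surface
        (1.8, 0.1),  # bright light on a dark surface
    ]

    for illum, refl in candidate_scenes:
        assert abs(luminance(illum, refl) - observed) < 1e-9

    # The forward problem (scene -> stimulus) is easy; the inverse problem
    # (stimulus -> scene) has no unique solution.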

The most natural response to optical illusions and general perceptual mistakenness is to posit an objective reality and a conceptual scheme — a sort of map or diagram — of it that we carry around in our heads. Our schemes can be inaccurate and idiosyncratic; in details, at least, mine might be quite different from yours. Yet if they’re too far out then we start having problems, because reality doesn’t behave the way we think it will. A startling example of this happens if you’ve ever played a computer game with the left/right controls swapped over. It takes a while to adapt, because your conceptual scheme predicts one thing, but the outcome of the resulting action is always the opposite.
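
A crude way to picture that mismatch (a made-up sketch, not anything from Purves or any particular game): treat the “scheme” as a mapping from key press to expected movement, and let the game apply the opposite mapping.

    # A made-up sketch of the swapped-controls situation.
    # The "conceptual scheme" is the player's mapping from key press to expected
    # movement; the game applies the opposite mapping, so every prediction fails
    # until the scheme is revised.

    scheme = {"left_key": "move left", "right_key": "move right"}  # what the player expects
    game   = {"left_key": "move right", "right_key": "move left"}  # what actually happens

    for key in ("left_key", "right_key"):
        predicted, actual = scheme[key], game[key]
        print(f"{key}: predicted '{predicted}', got '{actual}'"
              + (" -- surprise!" if predicted != actual else ""))

    # Adapting just means replacing the scheme with one that predicts correctly:
    scheme = dict(game)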

As a result, so the story goes, we adapt our conceptual schemes so that they get better at predicting how our actions will turn out:

Much to the advantage of the observer, percepts co-vary with the efficacy of past actions in response to visual stimuli, and thus only coincidentally with the measured properties of the stimulus or the underlying objects.

This is the “third dogma of empiricism” that Davidson rejects.

I’ll have more to say about this later; for now, just enjoy the illusions — some of them are genuinely startling if you haven’t seen that sort of thing before. If you need more convincing that your senses are fallible, try these:

Some old classics at Color Cube
Beautiful new illusions by Akiyoshi Kitaoka
Beautiful illusions by Akiyoshi Kitaoka
Interesting illusions by Michael Bach
Best Visual Illusion of the Year Contest

Zombies and Other Minds

1 July, 2007

This week we enjoyed a splendid Big Ideas session on consciousness, and particularly on the question of whether consciousness can be evoked by software. I have no background at all in the philosophy of mind, so this really got me thinking.

By instinct I’m a sort of physical reductionist about lots of things, including the mind. As far as I know, nothing important has ever been found that doesn’t have a plausible physical explanation. Neuroscience appears, at least to an outsider, to provide strong evidence of a physical origin for the mind, and the mental effects of brain damage are a striking example of that. In any case, dualism offends against parsimony: it’s a lot of ontological machinery to swallow for just one problem. In other words, I think mind-body dualism uses a sledgehammer to crack a nut.

That said, I’m not what I’d call a “discursive” reductionist — someone who thinks that ordinary talk of the mind as something separate from the body is harmful. In fact I suspect that talking about ideas or emotions in purely neurological language would be extremely unwieldy even if we were able to do it. So I don’t object to dualistic talk on the understanding that, if we needed to do so, all such talk could in theory be reworded in physical terms.

All that might suggest a “hard AI” position: since the physical processes that produce consciousness can be modelled in software, they can produce consciousness there, too. But as it stands this is a category mistake. We can very easily model the motion of projectiles using a computer, for example, but that doesn’t mean I can use it to fire a cannonball. All I have in software is a simulation of a projectile, not the real thing. Perhaps consciousness is like that, or perhaps not. Thinking sometimes does feel a bit like running a computer program, and software can do things that look intelligent, such as learning to play chess well. We don’t know enough about it to be sure.
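
For what it’s worth, here is roughly what such a model amounts to (a minimal sketch of textbook projectile kinematics, nothing more): a few lines of arithmetic stepping a virtual cannonball through time. Everything it produces is numbers; nothing ever gets fired.

    # A minimal projectile simulation (simple Euler integration, no air resistance).
    # It models the motion perfectly well, but all it ever produces is numbers.

    g = 9.81              # gravitational acceleration, m/s^2
    dt = 0.01             # time step, s
    x, y = 0.0, 0.0       # position, m
    vx, vy = 50.0, 50.0   # initial velocity, m/s (fired at 45 degrees)

    while y >= 0.0:
        x += vx * dt
        y += vy * dt
        vy -= g * dt

    print(f"Simulated cannonball lands about {x:.0f} m away.")
    # ...and yet no cannonball has been fired.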

One thing that makes this hard is that we’d need some way of knowing whether a machine we’d programmed was conscious or not. But I don’t even have a way of telling whether the person sitting opposite me is conscious; all I can do is infer their consciousness from the fact that they seem very like me in many other ways. In other words, I think we believe that other humans are conscious in part because they’re the same species as us, and we know we share most of our important macro-level features with them.

I think my cat might be conscious because it shares many — although by no means all — of my important features. I don’t think the animated Mickey Mouse is conscious, though. He behaves a lot more like I do than my cat does, but physically Mickey Mouse is nothing more than many thousands (millions?) of drawings on pieces of paper. As such he’s not much like me physically; much less so than my cat. For that reason I disregard his seemingly conscious behaviour as an illusion. In this case, of course, we know it’s a deliberate illusion, which makes it easier to decide.

So where does that leave us with a conscious computer? Considered as a very intricately shaped stone, a computer is physically extremely unlike me. It isn’t alive, or made of the same matter, nor did it come about in a similar way. I’d be inclined not to believe it was conscious. It’s at the Mickey Mouse end of the scale, not the end my cat and I occupy. I suppose this is probably a multi-dimensional scale — with dimensions like “species-similarity”, “behaviour-similarity”, “likelihood of deliberate deception” and so on — but I trust you get the point.
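
If you wanted to make that scale explicit it might look something like this (the dimensions, weights and scores are all invented, purely to show the shape of the idea):

    # An entirely made-up sketch of the "similarity to me" scale.
    # Dimensions, weights and scores are invented for illustration only.

    weights = {
        "physical_similarity":     0.4,
        "behavioural_similarity":  0.3,
        "shared_origin":           0.2,  # alive, evolved, made of the same stuff
        "no_deliberate_deception": 0.1,
    }

    candidates = {
        "my cat":       {"physical_similarity": 0.8, "behavioural_similarity": 0.5,
                         "shared_origin": 0.9, "no_deliberate_deception": 1.0},
        "Mickey Mouse": {"physical_similarity": 0.05, "behavioural_similarity": 0.8,
                         "shared_origin": 0.0, "no_deliberate_deception": 0.0},
        "a computer":   {"physical_similarity": 0.1, "behavioural_similarity": 0.4,
                         "shared_origin": 0.0, "no_deliberate_deception": 0.5},
    }

    for name, scores in candidates.items():
        total = sum(weights[d] * scores[d] for d in weights)
        print(f"{name}: {total:.2f}")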

Given that, what would convince me it was conscious? Or, which is a similar question, what would convince me that a zombie wasn’t conscious? If it looks like a duck and quacks, roughly, like a duck, I’m inclined to believe it is one. A zombie would have to behave egregiously non-consciously before I’d be inclined to believe it wasn’t conscious, and I think a machine would have to exhibit some kind of extra-conscious behaviour to convince us it was really conscious.

That’s a problem because, while I can imagine things being less conscious than I am, I can’t imagine anything more so. What would it do that would be different from the things I do? How would I know it was extra-conscious? So although I have no idea whether a machine could be conscious, I’m quite sure it couldn’t be convincingly conscious. I certainly don’t see what test we could possibly subject it to that would be anywhere near decisive, since the best it could do would be to exhibit as many behaviours indicative of consciousness as I do, and that’s not enough to bridge the plausibility gap that results from its physical differences from me.

Neurological Aesthetics of Music

8 June, 2007

Last Saturday’s Guardian newspaper included this article by Daniel J Levitin, which concerns the scientific (specifically, neurological) evidence for the hypothesis that everyone loves the Beatles. I’ve seen this sort of thing before. As someone who positively can’t stand the Beatles, I always find these things troublesome, as they seem to imply there’s something wrong with my poor brain, in addition to the things I already know are wrong with it.

This idea that brain-scans can confirm our intuitions about complex cultural phenomena is pretty widespread, especially in weekend newspapers. Levitin himself has an impressively sciencey list of publications, mostly not on music and some involving mice, which I’m in no way qualified to review. There’s a lot to disagree with in the Guardian article on factual or very basic argumentative grounds, but I’ll leave that aside and focus on the main thrust.

The question he sets out with is:

Will [The Beatles’] songs continue to inspire future generations? Or will their music die along with the generation intoxicated by their wit and charisma in the mind-expanding 60s? […] [W]ill they last in the way that Mozart and Beethoven have lasted?

Now, this is not, on the face of it, a question that can be answered scientifically. Even if we had a quantitative and objective measure of musical quality, it might be that The Beatles would score very poorly on it, yet they’d continue to be popular for centuries because people are idiots. On the other hand, there are plenty of composers and musicians who are thought by aficionados to be really superb, but whom our culture en masse has ignored, because people are idiots.

I don’t see why longevity or popularity should be highly correlated with musical “quality”, assuming the latter term has any meaning. One meaning you could give it is an economic one, and one hears from time to time a defence of the idea that the best art is the art that makes the most money, or attracts the most consumers. This, however, is not at all what Levitin is talking about.

His working assumption is this:

Great songs activate deep-rooted neural networks in our brains that encode the rules and syntax of our culture’s music. Through a lifetime of listening, we have learned what is essentially a complex calculation of statistical probabilities of what chord is likely to follow what, and how melodies are formed.
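
To make “statistical probabilities of what chord is likely to follow what” concrete, here is a toy first-order transition table (the figures are mine and purely illustrative, not anything from Levitin), with “surprise” just being the improbability of the chord that actually arrives:

    # A toy first-order model of chord expectations (invented probabilities,
    # purely illustrative). "Surprise" is how improbable the continuation was.
    import math

    # P(next chord | current chord) for a listener steeped in simple tonal music.
    transitions = {
        "G7": {"C": 0.8, "Am": 0.15, "Ab": 0.05},
    }

    def surprise(current, nxt):
        """Information content, in bits, of hearing `nxt` after `current`."""
        return -math.log2(transitions[current][nxt])

    print(f"G7 -> C  : {surprise('G7', 'C'):.2f} bits (the expected resolution)")
    print(f"G7 -> Am : {surprise('G7', 'Am'):.2f} bits (a mild deception)")
    print(f"G7 -> Ab : {surprise('G7', 'Ab'):.2f} bits (a real jolt)")

    # A deceptive cadence (G7 -> Am) works precisely because the listener has
    # internalised statistics like these.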

There is an old (and much-contested) school of musicology that finds aesthetic value in the setting-up and defeating of expectations. There’s more to it than that, obviously, but that’s the rough idea. I guess this is okay as a working hypothesis, but there are three things I object to about what Levitin does with it.

First, this is not the only available theory of musical value, and it doesn’t work for a lot of musical traditions. The danger of adopting an aesthetic criterion universally is that it forces you to conclude that music that doesn’t fit with it is junk. It’s also pretty easy to construct counter-examples of music that ought, according to the theory, to be really great, but is in fact awful.

Second, I very much doubt that Levitin has been able to validate this assumption with any rigour in the laboratory. If he hasn’t then it’s pretty irresponsible to write as if it were fact in a popular newspaper.

Third, and most important, it seems self-evident to me that neuroscience can tell us nothing about the relative value of different kinds of music. All it can do is tell you how an individual is responding to a piece of music and then treat many such observations statistically. If I like The Beatles (did I mention I don’t?) and you like Ravi Shankar, no brain scan in the world can tell Dr Levitin whether The Beatles are better than Ravi Shankar (they aren’t). I’m also a bit suspicious that the neuroscience is mainly there to beef up the dodgy aesthetic argument.

I just wanted to end with this:

The timelessness of Beatles melodies was brought home to me by Les Boréades, who have recorded three CDs of Beatles music arranged for and played on baroque instruments.

There’s no philosophical point to make here; it’s just that the very idea of this makes me feel queasy.