Since I’m contributing pretty regularly now to Big Ideas, I’d say this blog isn’t just sitting quietly in the corner; it’s probably expired. I’ll move the relevant postings across to Big I but am leaving this site up so that old links continue to work.
For various reasons (upcoming exams, day job, other things) this blog will probably go quiet, like a child who knows he’s doing something naughty, for the next month or so. We’re focussing some of our energies on organising more Big Ideas events, and I’ll continue to contribute philosophical tidbits to the blog there in the interim.
Dale Purves has some amazing optical illusions that appear to illustrate the old epistemological gap between our perception of reality and reality itself:
Information in visual stimuli cannot be mapped unambiguously back onto real-world sources, a quandary referred to as the “inverse optics problem.” The same problem exists in all other sensory modalities
(from this page). This ties in neatly with my (as yet incomplete) discussion of Davidson’s rejection of “conceptual schemes”.
The most natural response to optical illusions and general perceptual mistakenness is to posit an objective reality and a conceptual scheme — a sort of map or diagram — of it that we carry around in our heads. Our schemes can be inaccurate and idiosyncratic; in details, at least, mine might be quite different from yours. Yet if they’re too far-out then we start having problems, because reality doesn’t behave the way we think it will. A startling example of this happens if you’ve ever played a computer game with the left/right controls swapped over. It takes a while to adapt, because your conceptual scheme predicts one thing, but the outcome of the resulting action is always the opposite.
As a result, so the story goes, we adapt our conceptual schemes so that they get better at predicting how our actions will turn out:
Much to the advantage of the observer, percepts co-vary with the efficacy of past actions in response to visual stimuli, and thus only coincidentally with the measured properties of the stimulus or the underlying objects.
This is the “third dogma of empiricism” that Davidson rejects.
I’ll have more to say about this later; for now, just enjoy the illusions — some of them are genuinely startling if you haven’t seen that sort of thing before. If you need more convincing that your senses are fallible, try these:
This paper by Alex Voorhoeve (hat tip to the invaluable OPP) presents an interesting paradox and attempts to resolve it. The paradox concerns choosing between several different scenarios and ranking them in order by saying one is “better than” another. What makes it go is the fact that the choice has two “dimensions” that change in different ways.
Here’s the setup. A person X has a disease. Fortunately there are a number of different treatments available for it. Unfortunately, they all have negative side-effects. The outcome of each is the same: if treated, X will live for another 10 years, and if not then X will die after only 1 year. X is therefore highly motivated to accept one of the treatments.
We can list the treatments in order depending on how severe the side-effects are. T0 is agonizingly painful, but after a week it’s complete and X need not be subjected to any other treatments. T1 is still extremely painful, but slightly less severe than T0; the downside is that it lasts for two weeks. So it goes on; for any Tn, Tn+1 is a little less painful, but lasts a lot longer.
Most people think that, wherever you are in the chain, Tn will always be better than Tn+1. A small reduction in the unpleasantness of the side-effects just isn’t worth the trade-off of a much longer period of unpleasantness. This is called a “pairwise” comparison; although there are many treatments, we decide between them by comparing one option with just one other. If we do that, we might expect to conclude that X should bite the bullet and choose Tn every time.
Yet imagine that Tx is a treatment that takes many years, but has such a low level of associated discomfort that it’s really not a big problem. If presented with this option, most people would certainly prefer this over T0. The paradox is that the reasoning that has X prefer Tn over Tn+1 leads to a choice of T0 when we can all see that Tx is a better option.
But then if Tx is good, why not Tx-1, which is only slightly worse and lasts much less time? So by comparing pairwise we end up back at T0 again, and again we find ourselves making a poor choice.
I think this situation is really a sorites paradox in an unusual disguise. The reason Tx is appealing is that there’s a level of unpleasantness that we consider negligible, and that functionally we don’t care much about. Yet we can’t pinpoint where, along the chain, we hit that point. There’s no obvious cutover, just a gradual change that lands us in a very different place from where we started.
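The pairwise trap is easy to reproduce with toy numbers. In the sketch below (the figures are my own illustration, not Voorhoeve’s), each treatment’s intensity falls a little per step while its duration doubles; every pairwise comparison then favours the earlier treatment, even though the later treatments are obviously milder:

```python
def pain(n):
    """Intensity of T[n]'s side-effects: drops a little with each step."""
    return 100 * (0.9 ** n)

def weeks(n):
    """Duration of T[n]'s side-effects: grows a lot with each step."""
    return 2 ** n

def prefers_earlier(n):
    """The pairwise intuition from the text: T[n] beats T[n+1] whenever
    the duration grows proportionally more than the intensity shrinks."""
    return weeks(n + 1) / weeks(n) > pain(n) / pain(n + 1)

# Every single pairwise comparison points back towards T0...
assert all(prefers_earlier(n) for n in range(20))

# ...yet far along the chain the discomfort has become mild, and nobody
# would choose an agonizing T0 over it in a direct comparison.
print(round(pain(0)), weeks(0))    # 100 1
print(round(pain(20)), weeks(20))  # 12 1048576
```

The chain of “locally rational” preferences and the “globally rational” one point in opposite directions, which is exactly the sorites structure described above.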
Voorhoeve and the authors he cites have interesting things to say about how we make decisions, and how we perhaps ought to make decisions, in situations like this. This is important and interesting because sorites series like T are to be found in many, many real-life problems.
Donald Davidson has an interesting argument against strong conceptual relativism. This is the position that people who speak very different languages from ours may do so because they have different conceptual “schemes”, or maps of what the world is made of and how it fits together. This is most famously articulated in the Sapir-Whorf Hypothesis whose argument is that the grammatical resources of your language either reflect or determine the structure of your conceptual view of the world, your conceptual scheme.
This suggests that there may be people whose view of the world is very radically different from our own, and we may be able to tell by looking at how different their language is from ours. It also follows that, unless one can show one language superior to another, one cannot argue that one scheme is superior to the other either. The result is, as Davidson says, “heady and exotic”.
I’ll try to outline his argument against it in this post, and follow up later with what I think is amiss with it. First, what we mean by “radically different” languages is those that are not, in general, translatable one into the other. If each dealt in a completely different set of concepts from the other, how could one possibly be translated into the other without simply re-framing what the one was saying in the other’s terms, thereby losing the difference we’re interested in? So conceptual schemes are like equivalence classes over the set of all languages; one language is translatable into another if and only if they fall under the same conceptual scheme. In this way Davidson aims to reduce two problems — radical language difference and radical conceptual scheme difference — to one.
But given this simplification, how can Sapir possibly know that one language differs radically from another? The language is only intelligible to him, presumably, because he can translate it into one with which he’s familiar. Indeed, Davidson argues that somebody using a language that resisted all attempts at translation would perhaps not be using a language at all, or at least we wouldn’t be able to recognise it as such. It might look like a rule-governed activity in which two people participate, but so does a game of chess, and nobody thinks chess is a language.
If that’s the case, and if languages and conceptual schemes are bound tightly together, it’s hard to see how we’d make sense of somebody who had a radically different conceptual scheme from ours. If we can’t grasp someone’s thought process roughly in terms of our own then what could possibly lead us to believe they’re thinking at all?
What Davidson disputes is the idea that there can be radical differences between conceptual schemes, which would manifest themselves as languages that were, for the most part, mutually not translatable. But he goes on to claim that if our “conceptual schemes” are all more or less the same, perhaps with some local variations, then “conceptual schemes” don’t actually give us anything and we may as well dispense with them.
The idea he’s really up against here is the “third dogma of empiricism” — the idea that there’s an outside, objective world that comes to us via sense impressions, and that those impressions are filtered and organised by our conceptual scheme. If someone has a radically different scheme from us then they may experience the world very differently, although it will be the same world.
So the attack goes like this:
- Conceptual schemes go with languages.
- Mutually unintelligible schemes go with mutually untranslatable languages.
- If presented with a language we couldn’t translate, we would be unable to say what kind of conceptual scheme it embodied precisely because we can’t translate it.
- In extreme cases we may not be able to recognise a behaviour as linguistic (or conceptual) at all if it were sufficiently different from our own.
- We can’t make sense of the idea that different conceptual schemes exist.
- If we can’t imagine a situation in which we might be led to believe in something by evidence, experience, a priori argument or other sensible means then we don’t have ontological room for it.
- Consequently, the whole idea of a “conceptual scheme” fails to explain anything and should be abandoned.
Davidson draws a startling conclusion from this — that any conceptual scheme that was in any way intelligible to us as such would be substantially similar to our own. Hence there really is only one conceptual scheme, which is why he thinks it’s superfluous. As a result, he concludes that we won’t go far wrong in imagining that linguistic truth-value is all there is to “truth”, because there’s really only one way for us to “carve up” reality and “organise” the parts. I find this as heady and exotic a suggestion as the strong Sapir-Whorf hypothesis itself.
I hope I haven’t set up a straw man here. In any case, I think the argument as I’ve sketched it above is interesting, and so, with apologies to Davidson, I’ll take a run at disputing it in an upcoming post.
Ever since I started using WordPress, I CAN HAS CHEEZBURGER? has been their number one blog. This site deals in the form of humour known as lolcats, which entertained the usually erudite folks over at Language Log for a while.
I don’t get lolcats. That is, in a sense I do get them. I’m very fond of cats. I’m as amused as the next pet owner by imagining they’re small, not very bright humans, and taking a guess at what they’re thinking. So I understand why lolcats are funny; they just don’t make me laugh. Everyone thinks this kind of thing is fine, which makes humour part of a very curious category of activities.
Let’s look at some other sorts of things. For example, if I agree that a scientific theory is supported by plenty of evidence and otherwise meets the criteria for being a good theory, I can’t just shrug and say that I don’t find it true. If I think that rolling back the state should be the first priority of the government and you disagree then at least we can have an argument about it on the basis that one of us must be wrong. Even if you believe Coldplay are better than Mozart, I think we can have a sensible discussion about it. But if I don’t find lolcats funny and you do then there’s really nothing much more to talk about.
I ordered the examples above in a deliberate way to suggest that there might be a continuum here between things it’s not OK to treat as mere “matters of taste” and things it is. If, when I meet you, you tell me you’ve just had a haircut, I can’t refuse to believe you simply because I don’t find the idea “truthy”; if I don’t believe you then I need to have a reason, which I can state (“You mean you paid for it to look like that? Come off it”).
At the other end of the scale, I can look blankly at a lolcat you find hilarious and just not find it funny. The idea that some matters, like humour and perhaps aesthetic judgements, are purely subjective is commonplace, and seems to be a natural counterpart to the idea that some matters are objective and amenable to rational, empirical or other public sorts of enquiry.
Probably the most famous philosophical investigation of humour is Bergson’s Laughter (link is to the full text). Freud wrote an influential book about jokes too. But what both of these, and most other writers on the subject, do is try to say what it is that makes some x a joke; what puts it into the “humour” category. The problem is that comedy is just too broad a church to allow for this. Try reading any book purporting to explain what humour is with lolcats in mind.
My suspicion is that humour is a fuzzy class, and that we recognise x as a joke thanks to its family resemblances with other things we already know as jokes. That doesn’t explain what humour is for, or why we have a seemingly involuntary physical reaction to it, but I think it does away with any attempt to classify it rigidly.
What that means, though, is that humour may not be a special category of utterance, safely insulated from other things that might give us cause for epistemological concern. Indeed, it suggests that perhaps the distinction between objective and subjective discourses may not be at all firm.
The usual worry is that this recoils into a trivial sort of relativism; what’s true for you need be true for me only inasmuch as your lolcats are funny for me. Yet as Nietzsche points out, a firm dividing line can pose just as much of a problem:
For between two absolutely different spheres, as between subject and object, there is no causality, no correctness, and no expression; there is, at most, an aesthetic relation: I mean, a suggestive transference, a stammering translation into a completely foreign tongue — for which there is required, in any case, a freely inventive intermediate sphere and mediating force.
This reminds me of Deleuze’s rapturous but entirely speculative essay “He Stuttered”, which considers literary language as a language close to chaos. One could quite easily mistake Deleuze for a mystic who believes transcendental, objective reality is accessible only through poetry.
But I think Nietzsche’s point is more interesting than that; I think he’s simply pointing out that the objective/subjective distinction is a species of dualism, and it’s open to the attack that all dualisms can fall prey to, which is that they posit separated and qualitatively distinct worlds but beg the question of how these worlds can ever be connected. A strong proponent of the distinction might find herself committed to solipsism just as easily as someone who rejected it risks relativism. More on this to follow, undoubtedly.
I was talking the other day with a Big Ideas co-conspirator about what we were doing with BigI, and what else it’s like. We came up with the term “physical blogging” to describe it. By “physical blogging” I mean doing something you like to do online, but IRL, as we used to say in the old days.
This got me thinking about the whole idea of a “virtual” world that exists parallel to, but separate from, the physical or “actual” one. That doesn’t have to be a metaphysical idea. We can say that the virtual is dependent (philosophers would say “supervenient”) on, and fully reducible to, the physical, but that it remains a useful abstraction because reducing things we do online to a bunch of changes of state of some silicon or electrons isn’t very illuminating.
Sites like Second Life seem to be all about trying to move those things-you-want-to-do inside your computer. Here’s “Slim Warrior”, a singer (I think) who performs in Second Life (from here). She’s comparing “attending” a “concert” in Second Life with listening to an MP3 you found on the web:
Because you have a visual aspect as well, then you get a far more interesting immersive experience. So, rather than going to listen to music uploaded on a site, you are immersed in that show, without of course the hassle of having to get to a concert, it is right there in your own home.
There’s a narrative about the internet that it’s more convenient to do things online than it is to do the “same” things in the physical world. Given a task, it says, most people prefer to do it online if they can. Then of course there are those luddites for whom doing anything online is a chore; to them, there was nothing wrong with the offline way of getting things done in the first place.
This springs from taking the virtual/actual dichotomy too seriously, and from the idea that there’s a one-to-one correspondence between things we do in the virtual world and their physical counterparts.
But even something as simple as buying a book on Amazon is radically different from buying a book in a shop. There are some similarities, true, but really these are just two distinct activities, not actual and virtual versions of the same “thing”. What sort of thing would that be, anyway? Actual or virtual? Or something else?
The dichotomy could be thought of as a metaphor with mind-body dualism; just as to everything physical there corresponds an idea in the mental realm, so to every activity IRL there is (or ought to be) a virtual equivalent.
In fact we can create cultural forms that cross from one world to the other, and do so all the time. When I order a book on Amazon it isn’t a virtual book that arrives but a physical one. People use the internet to organise, talk about or otherwise enhance things they do in the physical world, and sometimes vice versa.
Some of the media interest in Second Life is even concerned with the way in which it crosses back into Real Life. Second Life users make virtual money they can convert into real money. They (really) sue one another over (virtual) copyright infringements, hold (virtual) meetings to make (real) business decisions and so on.
The key thing here is that we tried to take things we do in Real Life and translate them into the virtual world. This activity happens to produce real-world side-effects, both intended and not.
Physical Blogging travels in the other direction. We take something we like to do online and look for (or create) something analogous in the physical world. As a side-effect maybe some web content might get generated, say, or some discussions may be carried on online, but the important bit is the thing that happens in the middle.
Both examples of world-crossing can seem surprising until you notice all the ways we already do this kind of thing. The border between virtuality and actuality starts to look more and more fuzzy. Perhaps this is just the internet becoming more mature, and more integrated with the other things we do in our lives.
If you’re a certain excitable sort of cultural critic, you can take this sort of thing to indicate a Baudrillardian collapse of the virtual/actual distinction. Baudrillard himself isn’t so excitable, in fact, and I think it simply indicates that the dualistic metaphor is exhausted and no longer helps us think about how the internet fits into our lives. Perhaps the things we used to put in the ontologically separate “virtual world” are now just different sorts of things that live in Real Life.
This week we enjoyed a splendid Big Ideas session on consciousness, and particularly on the question of whether consciousness can be evoked by software. I have no background at all in the philosophy of mind, so this really got me thinking.
By instinct I’m a sort of physical reductionist about lots of things, including the mind. As far as I know, nothing important has ever been found that doesn’t have a plausible physical explanation. Neuroscience appears, at least to an outsider, to provide strong evidence of a physical origin for the mind, and the mental effects of brain damage are a striking example of that. In any case, dualism offends against perspicuity, and it’s a lot of ontological machinery to swallow for just one problem. In other words, I think mind-body dualism uses a sledgehammer to crack a nut.
That said, I’m not what I’d call a “discursive” reductionist — someone who thinks that ordinary talk of the mind as something separate from the body is harmful. In fact I suspect that talking about ideas or emotions in purely neurological language would be extremely unwieldy even if we were able to do it. So I don’t object to dualistic talk on the understanding that, if we needed to do so, all such talk could in theory be reworded in physical terms.
All that might suggest a “hard AI” position; since the physical processes that produce consciousness can be modeled in software, they can produce consciousness there, too. But as it stands this is a category mistake. We can very easily model the motion of projectiles using a computer, for example, but that doesn’t mean I can use it to fire a cannonball. All I have in software is a simulation of a projectile, not the real thing. Perhaps consciousness is like that, or perhaps not. Thinking sometimes does feel a bit like running a computer program, and software can do things that look intelligent, like learn how to play chess well. We don’t know enough about it to be sure.
One thing that makes this hard is that we’d need some way of knowing whether a machine we’d programmed was conscious or not. But I don’t even have a way of telling whether the person sitting opposite me is conscious; all I can do is infer that consciousness from the fact that they seem very like me in many other ways. In other words, I think we believe that other humans are conscious in part because they’re the same species as us, and we know we share more of our important macro-level features.
I think my cat might be conscious because it shares many — although by no means all — of my important features. I don’t think the animated Mickey Mouse is conscious, though. He behaves a lot more like I do than my cat does, but physically all Mickey Mouse amounts to is many thousands (millions?) of drawings on pieces of paper. As such he’s not much like me physically; much less so than my cat. For that reason I disregard his seemingly conscious behaviour as an illusion. In this case, of course, we know it’s a deliberate illusion, which makes it easier to decide.
So where does that leave us with a conscious computer? As a very intricately-shaped stone, a computer is physically extremely unlike me. It isn’t alive, or made of the same matter, nor did it come about in a similar way. I’d be inclined not to believe it was conscious. It’s at the Mickey Mouse end of the scale, not the end I and my cat occupy. I suppose this is probably a multi-dimensional scale — with dimensions like “species-similarity”, “behaviour-similarity”, “likelihood of deliberate deception” and so on — but I trust you get the point.
Given that, what would convince me it was conscious? Or, which is a similar question, what would convince me that a zombie wasn’t conscious? If it looks like a duck and quacks, roughly, like a duck, I’m inclined to believe it is one. A zombie would have to behave egregiously non-consciously before I’d be inclined to believe it wasn’t conscious, and I think a machine would have to exhibit some kind of extra-conscious behaviour to convince us it was really conscious.
That’s a problem because, while I can imagine things being less conscious than I am, I can’t imagine anything more so. What would it do that would be different from the things I do? How would I know it was extra-conscious? So although I have no idea whether a machine could be conscious, I’m quite sure it couldn’t be convincingly conscious. I certainly don’t see what test we could possibly subject it to that would be anywhere near decisive, since the best it could do would be to exhibit as many behaviours indicative of consciousness as I do, and that’s not enough to bridge the plausibility gap that results from its physical differences from me.
The Telegraph newspaper yesterday carried the kind of batty comment piece only it can produce. This one is by Simon Heffer, and he sets out from the familiar charge that the BBC has a Left-wing bias. The evidence he presents for this is that everybody knows it’s true. Let’s entertain him in this, because it’s not his main point.
More interestingly, though, he concludes that the elimination of bias is either unachievable or undesirable:
With digital broadcasting and the internet, the case for the BBC is highly questionable. So is the case for impartiality.
He gives no evidence for this, either, but for the sake of enquiry let’s again take him at his word.
The picture he paints of the future of news reporting is this:
It is time to let a thousand flowers bloom. The need for political balance and all other forms of impartiality should be ended.
If someone wants to set up a Marxist channel that talks only about Scotland, let him. The market will decide, and that, given the feasibility now of real choice, is how it should be.
Reading this reminded me strongly of the philosopher Richard Rorty, whose death earlier this month did not go unremarked in the blogosphere. I didn’t post about him at the time because I don’t know his work well, although he was a figure who divided people very strongly when I was pottering about in academia a decade or so ago.
Rorty’s hard claim — crudely put, that epistemology is hopeless, and that statements should be used as artworks or weapons rather than in the pursuit of truth — seems to underlie this idea that the news can be privatised; that is, not only the processes of producing the news, but the news itself. Heffer’s remarks point to a deep skepticism about truth, and an idea that monetarism (yep, Milton Friedman and, erm, Enoch Powell get namechecked) can even replace traditional empirical procedures as a way of determining what’s true or false. Otherwise, why would you embrace the market model he proposes rather than fleeing from it?
I was and remain something of an anti-foundationalist, so I have some sympathies with Rorty’s position, although in the end I find it no more helpful than the positivists’ dismissal of metaphysics as simply not interesting. I never quite bought the idea that radical relativism somehow implies a right-wing liberal politics; this essay sums up better than I could some of the arguments over that.
But I do wonder whether the converse is happening; for all the conservatives’ reaching for big ideas (Letwin had a go at this, and I had a go at him for it, and on Monday Cameron did the same in yet fluffier language), I wonder whether the marketisation of every single area of life, even epistemology, is really their fundamental ideology.
I’ve certainly noticed this in Telegraph commentators before. For instance, a couple of months ago I got into an ill-advised bit of comment-writing over a piece by David Millward whose sole purpose seemed to me to be to propose the abolition of a tax because taxes are bad. Taxes are bad, of course, because they’re not phenomena that emerge naturally from a market. If they were, they’d be great. For instance, in Telegraph ideology, paying school fees is good, but paying taxes to fund a state school system is bad, even though the former actually costs more money than the latter. It’s a matter of principle, not self-interest.
All this death and relativism brings me neatly to the demise of Bernard Manning, and particularly the obit he wrote for himself in the Daily Mail. Here he goes:
I don’t think the Commission for Racial Equality will be holding a wake for me, either. Nor will the Lesbian and Gay Rights lot or the feminists. [...]
It was their campaigning that kept me off mainstream television for years
In their obsession with turning comedy into a branch of Left-wing politics, they forgot that the only point of jokes is to make people laugh. And that was what I was good at, whether I was on the cabaret circuit in Manchester or at the MGM Grand in Las Vegas.
There it is again, a certain success in the market proving certain ethical and political principles invalid, suggesting that such principles can be tested by experiment. In fact I find most of Manning’s obit sensible and humane, and what he says there concurs with the way he’s spoken whenever I’ve seen him interviewed on TV. I don’t have an opinion on Manning’s act because I never saw it, but I think the same ideology is at work in these words as the one that lies behind Heffer’s.
I always thought Rortians and their opponents, both of whom claimed that the denial of transcendent Truth had far-reaching practical consequences, were just making a category mistake. Non-philosophers know that deep questions in metaphysics — whether anything exists except me, whether other people have minds, whether things have Platonic essences — are really irrelevant to getting things done. As a pragmatist I think Rorty himself must have felt similarly, although the bits of his work I’ve read don’t always suggest that.
I suspect this insistence on the free market is a similar kind of mistake. It springs from a skeptical project that, like Rorty’s approach to Truths, claims to discredit the idea of the State, but the capitalisations are important here. Rorty’s target isn’t the ordinary truth that we all live by; I’m quite sure he was capable of arguing over whether or not it was true that he’d been paid for some speaking engagement, for example, without being silenced by metaphysical doubt. Likewise, the proper target of the critical project of laissez-faire economics isn’t any particular system of government but some idealised form of the State. The step from the particular, ideal case to the general case leads to the category mistake.
We’re hearing a lot coming out of Conservative Central Office just now about Labour being in favour of big government, and the phrase “nanny state” is being bandied around once again. Their proposal is always to deregulate, decentralise and devolve. There’s nothing wrong with this, and in some cases it’s a good idea, but it’s interesting, at least to me, how well the rhetoric concords with a monetarism that’s been turned into a sort of metaphysical belief system, and that now informs practical policy.
[UPDATE: Feeling guilty for posting on Rorty without knowing much about him, I just read his "Pragmatism, Relativism and Irrationalism" and found it impressively sensible. I know there are deep controversies here, but on this evidence Rorty at least doesn't seem to be the cartoon character some have painted him as.]
I found the following video at this site, a charming topological “advent calendar” created by Oliver Labs, Hans-Christian Graf von Bothmer and some of Bothmer’s students. Most of the topics require some foreknowledge, but I thought I’d try to explain one of them in layman’s terms, just for fun.
Here’s the video; watch it before reading on (try to ignore the flying-through-space-and-landing-in-Battery-Park bit at the beginning; oh, and no, there’s not supposed to be any sound):
OK, so the setup is that you’ve got an infinite, flat surface called the “plane”, and on it we’ve balanced a sphere, like a snooker ball on a table. We’ve identified a point at the top of the sphere that we’ll call the “north pole”, for obvious reasons.
Now, pick any other point on the sphere. There’s a unique straight line that passes through the inside of the sphere starting from the north pole and touching the point you picked. In the video this line looks like a red laser beam. It continues beyond the chosen point until it hits the plane somewhere. (Try looking at another description of this here if you find this confusing).
By moving the chosen point around on the sphere, you can pick out different points on the plane with the laser beam. In fact it’s not hard to see that you can pick out any point on the plane just by picking a suitable point on the sphere. Just turn the beam around so it’s pointing in the right general direction and then raise or lower the angle until it hits the desired point on the plane. The video illustrates this kind of “targeting” pretty nicely. The point on the plane is called the “stereographic projection” of the chosen point on the sphere.
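The “laser beam” construction can be written down directly. Here’s a minimal sketch in my own coordinates (a unit sphere resting on the plane z = 0, so its centre is at (0, 0, 1) and the north pole N is at (0, 0, 2)); the projection of a sphere point P is simply where the line through N and P meets the plane:

```python
import math

def stereographic(px, py, pz, r=1.0):
    """Project a point P on a sphere of radius r resting on the plane z=0
    (centre (0,0,r), north pole N=(0,0,2r)) from N down onto the plane.

    The line N + t(P - N) meets z = 0 at t = 2r / (2r - pz), so the beam
    hits the "table" at (t*px, t*py).
    """
    nz = 2.0 * r
    if math.isclose(pz, nz):
        raise ValueError("the north pole has no projection: "
                         "the beam is parallel to the plane")
    t = nz / (nz - pz)
    return (t * px, t * py)

# The south pole (the point touching the table) projects to itself:
print(stereographic(0.0, 0.0, 0.0))   # (0.0, 0.0)
# A point on the sphere's "equator" lands one diameter out:
print(stereographic(1.0, 0.0, 1.0))   # (2.0, 0.0)
```

Any other choice of radius or resting plane just rescales the picture; the one point with no image, as the video shows, is the north pole itself.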
Have a look at the nice animations here to help get an intuitive handle on this. There’s also an amusing application of stereographic projection to the argument for a flat earth here (“not accepted by real scientists”).
The first thing to say is that this proves that there are the same number of points in the plane as there are in the sphere, even though one is an infinite “expanse” and the other is something you could hold in your hand. Well, there are almost the same number.
Actually, weirdly, in a sense the sphere has “one more” point, which is the north pole itself. As the end of the video illustrates, as our chosen point on the sphere gets closer to the north pole, its stereographic projection gets ever further away. Intuitively, when the chosen point actually is the north pole, the laser beam is parallel to the plane and never hits any point on it.
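You can watch the projection run away numerically. A self-contained sketch, using the same hypothetical setup as before (unit sphere resting on z = 0, north pole at height 2):

```python
import math

# As the chosen point climbs towards the north pole (z -> 2), its
# projection shoots off towards infinity.
prev = 0.0
for eps in (0.5, 0.1, 0.01, 0.0001):
    pz = 2.0 - eps                         # height of the chosen point
    px = math.sqrt(1.0 - (pz - 1.0) ** 2)  # on the sphere x^2 + (z-1)^2 = 1
    dist = (2.0 / (2.0 - pz)) * px         # projected distance from origin
    assert dist > prev                     # strictly further out each time
    prev = dist
    print(eps, dist)                       # distance grows without bound
```

However close to the pole you choose your point, there’s always a point on the plane it hits; only the pole itself has nowhere to go.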
This tells us something deep and topological about the relationship between the plane and the sphere. In fact, it tells us that the plane isn’t compact. “Compactness” is a technical term that, roughly speaking, means the space has no “missing points”: every journey that looks like it’s converging somewhere has somewhere in the space to land. The sphere, on the other hand, is compact. Since compactness is a “topological invariant” (a property preserved by any amount of continuous stretching and bending), no matter how stretchy the sphere was, or how many dimensions you had to work with, you could never turn a sphere into an infinite plane, or vice versa.
The stereographic projection does suggest a way to make the plane compact, though. We’ll simply invent a new point, called the “point at infinity“, and add it to the plane. Then we’ll define it to be the stereographic projection of the north pole, no matter which angle we shine the light in. This is known as the “Alexandroff one-point compactification” of the plane, and is important in many areas of mathematics, especially the theory of complex numbers in which the sphere is known as the Riemann Sphere.
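Running the beam the other way makes the compactification concrete. Mapping each plane point back up to the sphere (same hypothetical coordinates as my earlier sketches: unit sphere on z = 0, north pole N = (0, 0, 2)), points far from the origin land ever closer to N, so gluing in one “point at infinity” whose image is N itself is exactly what’s needed to close the plane up into a sphere:

```python
def inverse_stereographic(x, y):
    """Send a plane point (x, y, 0) back to the unit sphere resting on
    z = 0, along the line towards the north pole N = (0, 0, 2).

    Solving |N + t((x,y,0) - N) - (0,0,1)| = 1 for t gives
    t = 4 / (s + 4), where s = x^2 + y^2.
    """
    s = x * x + y * y
    return (4 * x / (s + 4), 4 * y / (s + 4), 2 * s / (s + 4))

# The further out the plane point, the closer its image sits to the
# pole (0, 0, 2) -- which is why a single extra point "at infinity"
# is enough to compactify the whole plane:
for radius in (1.0, 10.0, 1000.0):
    print(radius, inverse_stereographic(radius, 0.0))
```

The origin maps to the south pole, and the z-coordinate of the image creeps up towards 2 as the plane point recedes, without ever reaching it.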
I’d like to make this a semi-regular series; as it happens I couldn’t find a good, non-technical description of compactness online, so maybe next time I’ll try to provide one.