
There was a spectacularly silly article by somebody called Chris Ormell in the Times Higher Education Supplement a few days ago.

A physicist, Jon Butterworth, has published a rebuttal in the Guardian, but I think that it engages with the subject matter much more than is necessary.

To my mind, the problem is not merely that the author of the original is ignorant (though he is) but that he is producing nonsensical arguments, which wouldn't make sense even if he knew the subject better. No detailed factual evidence is needed to point this out.

I suspect Butterworth was keen to show off all those gorgeous pictures of the universe and actually educate the public a little: logically unnecessary, but perfectly understandable on his part.

I comment in detail below, and then editorialise a bit.

Here is an extended quotation of Ormell's essay:

> So were the values of c and G - the speed of light and the gravitational constant - the same a million years ago as they are today? It must surely stick in the throat to say that we are quite sure about this. At one time people thought that the magnetic poles were absolutely fixed. Now we know that they are on the move. So let's admit that there is a minute element of doubt here, say 1 per cent. If so, we can be 99 per cent sure that these physical constants were the same 1,000,000 years ago as they are today. If we follow logic - and being blindly sure that the mathematics is right should hardly incline us to reject logic - then we can be only (0.99)^{2} × 100 per cent sure that these constants were the same 2,000,000 years ago. And (0.99)^{100} × 100 per cent sure that they were the same 100,000,000 years ago. This is 36.6 per cent, a reasonable figure perhaps, given that 100,000,000 years ago was a long time ago.
>
> Cosmologists assure us that the Big Bang happened 13.7 billion years ago, that is, 13,700 million years ago. So what is the figure for the degree to which we can be sure that the physical constants c and G were the same then? Clearly it is (0.99)^{13,700} × 100 per cent, which comes out as 1.59 × 10^{-58} per cent.

For anyone who is led astray by the author, here's an entirely parallel stupid argument:

Occasionally, people are pronounced dead in error. So, given someone who is pronounced dead, there is in fact maybe a chance of one in a million (I don't know the precise number, but let's say that that's it) that he or she will in fact be walking around a week later as if nothing had happened. But that means the chance of them still being regarded as dead a week later is only 999,999 parts in a million.

Therefore the probability that they'll still be regarded as dead a million years later is about (999999/1000000)^{52000000} (since there are about 52000000 weeks in a million years), which is so close to zero as to be negligible.

So we've shown that it's almost certain that practically all dead people will be walking around in a million years' time, or, in the parlance of horror-movie posters: **the dead will walk the Earth!** Or maybe not.
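For what it's worth, both Ormell's figures and the parody's can be checked numerically in a couple of lines (a sanity check of the arithmetic quoted above, nothing more):

```python
# Ormell's figures: (0.99)^100 x 100 per cent and (0.99)^13700 x 100 per cent
after_100_myr = 0.99 ** 100 * 100       # "sure" after 100,000,000 years
after_big_bang = 0.99 ** 13_700 * 100   # "sure" after 13,700 million years
print(f"{after_100_myr:.1f}")           # 36.6
print(f"{after_big_bang:.2e}")          # 1.59e-58

# The parody: probability of still being regarded as dead after a million
# years of independent weekly one-in-a-million "revival" chances
still_dead = (999_999 / 1_000_000) ** 52_000_000
print(f"{still_dead:.1e}")              # 2.6e-23, i.e. essentially zero
```

Of course, the arithmetic was never the problem: the nonsense lies in treating each year (or week) as an independent trial in the first place.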

So I am completely bewildered by this. If Ormell is attempting to demonstrate that inane misapplications of mathematics produce stupid answers, he has demonstrated it perfectly well. But that is well known, and it hardly needs such a highly technical context to point out; there are plenty of famous elementary examples.

But I think he's trying to argue something else. I have no idea what.

The writing about pure mathematics is still more incomprehensible. Not only do I not understand the point he is trying to make, but I actually can't be certain I understand the arguments he is using. I have tried to guess.

Anyway, he says:

> Georg Cantor produced an argument that seemed to point to transfinite immensities, but that was before we realised that mathematics was incompletable.

Cantor's arguments still point to a hierarchy of sizes of infinity (which is what I take him to mean by this) just as much as they ever did. And the fact that we can only refer to countably many objects doesn't make this irrelevant: it underscores its importance. We can't possibly make a list of all real numbers, which means the arguments we use for the reals must be of a different character from those we used for the positive integers.
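For concreteness, the diagonal argument behind this can be sketched in a few lines (an illustrative toy with made-up digit strings, not a proof):

```python
def diagonal(rows):
    # rows: digit strings, row i being the purported decimal expansion of
    # the i-th real in the list. Change the i-th digit of row i (using only
    # 4 and 5, to dodge the 0.4999... = 0.5 ambiguity), producing digits of
    # a real number that differs from every listed one.
    return "".join("5" if row[i] != "5" else "4" for i, row in enumerate(rows))

rows = ["3141592653", "7182818284", "4142135623"]
print(diagonal(rows))  # "555": differs from row i in its i-th digit
```

Whatever list you feed in, the output is a number that isn't on it, which is exactly why no list can contain all the reals.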

The word "incompletable" presumably alludes to Gödel's incompleteness theorems, but I can't think of any relationship between the sizes of infinite sets and incompleteness, and certainly no excuse for throwing them both into the same article and hoping blindly that it'll make sense. Please email me (`jdc41` at `cam` dot `ac` dot `uk`) if you have any clue whatsoever.

At any rate, it seems very difficult even for a working pure mathematician to have any clue what this segment of the article is trying to impart. God only knows what a layman would think.

I'm frequently worried by attempts to apply philosophy to real life. Sadly, all too often they seem to take the following form:

- Take some topic (often one in which a practical understanding has already been reached).
- Abstract it to some more troublesome general question, possibly concealing some logical error.
- Employ reasoning on that troublesome general question, possibly again committing heinous logical errors (which are harder to spot, because of the unfamiliarity of the setting).
- Make some conclusion which is by now very unlikely to be justifiable.

In certain areas, I think this is quite damaging. The application of philosophy to political questions is frequently unsavoury, for example. But this article is another example: it purports to attack a question in educational philosophy via many abstractions with serious logical errors in each.

More practically, I worry that it is easier for a philosopher of mathematical education to publish a popular article which is daft, wrong and irrelevant to maths education than it is for a mathematician to publish a popular article containing actual mathematics. There's something wrong with the tastes and priorities of newspaper editors.