Scientists Aren’t Always Complete Idiots

I don’t like to spend too much time highlighting and making fun of silly things on the internet; it’s not like they are going to be stamped out by a few well-placed blog posts. But this awful little article by Chris Ormell in Times Higher Education merits an exception. It has been ably demolished by Jon Butterworth at the Guardian, but is worth revisiting, because its badness illuminates a larger point. (Via @astroparticle.)

Ormell’s thesis is laid out at the start:

Mathematics tends to be both misunderstood and credited with magic powers, especially by those who are intelligent but not mathematically inclined. Arising from this, there is a perennial temptation for mathematicians to play to the gallery and to assume the role of magicians and, even more temptingly, high priests.

The worrisome sign here is not the explicit content, which is vague enough to be unobjectionable, but the gleeful indulgence in overgeneralization. It seems clear that what we are in for is a broad denunciation of mathematicians and their ilk, not a nuanced appreciation. The devil will be in the details.

We have seen blind faith in mathematics in action recently. In addition to the contribution of mathematical models to the great credit crunch of 2008, take physicist Stephen Hawking’s claim that philosophy is dead. The reason he gave was that philosophers have stopped bothering trying to understand modern mathematical cosmology. This cosmology is based on current mathematical physics, most of which has been in place for less than 100 years. It is an impressive edifice of concepts and mathematical models, but one that has not yet built up a track record for reliability over a thousand years, let alone a million years.

Hawking’s claim that philosophy is dead was silly, but not nearly as silly as this. I’m not sure what the implication is supposed to be — we shouldn’t trust sciences that don’t have a thousand-year track record of reliability? Since that includes almost all of contemporary science except for a bit of astronomy, we’d be living in primitive circumstances indeed. The “million years” criterion is even more awesome. That means we shouldn’t trust things like “writing,” or for that matter “human beings.”

Around 1900, various theorists wondered whether the velocity of light might be slowly changing. It was a pertinent question. If the velocity of light was changing very slowly, many of our astronomical calculations would have to go into the bin. Perhaps the gravitational constant might be slowly changing: that, too, would throw our calculations out.

I don’t know what Ormell is talking about here. I suppose it’s possible that there were people around 1900 who were questioning the constancy of the speed of light; I don’t know all of the relevant history. But certainly the famous part of the story was that people were looking for direction-dependent differences in the speed of light due to our motion through the aether, not a slow secular change.

Instead it was discovered that light does not travel in absolutely straight lines, but bends slightly due to the Earth’s gravitation. It is a minute effect and detectable only with great difficulty, but its consequences are deadly. If this degree of bending occurred in outer space, the light from the nearest star would have completed a circular trajectory on its way from its source to our telescopes.

There is some truth here, wrapped in the colorful garments of severe misunderstanding. Yes, light does not travel in absolutely straight lines. But this has nothing to do with its speed, only its direction. And it wasn’t simply “discovered”; it was first predicted by Einstein, and then verified by Eddington and others. (All of whom used math, by the way.) Its consequences are certainly not deadly. The bending is caused by gravity, which is not uniform through space, so the business about circular trajectories is just some earnest babbling.
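
For what it’s worth, the size of the effect is easy to estimate. In the weak-field limit, a light ray grazing a mass M at impact parameter b is deflected by roughly 4GM/(c^2 b). Here is a back-of-the-envelope sketch (purely illustrative, using standard solar values) showing why Eddington’s measurement was “a minute effect,” and why the bending goes away when there is no mass nearby:

```python
# Quick estimate of the light-bending effect (illustrative only).
# Weak-field GR deflection for a ray grazing a mass M at impact
# parameter b: alpha = 4*G*M / (c^2 * b).

import math

G = 6.674e-11       # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # mass of the Sun, kg
R_sun = 6.957e8     # solar radius, m (impact parameter for a grazing ray)

alpha = 4 * G * M_sun / (c**2 * R_sun)      # deflection in radians
arcsec = math.degrees(alpha) * 3600         # convert to arcseconds

print(f"deflection at the Sun's limb: {arcsec:.2f} arcsec")  # about 1.75
# The deflection is set by the mass doing the bending; far from any mass
# it goes to zero, so light crossing empty space toward the nearest star
# does not curl around into a circle.
```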

But, but, but…we all tend to be quite sure that this is only fanciful thinking. Of course light travels in straight lines in outer space! We all have a degree of blind faith in mathematics. We have no reason to believe that merely travelling through space will cause light to bend, though the presence of dark matter in space would have a bending effect if that matter were not absolutely uniformly distributed. (As we know almost nothing about dark matter, it is a material assumption to suppose that it is absolutely uniformly distributed.)

Everything here is entertainingly wrong. Nobody who understands the subject believes this is fanciful thinking; we’re quite convinced of the reality of gravitational lensing. That is because of, not despite, our knowledge of mathematics. And dark matter is certainly not uniformly distributed; if it were, we wouldn’t have been able to find it in galaxies and clusters. Its non-uniformity is kind of the point. It’s very hard to tell what he is even trying to say here.

But now we reach the culmination.

So were the values of c and G – the speed of light and the gravitational constant – the same a million years ago as they are today? It must surely stick in the throat to say that we are quite sure about this. At one time people thought that the magnetic poles were absolutely fixed. Now we know that they are on the move.

Ah, yes. “Once some people were wrong about something, so there’s no reason to believe anything.” One of my favorite arguments.

So let’s admit that there is a minute element of doubt here, say 1 per cent. If so, we can be 99 per cent sure that these physical constants were the same 1,000,000 years ago as they are today. If we follow logic – and being blindly sure that the mathematics is right should hardly incline us to reject logic – then we can be only (0.99)^2 x 100 per cent sure that these constants were the same 2,000,000 years ago.

And (0.99)^100 x 100 per cent sure that they were the same 100,000,000 years ago. This is 36.6 per cent, a reasonable figure perhaps, given that 100,000,000 years ago was a long time ago.

Cosmologists assure us that the Big Bang happened 13.7 billion years ago, that is, 13,700 million years ago.

So what is the figure for the degree to which we can be sure that the constants c and G were the same then? Clearly it is (0.99)^13,700 x 100 per cent, which comes out as 1.59 x 10^-58 per cent.

One hardly knows where to begin. We could point out that dimensionful quantities like c and G can’t really change on their own, only with respect to some other dimensionful quantities — only dimensionless ratios are observable. But physicists often speak this way out of laziness, so we will cut Ormell some slack. Obviously there is no reason to attach a 1 percent uncertainty to the value of G a million years ago — that was pulled out of some orifice or another. The assumption that such an uncertainty can be independently assigned to every million-year period in the history of the universe is so woefully wrong that the feelings it conjures are closer to compassion than annoyance. (Don’t skip over “being blindly sure that the mathematics is right should hardly incline us to reject logic.” That’s a rare gem; treasure it.)
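
For the record, the arithmetic in the quoted passage is internally consistent; the exponential collapse comes entirely from the assumption that each million-year interval carries its own independent 1 per cent doubt. A toy calculation (purely illustrative, and the 1 per cent is Ormell’s number, not anything measured) makes the contrast with a single, correlated doubt about the laws of physics explicit:

```python
# Toy illustration of why Ormell's number is an artifact of his assumption,
# not of physics. If each million-year interval carries an *independent*
# 1% chance of the constants drifting, confidence decays exponentially;
# if the 1% doubt applies to the laws as a whole, it does not compound.

p_per_interval = 0.99        # confidence per million-year interval
age_in_myr = 13_700          # age of the universe in millions of years

independent = p_per_interval ** age_in_myr
correlated = p_per_interval  # one global 1% doubt, applied once

print(f"independent-per-interval assumption: {independent * 100:.2e} per cent")
# -> about 1.6e-58 per cent, reproducing Ormell's figure
print(f"single correlated doubt:             {correlated * 100:.0f} per cent")
# -> 99 per cent
```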

But the real point is this: after pushing around these nonsensical numbers, Ormell concludes with a flourish that we have essentially zero reason to believe that Newton’s constant of gravitation had the same value in the early universe that it has today. Put aside for the moment the question of whether this is “right” or “wrong.” The thing I want to highlight is: does he really believe this is something professional physicists have never thought of?

That’s the irksome bit. I am a firm believer that this is everyone’s universe, and every person has an equal right to think and theorize about it, regardless of their credentials or education. But when you are tempted to take those musings seriously — enough to write them up in an essay and present the results proudly in Times Higher Education — a reality check is in order. Scientists aren’t right about everything (far from it), but they do spend a lot of time thinking about these things. In order for them to have never considered a possibility like this, the whole lot of them would have to be in the grips of an extremely anti-scientific blindness to alternative possibilities. Which I suppose is Ormell’s thesis, but he is only proving that he believes it, not that it is true.

Nobody is harder on scientific theories than scientists are. That’s what we do. You don’t become a successful scientist by licking the metaphorical boots of Einstein or Darwin or Newton; you hit the jackpot by pushing them off their pedestals. Every one of us would love to discover that all of our best theories are wrong, either by doing an astonishing experiment or coming up with an unexpectedly clever theory. The reason why we have the right to put some degree of confidence in well-established models is that such a model must have survived decades of impolite prodding and skeptical critiques by hundreds of experts.

Questioning whether Newton’s constant G is really constant (with respect to some other measurable quantities) is an old game, going back at least to Paul Dirac. Many contemporary cosmologists have used data from the early universe to constrain any such variation — I’ve even dabbled in it myself. The answer is that the value of G one minute after the Big Bang is within ten percent of its value today. That’s from data, not theory; if it weren’t true, the expansion rate of the universe during primordial nucleosynthesis would have been different, and we wouldn’t correctly predict the abundance of helium that we observe in the universe.
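
To give a flavor of where a constraint like that comes from: the expansion rate during nucleosynthesis scales as the square root of G, a faster expansion freezes out the neutron-to-proton ratio at a higher temperature, and more neutrons means more helium. Here is a back-of-the-envelope sketch of that freeze-out argument (purely illustrative; the freeze-out temperature and the allowance for neutron decay are rough assumed values, and real analyses track the full reaction network):

```python
# Rough sketch of why primordial helium is sensitive to G (illustrative only).
# The weak-interaction rate (~ T^5) falls below the Hubble rate (~ sqrt(G)*T^2)
# at a freeze-out temperature T_f that scales as G^(1/6); the neutron-to-proton
# ratio is then frozen at the Boltzmann factor exp(-Q/T_f).

import math

Q = 1.293       # MeV, neutron-proton mass difference
T_f0 = 0.72     # MeV, assumed standard freeze-out temperature (rough)

def helium_mass_fraction(G_ratio):
    """Approximate Y_p when G is scaled by G_ratio relative to today's value."""
    T_f = T_f0 * G_ratio ** (1.0 / 6.0)   # faster expansion, earlier freeze-out
    n_over_p = math.exp(-Q / T_f)         # equilibrium n/p at freeze-out
    n_over_p *= 6.0 / 7.0                 # crude allowance for neutron decay
    return 2 * n_over_p / (1 + n_over_p)  # nearly all neutrons end up in He-4

for G_ratio in (1.0, 1.1, 1.5):
    print(f"G scaled by {G_ratio}: Y_p ~ {helium_mass_fraction(G_ratio):.3f}")
# A 10% change in G shifts the helium mass fraction by a couple of per cent,
# comparable to the observational uncertainty; hence constraints at roughly
# that level.
```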

The point of which is this: if your thesis requires that generations of scientists have completely missed some idea that, when you sit and think about it, is really pretty frikkin’ obvious — maybe you should do a little homework before using it as a jumping-off point for a rant about the intellectual shortcomings of others.

Ormell goes on to say more silly things. Apparently he has demonstrated that Goedel’s theorem undermines Cantor’s discovery of transfinite numbers. I’m sure the mathematics journals are eagerly awaiting his upcoming papers on this important result.

The irony here is that Ormell is trying to teach us two lessons: we need better mathematics education, and we should be less arrogant. Those are indeed good lessons to learn.

36 thoughts on “Scientists Aren’t Always Complete Idiots”

  1. Pingback: Hard Decisions, Easy Targets « In the Dark

  2. Pingback: A Little Knowledge … « Letters to Nature

  3. In math reductio ad absurdum works because the math exists solely as the sum of currently accepted axioms. To overturn or adjust an axiom you have to do more than produce a novel hypothesis that results in a contradiction (or you have to develop a new branch in which the axioms are consistent and your hypothesis holds, but that’s usually trivial when dealing only with a mathematical abstraction; when observed nature contradicts your axioms, there is an imperative to change the math). In the real world the method doesn’t work that way, because we know we don’t know all the axioms of nature, and science is about discovering them. Math can provide a completely stable platform from which to do science, but it is inevitable that the platform of math will have to be extended at some point, so it can be said that it is not perfect and complete.

  4. Ah, that was a delightful gem to read 🙂 I will sleep well tonight, knowing that humanity can still deliver us such ridiculousness.

  5. Ha! His calculations on uncertainty remind me of how Douglas Adams described the population of the Universe: 0.

    There are a large number of inhabited planets in the Universe, but a much greater number of uninhabited ones; therefore, if you divide the population of the inhabited planets by the total number of planets, you get an average planet population approaching 0 (well, we may as well call it 0) – add up all those 0s and you get 0. Therefore the population of the Universe is 0.

    Nice critique. 🙂

  6. Pingback: How not to criticise scientists « Hyper tiling

  7. To convince us of the small probability that c and G have not changed, he uses—drum roll, please—mathematics (albeit wrongly), which he is trying to criticise. Reminds me of Capra using reductionism to show that quantum mechanics leads to a breakdown of reductionism in the macroscopic world.

  8. There’s nothing more satisfying than reading a devastating smackdown being delivered to someone who desperately needs it. Thanks for this.

    Although I do have one minor thought: using mathematics against itself isn’t such a dumb idea on its face. If you do some math and come up with a nonsensical result… then there’s a problem with the math. Of course, in this case the problem with the math is not so much the underlying math but operator error on the part of the author. But using math to bring about the downfall of math – isn’t that what Goedel’s work was about?

  9. Ormell really needed a firm response from responsible thinkers to his mediocre mockery, and your post hits the bullseye.
    The most ridiculous aspect of his argument was the use of math to defy math itself. It’s funny that he can’t step even an inch outside of logic, yet does not hesitate to question the reliability of logic.
