*Useless Arithmetic: Why Environmental Scientists Can’t Predict The Future*, by O. H. Pilkey & L. Pilkey-Jarvis. Columbia University Press, 2007.

Mathematical models — equations and systems of equations used to predict the behaviour of entities in the real world — seem to be everywhere. They may be described as “mathematical”, “computational” or “quantitative”, and the equations may be displayed on T-shirts or buried within computer codes, but however they’re described, such models are used to describe and predict the behaviour of systems ranging from the global economy to the beaches of Louisiana and the North Sea cod population. As mathematicians we tend to approve of this: the ubiquity of models seems to justify our own existence (witness the home page of our own Department), so it must be a good thing, and one not to be questioned too loudly. But not everybody shares this opinion.

The geologist authors of *Useless Arithmetic*, Orrin H. Pilkey and Linda Pilkey-Jarvis, certainly don’t. In their book, they attack both the use of models in environmental science and the people who develop and deploy these models. Modellers are “a priesthood”, using the prestige of mathematics to lend credibility to predictive methods that aren’t much better than scrutinising the entrails of a chicken. The models, meanwhile, are over-simplified, dangerously easy to tweak to suit the modellers’ paymasters, and shielded by mathematical impenetrability from public scrutiny.

Perhaps I should say not just “the models” but “our models”, because one of the Pilkeys’ particular targets is the modelling of coastal sediment transport, on which I spent much of my own research career. As the recent furore over the Louisiana berms showed, coastal management is an area in which passions run high, uncertainties are rife, and money, sentiment and politics often talk louder than technical expertise. That being the case, my immediate reaction was defensive: wasn’t the Pilkeys’ attack simply the fulminations of those who’ve been left behind by the advance of quantitative science and whose mistrust of mathematics is based on ignorance? This reaction, though, does them and their book a disservice.

The value of *Useless Arithmetic* is that it supplies some highly readable case studies of how mathematical models are misused; in particular when they are misused simply because nothing else will provide a quantitative answer of the kind that courts and clients want. In coastal engineering, such abuses may involve the use of rules (such as the Bruun Rule for cross-shore beach profiles) that are too simple to accommodate natural variation, or they may involve intricate numerical models involving so many fitted parameters that in practice they’re woefully underconstrained. In principle, the mathematical form of these models should make it possible to analyse their deficiencies and uncertainties systematically — and there are some excellent papers (e.g. Eidsvik 2004) which do so. In practice, such analyses are underpublicised: when our employers don’t want uncertainty, it seems, we don’t feel under much obligation to explore it.
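The point about fitted parameters can be made with a toy illustration (pure Python, invented data, nothing to do with any real coastal model): a ten-parameter polynomial passes through ten noisy observations exactly, yet a short extrapolation beyond the data behaves far worse than a humble two-parameter straight line fitted by least squares.

```python
import random

random.seed(1)

# Ten noisy observations of a simple linear trend y = 2x.
xs = [i / 9 for i in range(10)]
ys = [2 * x + random.gauss(0, 0.05) for x in xs]

def lagrange(x, xs, ys):
    """Ten-parameter interpolating polynomial: zero residual at every point."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def least_squares_line(xs, ys):
    """Two-parameter straight-line fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return lambda x: my + b * (x - mx)

line = least_squares_line(xs, ys)

# Both fits look fine within the data; extrapolating to x = 2 does not.
err_line = abs(line(2.0) - 4.0)          # stays small
err_poly = abs(lagrange(2.0, xs, ys) - 4.0)  # noise amplified enormously
print(err_line < err_poly)  # → True
```

The flexible model “explains” the data perfectly precisely because its parameters have soaked up the noise; the moment it is asked to predict, that flexibility becomes a liability.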

Similar problems are identified in the models that underlie fisheries management, groundwater contamination assessment, and invasive species control. The common theme seems to be a failure to appreciate the single most important rule of applied logic: no conclusion is better than its premises, or in computing terms, “garbage in, garbage out”. One might hope that mathematicians would be less inclined to make this error than some of the engineers and consultants who deploy mathematical models, but there are enough cases of hubris to undermine such hope.

Like most polemics, the book is better at identifying problems than at proposing plausible solutions. It’s disappointing that the Pilkeys omit any discussion of meteorology — surely the most mature field of mathematical environmental modelling — and the lessons that could be learned from it. For example, one could usefully discuss the possibilities of model ensembles, where the results from models that employ different data, or different parameterisations of “sub-grid-scale” processes, are compared or combined into a probabilistic forecast. Another lesson to be learned from meteorology would concern the importance of data, and the subtleties involved not merely in collecting them but also in assimilating them into models.
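The ensemble idea can be sketched in miniature (with entirely made-up numbers, not a real forecast): given the outputs of several perturbed model runs, one reports a distribution rather than a single deterministic value.

```python
import statistics

# Hypothetical forecasts of tomorrow's peak temperature (deg C) from an
# ensemble: the same model run with perturbed initial data and with
# different sub-grid-scale parameterisations.
ensemble = [14.2, 15.1, 13.8, 14.9, 15.6, 14.4, 13.5, 15.0]

# Instead of one number, report a central estimate, a spread, and the
# fraction of ensemble members exceeding a threshold of interest.
mean = statistics.fmean(ensemble)
spread = statistics.stdev(ensemble)
p_above_15 = sum(t > 15 for t in ensemble) / len(ensemble)

print(f"forecast: {mean:.1f} +/- {spread:.1f} C, "
      f"P(T > 15 C) ~ {p_above_15:.0%}")
```

The honesty of the spread is the point: a wide ensemble tells the user how little the model knows, which is exactly the information a single deterministic prediction conceals.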

The other flaw is that the Pilkeys rarely turn their scepticism fully on the alternatives to mathematical modelling. “Qualitative models” and “engineering judgement” have their merits, and perhaps are not subject to the same abuses as quantitative models; but this doesn’t mean that they are particularly accurate themselves, and it is harder, not easier, to scrutinise the assumptions behind them. In perhaps the most disappointing lapse of scepticism, having earlier poured scorn on “hindcasting” as a means of model validation, the authors praise William Gray of Colorado State University for developing seasonal hurricane predictions based on a statistical correlation with West African rainfall data — a correlation that was never explained, and that ceased to be reliable in 1995 for equally unexplained reasons. Apparently he now has a new prediction method, but without an understanding of why it should work, how much trust can we put in it? Given a choice of mysteries, I generally prefer the more mathematical one: the Pilkeys prefer the less.

These flaws aside, *Useless Arithmetic* is worth reading, perhaps in proportion to the professional discomfort that it causes. The point of mathematics is supposed to be that it wears its logic openly; that it relies on consistent reasoning and evident assumptions throughout. If, as in several of the case studies quoted by the Pilkeys, “mathematical” modelling loses these mathematical virtues, then it does indeed become “useless arithmetic”. Equally, though, there would be no sense in rejecting mathematical modelling outright as the dogma of a perverted priesthood. That way lies the barren “scepticism” of so many climate contrarians, and for that matter of the circle-squarers, flat-earthers and conspiracy theorists through the centuries.

“Guns don’t kill people”, the slogan runs, “people do. Don’t ban guns: ban people!” Perhaps analogously, the misuse of mathematical models is due to a flaw in people: our desire for more certainty about the world than we can ever possess. The difference is that laying down mathematical models would leave our society far more helpless than laying down guns could ever leave the NRA. As mathematical modellers, we shouldn’t be downing tools in despair at the Pilkeys’ criticisms, but we should accept the duty that they leave us. All models have limitations: as experts in the language in which these models are written, we need to ensure that the users and the designers of models don’t allow that language to obscure those limits.

(DP)

On a very similar theme, BBC Radio 4 has recently broadcast a programme looking at the role of mathematical models (specifically the Black-Scholes formula for option trading) in precipitating recent financial crises. There’s a summary on the BBC News website, and you can also download the programme as a podcast.
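For readers who haven’t met it, the Black-Scholes formula gives the price of a European call option in closed form from the spot price, strike, time to expiry, risk-free rate and volatility. A minimal sketch (the numbers at the end are illustrative only):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call: spot S, strike K,
    time to expiry T (years), risk-free rate r, volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# A one-year at-the-money call, 5% rate, 20% volatility.
print(round(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2), 2))
# → 10.45
```

Of course, the formula is only as good as its premises — constant volatility above all — and the programme’s argument is precisely about what happens when those premises fail.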

Ironically, the Black-Scholes model is probably the best and most predictive mathematical model that the social sciences have ever produced. Black-Scholes is modelling done right.

I don’t have any problem with mathematical models. I have a problem with mathematical modelers! The ultimate fact is that most mathematical models are shit. The problem with modelers is that they pass off shit as sugar. And when you point out it’s shit they tell you “well, it’s the best we have”. This is what makes me furious.

Is there by any chance an unfortunate personal experience lurking behind these generalisations? Yes, most mathematical models are wrong (as are most models of any kind, quantitative or not), but if you’ve been dealing exclusively with modellers who refuse to accept this then I suspect you’ve been spending time in an unusually charlatan-laden field of study. Nothing about the ability to manipulate equations prevents somebody from being a BS merchant, but my own experience is that the more someone actually knows about mathematical modelling, the less likely they are to lie to you about their own model and its limitations.


I assume the header is a fractal image.

The motto should really be “approximation in, approximation out”. Meteorology is a good example of the limitations imposed by the accuracy of the input data, and of the fact that you can get usable results by continually updating the data and re-running the model. The difference with most other models of dynamic situations is that they’re run once (perhaps with a range of input data to check the sensitivity of the results to different parameters); after that, people just wait to see whether reality matches the predictions or not, despite the fact that everyone knows that accuracy diminishes the further forward you look.

Static problems are different. As an engineer I’ve spent most of my life making decisions based on insufficient information. For example, what will be the loading on an office floor over the next 120 years? Answer: no idea. But with a judicious use of statistical analysis of loadings, safety factors, and revisiting situations when circumstances change you can nevertheless devise viable solutions.

Thanks for providing an engineer’s perspective on this! The header is indeed a fractal image (part of a Julia set) — see the “About” page for more information…
