## When it all goes horribly wrong: the mathematics of natural disasters

Here’s a scene you never see in the movies. A volcano has erupted in the middle of the city; lava is pouring down the street; tall buildings are shuddering as the ground turns to quicksand; and the mayor is yelling down the phone “Get me a mathematician, now!”

In fact, mathematicians, and the tools we design, will already be at work. For example, when an earthquake strikes anywhere in the world, computers use the latest inverse methods to process seismic data and pinpoint the source (see this simplified account from the US Geological Survey). If the quake is underwater, fluid dynamical models are used to predict whether a tsunami will result. Thanks to modern numerical analysis, these models are fast enough to allow beaches to be evacuated before the wave strikes. Meanwhile, the emergency services are deploying on the basis of statistical hazard assessments, and using insights from graph theory and queueing theory to plan how to restore power and communications. Mathematicians may not be on the front line, but without us the aftermath of disasters would be far worse.

Although these topics lie at the frontiers of research, during your degree you will meet many of the ideas behind them.

One theme you may meet is wave theory: how oscillations travel through media such as the ocean or the solid earth. Often, energy travels away from the source at a particular speed c: calculating how c depends on the medium is hard, but once we know it we can draw simple conclusions. For example, the speed of “primary” seismic waves through the earth’s crust is given by

$c = \sqrt{\displaystyle\frac{K+\frac{4}{3}\mu}{\rho}}$,

where $\rho$ is the density of the rock and where K and $\mu$ measure the stiffness of the crust. Crucially, c does not depend on the shape or the size of the waves. In places like California, where the crust is young and warm, K and $\mu$ are small, so even huge earthquakes do not travel very far or fast. In places like the eastern US, where the crust is colder and stiffer, waves can travel much further and faster — so the recent Virginia earthquake was felt as far away as Canada.
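The formula is easy to evaluate. Here is a quick sketch, using illustrative values for K, $\mu$ and $\rho$ (roughly the right order of magnitude for continental crust, but not measurements from any particular place):

```python
import math

# Illustrative values, NOT real measurements:
K = 50e9      # bulk modulus, in pascals
mu = 30e9     # shear modulus, in pascals
rho = 2700.0  # rock density, in kg/m^3

# Speed of "primary" seismic waves: c = sqrt((K + (4/3)*mu) / rho)
c = math.sqrt((K + 4.0 * mu / 3.0) / rho)
print(f"P-wave speed: {c / 1000:.1f} km/s")
```

Notice that doubling the stiffnesses K and $\mu$ multiplies c by about 1.4, while the shape and size of the wave never enter the calculation at all.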

We can also understand why some tsunamis do so much damage. Suppose that an earthquake releases a given amount of energy E at a single point, causing a tsunami. The wave then spreads out in an expanding circle at speed c (now given by a different formula from that for seismic waves). After a time t this circle has radius r = ct: the energy is shared around the entire circumference, so it must be spread around a distance $2\pi r$. The energy density, which is related to the wave size, is proportional to $E/(2\pi r)$ — so the further away we are, the safer we are.

But now suppose that the energy is released along a line rather than at a point, as in the 2004 “Boxing Day” tsunami. Instead of spreading out as an expanding circle, such waves travel perpendicularly to the original line. The wavefronts do not lengthen with distance from the source, so the energy density, and hence the wave size, does not decrease: this is why Sri Lanka and India were so badly affected in 2004.
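The contrast between the two geometries can be sketched in a few lines of code. The numbers here are arbitrary (E = 1, and an illustrative open-ocean wave speed); only the comparison matters:

```python
import math

E = 1.0    # released energy, arbitrary units
c = 200.0  # wave speed in m/s (illustrative, not a measured value)

def point_source_density(t):
    """Point source: after time t the energy is shared around a
    circle of radius r = c*t, i.e. a circumference of 2*pi*r."""
    r = c * t
    return E / (2 * math.pi * r)

def line_source_density(t):
    """Line source: the wavefront stays the same length, so the
    energy per unit length of front never decreases."""
    return E  # per unit length of the original line

for t in (60.0, 600.0, 6000.0):
    print(f"t = {t:6.0f} s:  point source {point_source_density(t):.2e},"
          f"  line source {line_source_density(t):.2e}")
```

Each tenfold increase in travel time cuts the point-source energy density by a factor of ten, while the line-source density stays put.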

Another theme, from a completely different area of maths, is risk assessment. For example, given the record of eruptions in Iceland over the last two centuries, how likely is it that another eruption will affect the British Isles in the next twenty years? The problem is that the events we are interested in are much larger than “average” and also very rare. You are probably familiar with the normal distribution, with probability density

$f(x) = \displaystyle\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\displaystyle\frac{(x-\mu)^2}{2\sigma^2}\right)$

where $\mu$ and $\sigma^2$ are the mean and variance. If a quantity is normally distributed, the chance of a very large event (x much greater than $\mu$) decreases rapidly as x increases: the chance that $x > \mu+3\sigma$ is roughly 0.13%, while the chance that $x > \mu + 4\sigma$ is roughly 0.0032%.
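These tail probabilities can be checked directly: integrating the normal density from $\mu + k\sigma$ to infinity gives $\tfrac{1}{2}\operatorname{erfc}(k/\sqrt{2})$, which Python's standard library can evaluate:

```python
import math

def normal_tail(k):
    """P(X > mu + k*sigma) for a normally distributed X,
    via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2))

print(f"P(x > mu + 3 sigma) = {normal_tail(3):.4%}")   # roughly 0.13%
print(f"P(x > mu + 4 sigma) = {normal_tail(4):.5%}")   # roughly 0.0032%
```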

In practice, the statistics of many events, including natural disasters, follow “heavy-tailed” distributions where the chance of a big event drops off much more slowly. For example, in a Pareto distribution the probability density is

$f(x) = \alpha\displaystyle\frac{x_m^{\alpha}}{x^{\alpha+1}}$ for $x \geq x_m$,

where $\alpha$ and $x_m$ are related to the mean and variance. If, for example, we take $\alpha = 3$, we find that the chance that $x > \mu+3\sigma$ is roughly 1.4% (ten times larger than before); the chance that $x > \mu + 4\sigma$ is roughly 0.82% (over 250 times larger than before). Choosing the correct probability distribution for hazard forecasting is a difficult task for statisticians; persuading governments that these “technical” details are crucial can be harder still!
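Again we can check the arithmetic. The sketch below uses the standard formulas for the mean and variance of a Pareto distribution (valid when $\alpha > 2$) together with its survival function $P(X > x) = (x_m/x)^{\alpha}$; the scale $x_m$ cancels out of the final probabilities:

```python
import math

alpha = 3.0
x_m = 1.0  # scale parameter; the tail probabilities don't depend on it

# Standard Pareto mean and standard deviation (requires alpha > 2):
mu = alpha * x_m / (alpha - 1)
sigma = (x_m / (alpha - 1)) * math.sqrt(alpha / (alpha - 2))

def pareto_tail(x):
    """P(X > x) for a Pareto distribution: (x_m / x)**alpha."""
    return (x_m / x) ** alpha

print(f"P(x > mu + 3 sigma) = {pareto_tail(mu + 3 * sigma):.2%}")  # about 1.4%
print(f"P(x > mu + 4 sigma) = {pareto_tail(mu + 4 * sigma):.2%}")  # about 0.8%
```

The heavy tail is plain to see: events four standard deviations above the mean, vanishingly rare under the normal distribution, happen nearly one time in a hundred here.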

So, next time someone asks you what use maths is, tell them you can use it to understand earthquakes and decide whether volcanoes are worth planning for. You may get some sceptical looks, but admit it… doesn’t it sound more interesting than accountancy?

(DP)