Featured image: a scientist buries a seismic station to record seismic waves in the area of Lake Baikal, Russia. © Anastasiia Starikova.
Earthquakes have the potential to be incredibly damaging, both through their primary effects and their secondary ones, including landslides, liquefaction, tsunamis and collapsing buildings. The 2004 ‘Boxing Day’ tsunami in the Indian Ocean killed over 200,000 people, while the 2011 Tohoku earthquake and subsequent tsunami caused an estimated $360bn in damage. It is therefore crucial to predict earthquakes – their precise time, location and magnitude – as far in advance as possible to mitigate their impacts.
First, one must understand what seismologists are looking for when they try to predict an earthquake. An earthquake is the sudden release of elastic energy stored in the Earth’s crust. Convection currents in the mantle cause tectonic plates to move; as they move towards, away from or alongside one another, stress builds up along their boundaries. Once the stress exceeds the strength of the crust, it is released and an earthquake occurs. It is much like dragging a high-friction object, such as a brick, with an elastic band: at first the brick does not move, but eventually it jolts forward, and the process begins again.
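The brick-and-elastic-band analogy can be sketched as a toy ‘spring-slider’ simulation. All of the numbers here (pull rate, spring stiffness, crust strength) are invented for illustration, not measured values:

```python
# A minimal stick-slip sketch of the brick-and-elastic-band analogy.
# The parameters (pull_rate, stiffness, strength) are illustrative only.

def stick_slip(steps=100, pull_rate=1.0, stiffness=0.5, strength=20.0):
    """Pull a block via a spring; return the step number of each slip event."""
    stretch = 0.0  # spring extension, i.e. stored elastic energy
    events = []
    for step in range(steps):
        stretch += pull_rate                # stress builds steadily...
        if stiffness * stretch > strength:  # ...until it exceeds the strength
            events.append(step)             # an "earthquake": stress released
            stretch = 0.0                   # and the cycle begins again
    return events

print(stick_slip())  # → [40, 81]: slips recur once enough stress accumulates
```

In this idealised version the slips are perfectly periodic; real faults are not, which is precisely the difficulty the rest of the article describes.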
The History of Prediction
Past views of how to predict earthquakes were, succinctly, that it was impossible. Charles Richter was a titan of the field, who most notably created the Richter scale for measuring earthquake magnitude. In 1977, he said, “Journalists and the general public rush to any suggestion of earthquake prediction like hogs towards a full trough… prediction provides a happy hunting ground for amateurs, cranks, and outright publicity-seeking fakers.” Furthermore, in 1997 Robert Geller published a paper titled Earthquakes Cannot Be Predicted, in which he argued that too many factors influence the origins of earthquakes for them to be predicted with any degree of accuracy.
Some believe that this pessimism has resulted in a lack of funding and resources for efforts at earthquake prediction. If not for this outlook, prediction technology might have been better funded and researched, and more positive results may have been yielded – insofar as it is possible to predict earthquakes.
As of now, earthquake predictions are limited in scope and accuracy, and therefore in utility. Immediately before earthquakes there are often small tremors, but these provide only minutes of warning. They can still be useful for tsunamis, however, as the time the waves take to reach shore can allow enough time for evacuation.
Seismologists can calculate the odds that an earthquake will strike a broad geographic area over the long term; for example, there is a 67% chance of a high-magnitude earthquake in the San Francisco Bay Area within the next 30 years. This information, while useful for insurance providers, does little to enhance the safety of residents. Even these modest designations are not always correct; the aforementioned Tohoku earthquake struck an area that had been categorised as ‘safe’.
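To see why such long-term odds say little about any given year, one can convert the quoted 30-year figure into an annual probability. The sketch below assumes earthquakes arrive as a Poisson process, a common simplification in hazard statistics, and is illustrative rather than how any particular agency computes its forecasts:

```python
import math

# Illustrative: turn "67% chance in 30 years" into an equivalent annual
# probability, assuming a Poisson process (P = 1 - e^(-rate * years)).

def annual_probability(p_long_term, years):
    rate = -math.log(1 - p_long_term) / years  # implied annual event rate
    return 1 - math.exp(-rate)                 # chance of >=1 event in one year

p = annual_probability(0.67, 30)
print(f"{p:.1%}")  # → 3.6%: a small chance in any single year
```

A roughly 1-in-28 chance each year is real but unactionable for residents, which is the article’s point about the limits of long-term forecasting.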
Indeed, as Robert Geller argued, the difficulty of predicting earthquakes is inherent, not merely a matter of poor technology or science. Firstly, it is hard to measure disturbances so far below ground, as earthquakes can occur as deep as roughly 700km below the surface. Predicting the exact nature of an earthquake (i.e. its time, magnitude and location), rather than a binary of whether one happens or not, is even more difficult due to the vast number of contributing factors, including, but not limited to, the direction and position of the fault and the level of stress on the rock.
One popularised theory is that strange animal behaviour can be used to predict earthquakes. Anecdotes dating back to 373 BC suggest that animals leave an area weeks before earthquakes occur. Some have suggested that they migrate in response to foreshocks too small for humans to notice. This is unlikely: modern instruments can detect tremors far smaller than anything a human or animal could perceive, yet no such precursory signals have been tied to these anecdotes. Furthermore, even if such foreshocks were present and detectable, they would still predict fairly little: a large foreshock does not imply a large earthquake, and vice versa. Regardless, a 2018 study covering 130 species found no conclusive evidence for the animal theory.
Another, more credible hypothesis is that the release of radon from the ground correlates with earthquakes: before a main rupture, radon gas sometimes seeps from smaller precursory fractures in the rock. However, radon emissions are too common for the relationship to be used for prediction; that is, earthquakes are often preceded by radon emissions, but radon emissions are not necessarily followed by earthquakes. Another theory focused on electromagnetic waves produced prior to earthquakes, but, as with radon, the relationship has not proved reliably indicative.
While predicting initial earthquakes has proved difficult, the prediction of aftershocks has been more productive. This is far from an unimportant exercise: despite the term ‘aftershock’ suggesting reduced severity, aftershocks can be devastating in their own right. The 2011 Christchurch earthquake, magnitude 6.3, was in fact an aftershock occurring six months after the original quake; 185 people died and it caused an estimated $20–30bn in damage. Aftershocks also have some predictable qualities: ten days after an earthquake, the frequency of aftershocks falls by a factor of ten; after 100 days, by a factor of 100. The magnitude of aftershocks, however, does not decay in the same way. This pattern gives useful indications of the likelihood of an aftershock on a given day.
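The decay pattern described above (frequency down by a factor of 10 after 10 days, by 100 after 100 days) is Omori’s law: aftershock frequency falls roughly in proportion to 1/t. A sketch, where the day-one rate is an assumed baseline rather than a real figure:

```python
# Omori's law (in its simplest form): aftershock frequency decays as 1/t.
# rate_day1 is an assumed, illustrative baseline, not an observed value.

def aftershock_rate(t_days, rate_day1=1000.0):
    """Expected aftershocks per day, t_days after the mainshock."""
    return rate_day1 / t_days

print(aftershock_rate(10))   # → 100.0, i.e. down by a factor of 10
print(aftershock_rate(100))  # → 10.0, i.e. down by a factor of 100
```

Note that this only forecasts how *many* aftershocks to expect on a given day, not how large any individual one will be, which is why Christchurch-style surprises remain possible.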
A reliable method to predict earthquakes would be the biggest discovery in the recent history of earth sciences. The destruction wrought by earthquakes is enormous: approximately 20,000 people die annually from earthquakes, and every large earthquake costs an average of $2.1bn in recovery. This could be drastically reduced with the right technology.
Currently, hopes rest on machine learning. Research in America using AI pattern recognition to study earthquakes is ongoing. The computers measure the seismic energy radiated by motion along tectonic faults, then separate out the acoustic emissions and, hopefully, recognise trends. The research also uses small-scale rock samples to study friction, and therefore how much pressure builds up before a slide (an earthquake), much like the brick and elastic band. These laboratory tests have shown parallels to the actual situation at fault zones. To gather new evidence, the researchers travelled to Vancouver Island to study slow slip events, essentially earthquakes that unfold over weeks rather than seconds. They ‘trained’ the algorithm on 20 slip events and had the computer predict others; the predictions closely matched the real events. However, there are questions over whether this can be scaled up, given the cost and complexity of the machinery. There is also not yet enough data to be sure of its genuine predictive capabilities, but it is certainly promising.
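The core idea of the lab experiments can be caricatured in a few lines: a statistical feature of the acoustic signal (here, its variance) grows as failure approaches, and a model is fitted to map that feature to the time remaining before the slip. Everything below is synthetic and invented for illustration; the actual research used real acoustic data and far richer models:

```python
import numpy as np

# Toy sketch: synthetic "acoustic emission" whose variance rises as a
# lab fault nears failure, plus a least-squares fit mapping that variance
# to time-to-failure. All data and parameters are fabricated.

rng = np.random.default_rng(0)
time_to_failure = np.linspace(10.0, 0.1, 200)    # seconds until the slip
signal_variance = 1.0 / time_to_failure          # variance grows near failure
signal_variance = signal_variance + rng.normal(0, 0.01, size=200)  # noise

# Fit time_to_failure ≈ a * (1 / variance) + b by ordinary least squares
X = np.column_stack([1.0 / signal_variance, np.ones(200)])
coeffs, *_ = np.linalg.lstsq(X, time_to_failure, rcond=None)

predicted = X @ coeffs
print(f"mean absolute error: {np.mean(np.abs(predicted - time_to_failure)):.2f} s")
```

The sketch works only because the synthetic signal was built to encode time-to-failure; the open question the article raises is whether real faults, observed through noisy field instruments, encode it as cleanly.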