The saying “Prediction is very difficult, especially about the future” is often attributed to the physicist Niels Bohr. Indeed, when it comes to the economy, forecasting may be even harder than in Bohr’s specialty of quantum mechanics.
Economic forecasters have not distinguished themselves in recent years – or, really, ever – with their ability to foresee events. As Adair Turner pointed out, “Modern macroeconomics and finance theory failed to provide us with any forewarning of the 2008 financial crisis.” The blindness persisted even as the crisis was unfolding: according to a study by IMF economists, the consensus of forecasters in 2008 was that not one of the 77 countries considered would be in recession the next year (49 of them were).
Traditionally, these difficulties have been blamed not on the complexity of the world economic system, but on its efficiency. According to Eugene Fama’s Efficient Market Hypothesis, markets are drawn to a stable, optimal equilibrium. Price changes are the result of rational investors reacting to random news that, by definition, cannot be predicted. However, it is still possible in theory to make probabilistic predictions: risk analysis tools such as Value at Risk (VaR) are used to estimate how much a portfolio stands to lose at a given confidence level.
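As a rough illustration of how such tools work, here is a minimal sketch of a one-day parametric (variance–covariance) VaR calculation in Python. The normality assumption, the simulated returns, and the portfolio value are all hypothetical choices for the example, not figures from the text:

```python
import numpy as np
from scipy.stats import norm

def parametric_var(returns, portfolio_value, confidence=0.99):
    """One-day parametric (variance-covariance) Value at Risk.

    Assumes daily returns are normally distributed -- the same
    equilibrium-style assumption the article goes on to criticise.
    """
    mu = np.mean(returns)
    sigma = np.std(returns, ddof=1)
    # Return at the (1 - confidence) quantile of the assumed distribution
    worst_return = norm.ppf(1 - confidence, loc=mu, scale=sigma)
    # VaR is quoted as a positive loss amount
    return -worst_return * portfolio_value

# Hypothetical example: simulated daily returns for a $1m portfolio
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0005, 0.01, size=1000)
print(f"99% one-day VaR: ${parametric_var(daily_returns, 1_000_000):,.0f}")
```

The number this produces is only as good as the distributional assumption behind it, which is exactly the weak point exposed in 2008.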
The idea that the Great Financial Crisis was a shining example of perfect market equilibrium will seem counterintuitive to many. But the theory does give an excuse for all those missed forecasts. In fact, they just confirm its correctness. According to Fama, the efficient market hypothesis “did quite well in this episode”, and when asked if the crisis would lead to any changes in economics, he replied: “I don’t see any.” This impression was no doubt backed up when his work was awarded the economics version of a Nobel Prize in 2013.
Crisis of prediction
There has been little interest, at least from the mainstream, in reforming or replacing the fundamental tenets of economics. But what if prediction error is due not to things such as stability, rationality, and efficiency, but to their opposites?
At the start of the last century, physics faced an even more serious crisis of prediction. Physicists knew that as an object – say, the filament in a light bulb – was warmed, it would radiate energy, some of it as visible light. But classical theory predicted that the radiated energy would pile up without limit at ever-shorter wavelengths, so that anyone turning on the light would be instantly annihilated – the so-called ultraviolet catastrophe.
The dominant theory of the atom was Rutherford’s ‘solar system’ model, in which a number of negatively charged electrons circle round a central, positively charged nucleus, like planets around the sun. But according to Maxwell’s equations, the circulating electrons would produce an electromagnetic wave, just as electrons in an antenna create a radio signal. They would radiate away their energy, slow down, and smash into the nucleus, all within less than a billionth of a second.
These problems were solved by going back to the drawing board and questioning the basic assumptions of physics. The German physicist Max Planck found that he could correctly model the radiation from glowing objects so long as he assumed that the energy of light could only be transmitted in discrete units, which he called quanta, rather than over a continuous range. And Bohr came up with a quantised version of the atom, in which attributes such as the electron’s angular momentum could only take discrete values – whole-number multiples of a basic unit.
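For readers who want the formulas (standard textbook notation, not from the original article), the contrast between the classical prediction and Planck’s fix can be written as:

```latex
% Classical Rayleigh-Jeans law: spectral radiance grows without
% bound as the wavelength shrinks -- the ultraviolet catastrophe.
B_\lambda(T) = \frac{2 c k_B T}{\lambda^4} \;\xrightarrow{\;\lambda \to 0\;}\; \infty

% Planck's law: quantising light in units of E = h\nu introduces an
% exponential cut-off that tames the short-wavelength divergence.
B_\lambda(T) = \frac{2 h c^2}{\lambda^5} \, \frac{1}{e^{hc/\lambda k_B T} - 1}
```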
Rather than being smooth and continuous, it seemed, the universe was jumpy and discrete. And according to Heisenberg’s uncertainty principle, you could never measure both the exact position and momentum of an object – only the probability that it was in a certain state. Quantum mechanics therefore used probabilistic wave functions to describe the state of matter at the level of the atom.
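In symbols (standard notation, my addition), Heisenberg’s relation bounds how precisely position and momentum can be known at the same time:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```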
Since economics had long modelled itself after Newtonian physics, one might expect that the quantum revolution would have led to a fundamental change in economics. And in a way, it did. Unfortunately it was in the wrong direction.
Going random
The success of quantum physics, and in particular its role in nuclear weapons, meant that funding poured into weapons laboratories and universities around the world to develop new techniques for analysing probabilistic systems.
Some of this effort spilled over into economics and finance. Probabilistic techniques such as Monte Carlo simulation, now used routinely in financial modelling, had their genesis in atomic physics. The famous Black-Scholes equation for option pricing can even be rephrased as a probabilistic wave function.
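As a sketch of that lineage, the following Python example prices a European call option by Monte Carlo under the standard Black-Scholes assumptions of geometric Brownian motion; all parameter values are hypothetical:

```python
import numpy as np

def mc_call_price(s0, k, r, sigma, t, n_paths=100_000, seed=0):
    """Monte Carlo price of a European call option, assuming the asset
    follows geometric Brownian motion -- the Black-Scholes world."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal asset price under the risk-neutral measure
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    # Call payoff at expiry, discounted back to today
    payoff = np.maximum(st - k, 0.0)
    return np.exp(-r * t) * payoff.mean()

# Hypothetical parameters: spot 100, strike 105, 5% rate, 20% vol, 1 year
print(f"MC call price: {mc_call_price(100, 105, 0.05, 0.2, 1.0):.2f}")
```

With these inputs the simulated price converges on the closed-form Black-Scholes value (about 8.02), which is the point: the random sampling machinery and the equilibrium pricing model are two halves of the same inheritance.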
Unfortunately, all of these techniques failed during the crisis. Part of the reason is that the shift to probabilistic prediction took exactly the wrong lesson from quantum physics. Instead of fundamentally questioning the assumptions of economics, which were always based on a naively mechanistic view of human behaviour, economists just patched a random element onto their existing model of a stable, optimal system. The result was the same as if physicists had merely tinkered with their ‘solar system’ model of the atom, instead of completely rethinking it.
Today, instead of the ultraviolet catastrophe, economics faces the opposite: all the lights have gone out, and no one can find the switch. The field needs a proper quantum revolution of its own. A first step is to acknowledge the quantum nature of money.
Just like subatomic particles, money has dualistic properties that elude exact prediction – and because the economy is based on money, prices are fundamentally unpredictable as well. Plus, people are involved, and they are even less reliable in their behaviour than electrons. Of course, this will mean completely rethinking old ideas, which no one likes to do. But, again, there is a bright side: it’s an even better excuse for forecast error.