We can blame ‘the butterfly effect,’ writes David Orrell, but the truth is we’re just very bad at predicting the future
The Canadian ice hockey player Wayne Gretzky once said, “A good hockey player plays where the puck is. A great hockey player plays where the puck is going to be.”
Businesses and societies try to perform a similar trick, through forecasting. We seem to have a genetic urge to look into the future, to see around the corner, to guess where the puck is going. In fact even simple bacteria have genetic networks that are used to predict the location of food, and move towards it.
Traditionally the domain of religions, astrologers, and mystics, the role of prognosticator-in-chief has now passed to scientists, who use highly complex mathematical models to predict the weather, the spread of diseases, the economy, and much else – positioning ourselves better for whatever is coming up.
Unfortunately, their track record is less impressive than Gretzky’s.
Consider for example the UK Met Office, which last year decided to suspend its seasonal forecasts after predicting a “barbecue summer,” which never ignited, followed by a “mild winter,” which turned out to be the coldest in three decades.
Admittedly, the Met Office did say that its inherently probabilistic seasonal forecasts were not suitable for “decision making.” And if it is any consolation, the track record in other areas is no better.
There was the widely publicised swine flu pandemic in 2009, which never panned out – good news for everyone, including the drug companies that sold massive stocks of unused vaccine. Or the credit crunch of 2007-8, which bit harder and deeper than the scores of highly paid professionals who work in the financial sector had ever foreseen.
How many businesses have planned for a barbecue summer, only to end up in a snow drift? And don’t even get us started on natural disasters. So why is it that, despite the vast increases in computational power and data quality over the past few decades, we still seem to be no better at prediction?
Complex mechanisms; simple models
Weather forecasters have traditionally explained away their inability to predict more than a few days ahead by pointing to the butterfly effect. Fed Chairman Ben Bernanke used the same principle in a speech to explain why even the most elaborate economic models consistently get it wrong: “In a sufficiently complex system, a small cause – the flapping of a butterfly’s wings in Brazil – might conceivably have a disproportionately large effect – a typhoon in the Pacific.” Or a world recession.
However, while the world may seem a little unstable, it’s not that unstable. As I argue in my book The Future of Everything: The Science of Prediction, the butterfly effect is a myth – when butterflies flap their wings, the perturbation to the atmosphere quickly dies out (to get an idea, try waving your hands a foot in front of your face – does it feel like the effect is growing exponentially?). A simpler (if less flattering for forecasters) explanation for forecast error is that mathematical models are just not very good at simulating complex dynamical systems like the atmosphere, the human body, or the economy.
Such systems are dominated by emergent features, which by definition cannot be reduced to a simple set of equations. For example, we know a lot about the components of a cloud – air, small particles, and water vapour – but we still can’t produce one on a computer. Similarly, we can’t predict in advance the emergent behaviour of a group of humans when they gather in a market to buy or sell stocks; or the emergent behaviour of a few pig genes when they are packaged together in a new swine flu virus; or the emergent effect of a new technology on a business sector.
Complex, organic systems are also characterised by interlocking feedback loops, which allow for rapid but regulated response, but also make models prone to wandering off in the wrong direction. One could say that the systems have evolved in such a way that they elude practical prediction.
Our proven inability to predict obviously has implications for how we make policy or conduct business. Forecasts may not be suitable for making decisions, but that is exactly what we have to do. The only alternative is to react to events after they have occurred. So how should we plan our businesses or regulate economies, when forecasts often seem to become invalid almost as quickly as they are made?
One approach is known as scenario forecasting. This was first developed during the Cold War to work out different scenarios in the event of a nuclear conflict. Its use in business was pioneered by Shell, who credited it with preparing them for the oil price shocks of the 1970s, and it is becoming increasingly widely used.
Usually a small number of scenarios, such as two to four, are chosen to represent extreme cases. This helps separate the scenarios from each other (and also accounts for the fact that the future often does turn out to be extreme). As Adam Gordon points out in his book Future Savvy, the goal of scenarios is not so much to make accurate predictions, but rather to “reach for the storytelling, narrative tradition in human cognition, which is an ancient way of capturing the imagination, educating ourselves and others, and grappling with the complex tapestry of life.”
By opening our minds to different possibilities, scenario forecasting can help an organisation choose strategies to account for possible future developments or unexpected shocks. And there, we can take some lessons from biology. Species that cannot survive the occasional extreme shock tend to be filtered out by evolution. That is why organisms such as ourselves tend to have a high degree of redundancy – such as our two kidneys – and evolve sophisticated immune systems. Our bodies have been doing their own version of scenario forecasting for a while, and we can learn from it.
Unlike Wayne Gretzky, we may not be able to position ourselves perfectly for the future – but at least we can improve our chances that we’ll be around to witness it.