Are transport forecasting models accurate enough?

While many predictions are made about the future of the economy, finding a reliable transport forecasting model can prove a logistical nightmare. No matter how well researched, a forecast is still just that – a prediction.

Author: David Orrell
February 12, 2014

A frequent topic of this column has been the difficulty – or impossibility – of predicting the future state of the economy. This is true for macroeconomic quantities such as inflation or GDP, but it also applies to things like predicting the economic benefits of particular projects. A good example is transport forecasting.

Forecasters have long been building sophisticated mathematical models of the transport system, which are supposed to predict everything from the flow of traffic in a city in a decade’s time, to the impact of an individual project such as building a bridge or a railway.

Overestimating the figures
While such models are useful tools for thinking about the transport system, their results when compared with outcomes show that accurate transport forecasting remains elusive. A 2006 study by the team of Danish economic geographer Bent Flyvbjerg – who is currently at Oxford University’s Saïd Business School – looked at over 210 projects in 14 countries, and found striking discrepancies between passenger forecasts and measured results.

For rail projects, passenger numbers were overestimated in 90 percent of cases, with an average overestimation of 106 percent. Transport forecasts were more accurate for road projects, but half had a difference between actual and forecasted traffic of more than +/-20 percent, and in a quarter of cases the difference was more than +/-40 percent. Nor had forecast accuracy improved with time, or with more advanced models and greater computing power.

The forecast error is due to a number of factors. For rail projects, it seems that politics is important – passenger demand is overestimated because stakeholders want the project to go ahead. A safe prediction, based on past experience, is that projected numbers for England’s HS2 high-speed rail will turn out to have been significantly overestimated.

Road projects do not show the same systematic bias, so the error is more likely due to model limitations, such as inaccurate estimates of trip generation (based on incomplete data) and land-use development (based on uncertain plans and projections), as well as phenomena such as ‘assumption drag’ – maintaining assumptions even after they have lost their validity.

Assumption drag
As an example of assumption drag, Flyvbjerg et al noted that in Denmark, the energy crises of 1973 and 1979 led to increases in petrol prices and decreases in real wages. As a result, traffic declined markedly for the first time in decades. Believing that the trend would continue, Danish traffic forecasters adjusted their models accordingly. Instead, once the effects of these shocks had worn off, traffic boomed again in the 1980s, ‘rendering forecasts made on 1970s assumptions highly inaccurate.’

Of course, it isn’t just transport forecasters who are affected by assumption drag. As an example, the figure compares the price of oil with forecasts from the US Energy Information Administration (EIA), which are based on their World Oil Refining, Logistics, and Demand (WORLD) model. In the 1980s and 1990s, the forecasts consistently overestimated the price of oil, probably because the model similarly retained a memory of the 1970s energy crises (assumption drag).

The forecasts eventually learned that prices were not going to return to previous levels, and flattened out; but almost as soon as they did, prices spiked briefly to $147. As with travel forecasts, huge improvements in modeling and computational abilities over almost 30 years have had little impact on predictive accuracy.

As Flyvbjerg et al note, the lack of progress in predictive accuracy in recent decades suggests that ‘the most effective means for improving forecasting accuracy is probably not improved models but instead more realistic assumptions and systematic use of empirically based assessment of uncertainty and risk.’ So what alternatives exist, and what is the role of models?

One alternative to traditional model-based forecasting is the method known as ‘reference class forecasting’, which was developed in the 1970s by Daniel Kahneman and Amos Tversky to compensate for the cognitive biases that affect economic forecasting. Given a particular project, the first step is to identify a reference class of similar completed projects; the second is to establish, from the outcomes of that class, a probability distribution for whatever is being predicted, such as passenger demand; and the third is to position the new project within that distribution and adjust its forecast accordingly.
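The steps above can be sketched in a few lines of code. Everything here is illustrative rather than real data: the ratios, the forecast figure, and the helper function are hypothetical, chosen only to show the mechanics of adjusting a forecast against the empirical distribution of a reference class.

```python
# Reference class forecasting, sketched with hypothetical data.
# For each past rail project in the reference class we record the
# ratio of actual to forecast passenger numbers; the new project's
# raw forecast is then scaled by a chosen percentile of that
# empirical distribution.

def adjusted_forecast(ratios, forecast, percentile=50):
    """Scale `forecast` by the given percentile of the
    actual/forecast ratios observed in the reference class."""
    ordered = sorted(ratios)
    # Index of the chosen percentile in the ordered sample
    k = min(len(ordered) - 1, int(percentile / 100 * len(ordered)))
    return forecast * ordered[k]

# Hypothetical reference class: rail projects where demand was
# typically overestimated, so most ratios fall below 1.0.
ratios = [0.35, 0.48, 0.52, 0.60, 0.75, 0.90, 1.05]
raw_forecast = 10_000_000  # forecast annual passengers (hypothetical)

median_adjusted = adjusted_forecast(ratios, raw_forecast, percentile=50)
print(f"Median-adjusted forecast: {median_adjusted:,.0f} passengers")
```

In practice the adjustment is often taken at a conservative percentile rather than the median, so that the corrected figure reflects the downside risk the reference class reveals rather than the optimism of the original forecast.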

The reference class method relies on the existence of comparable data, which might apply to individual projects, but works less well for bigger questions such as how a city’s economy and infrastructure will evolve in the future. A drawback of studying the future is that we don’t have much in the way of data for it.

Another approach, which also attempts to correct for cognitive biases, is the method known as scenario forecasting. Its use in business was pioneered by Shell, which credited it with preparing the company for the oil price shocks of the 1970s, and it is becoming increasingly widely used. Usually a small number of scenarios – two to four – are chosen to represent extreme cases.

This helps separate the scenarios from each other (and also accounts for the fact that the future often does turn out to be extreme). Mathematical models can also be used to flesh out the details of the scenarios and check for consistency without masquerading as predictions.

As an example, one project I took part in used this technique to create scenarios for the US Department of Transportation that described what the US transport system might look like in 30 or 40 years’ time. However, such scenario methods are probably better suited as a way of thinking about general possibilities, rather than for forecasting the success of a specific project.

For those, it may be better to remember John Maynard Keynes’ admission that: ‘If we speak frankly, we have to admit that our basis of knowledge for estimating the yield 10 years hence of a railway, a copper mine, a textile factory, the goodwill of a patent medicine, an Atlantic liner, a building in the City of London amounts to little and sometimes to nothing.’ Of course that is no excuse to stop building railways or inventing medicines – but we should be open about the uncertainties involved, and remember that forecasts tend to rapidly go off the rails.